Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki
2015-01-01
This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645
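As a rough illustration of step (iv), the sketch below estimates a shift vector between two feature images by phase correlation. The array names and the FFT-based formulation are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of phase correlation matching between two feature images,
# assuming `left_feat` and `right_feat` are equally sized 2-D numpy arrays
# (names are illustrative, not from the paper).
import numpy as np

def phase_correlation_shift(left_feat, right_feat):
    # Cross-power spectrum normalised to unit magnitude keeps only phase.
    F1 = np.fft.fft2(left_feat)
    F2 = np.fft.fft2(right_feat)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # avoid division by zero
    corr = np.fft.ifft2(cross_power).real
    # The peak location gives the integer phase-shift vector.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:          # wrap shifts larger than half the image size
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

The peak of the inverse-transformed cross-power spectrum gives an integer-pixel shift; a practical AF system would add sub-pixel refinement around that peak.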
Extraction of ECG signal with adaptive filter for heart abnormalities detection
NASA Astrophysics Data System (ADS)
Turnip, Mardi; Saragih, Rijois. I. E.; Dharma, Abdi; Esti Kusumandari, Dwi; Turnip, Arjon; Sitanggang, Delima; Aisyah, Siti
2018-04-01
This paper demonstrates an adaptive filter method for extraction of electrocardiogram (ECG) features in heart abnormality detection. In particular, an electrocardiogram (ECG) is a recording of the heart's electrical activity, capturing a tracing of the cardiac electrical impulse as it moves from the atrium to the ventricles. The applied algorithm evaluates and analyzes ECG signals for abnormality detection based on the P, Q, R and S peaks. In the first phase, the real-time ECG data are acquired and pre-processed. In the second phase, the acquired ECG signal is subjected to a feature extraction process. The extracted features detect abnormal peaks present in the waveform. Thus, normal and abnormal ECG signals can be differentiated based on the extracted features.
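As a hedged illustration of peak-based ECG feature extraction of this kind, the SciPy sketch below band-passes an ECG trace and picks R peaks; the filter band, thresholds and variable names are assumptions, not the authors' settings.

```python
# Illustrative R-peak picking on a band-passed ECG trace (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    # Band-pass around the QRS energy band (roughly 5-15 Hz, assumed).
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # R peaks: dominant positive deflections at least 0.3 s apart (assumed).
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          height=0.5 * np.max(filtered))
    return peaks
```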
Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator
NASA Astrophysics Data System (ADS)
Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong
2011-04-01
In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely, voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient property of the electrical signals and further evaluate the bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals, such as the fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are separated into three phases, namely, envelope rising, stable, and damping phases, to extract the subtle waveform changes. Different waveform features are extracted from each phase of these subband envelopes. The principal component analysis (PCA) method is used for feature selection in order to remove redundant information and reduce the dimension of the original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify complex bond fault patterns. By analyzing experimental data with the proposed feature extraction method and neural network, the results demonstrate the advantages of the proposed feature extraction method and the constructed artificial neural network in detecting and identifying bond quality.
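The second step (sub-band envelopes via the Hilbert transform) can be sketched in a few lines; the filter design and band edges below are placeholders rather than the paper's actual filter bank.

```python
# Sketch: band-pass one electrical signal into a sub-band and take its
# Hilbert envelope. Band edges and filter order are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def subband_envelope(signal, fs, f_low, f_high):
    b, a = butter(4, [f_low / (fs / 2), f_high / (fs / 2)], btype="band")
    subband = filtfilt(b, a, signal)
    # Magnitude of the analytic signal = amplitude envelope of the sub-band.
    return np.abs(hilbert(subband))
```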
NASA Astrophysics Data System (ADS)
Patil, Sandeep Baburao; Sinha, G. R.
2017-02-01
In India, limited awareness of the deaf and hard-of-hearing community increases the communication gap between this community and the general population. Sign language is commonly developed for deaf and hard-of-hearing people to convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time taken by each phase and the number of features extracted for 26 ISL gestures.
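A minimal OpenCV sketch of SIFT keypoint and descriptor extraction, as one might apply it to a gesture image, follows; the file name is a placeholder and opencv-python 4.4+ (where `cv2.SIFT_create` is available) is assumed.

```python
# Illustrative SIFT extraction from a single gesture image (assumed file name).
import cv2

img = cv2.imread("gesture.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)  # 128-D descriptors
print(len(keypoints), "keypoints extracted")
```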
Lavine, B K; Brzozowski, D M; Ritter, J; Moores, A J; Mayfield, H T
2001-12-01
The water-soluble fraction of aviation jet fuels is examined using solid-phase extraction and solid-phase microextraction. Gas chromatographic profiles of solid-phase extracts and solid-phase microextracts of the water-soluble fraction of kerosene- and nonkerosene-based jet fuels reveal that each jet fuel possesses a unique profile. Pattern recognition analysis reveals fingerprint patterns within the data characteristic of fuel type. By using a novel genetic algorithm (GA) that emulates human pattern recognition through machine learning, it is possible to identify features characteristic of the chromatographic profile of each fuel class. The pattern recognition GA identifies a set of features that optimize the separation of the fuel classes in a plot of the two largest principal components of the data. Because principal components maximize variance, the bulk of the information encoded by the selected features is primarily about the differences between the fuel classes.
Multiple feature extraction by using simultaneous wavelet transforms
NASA Astrophysics Data System (ADS)
Mazzaferri, Javier; Ledesma, Silvia; Iemmi, Claudio
2003-07-01
We propose here a method to optically perform multiple feature extraction by using wavelet transforms. The method is based on obtaining the optical correlation by means of a Vander Lugt architecture, where the scene and the filter are displayed on spatial light modulators (SLMs). Multiple phase filters containing the information about the features that we are interested in extracting are designed and then displayed on an SLM working in phase mostly mode. We have designed filters to simultaneously detect edges and corners or different characteristic frequencies contained in the input scene. Simulated and experimental results are shown.
NASA Technical Reports Server (NTRS)
Thomas, Jr., Jess B. (Inventor)
1991-01-01
An improved digital phase lock loop incorporates several distinctive features that attain better performance at high loop gain and better phase accuracy. These features include: phase feedback to a number-controlled oscillator in addition to phase rate; analytical tracking of phase (both integer and fractional cycles); an amplitude-insensitive phase extractor; a more accurate method for extracting measured phase; a method for changing loop gain during a track without loss of lock; and a method for avoiding loss of sampled data during computation delay, while maintaining excellent tracking performance. The advantages of using phase and phase-rate feedback are demonstrated by comparing performance with that of rate-only feedback. Extraction of phase by the method of modeling provides accurate phase measurements even when the number-controlled oscillator phase is discontinuously updated.
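A toy numerical sketch of the phase-plus-rate feedback idea (not the patented design itself) is given below; the sample rate, loop gains and input signal are all assumed values chosen only to show the loop locking.

```python
# Toy second-order digital loop: both phase and phase-rate corrections are fed
# back to a number-controlled oscillator (NCO). All parameters are assumptions.
import numpy as np

fs = 1000.0                                  # sample rate (Hz), illustrative
f_in = 50.0                                  # input tone frequency (Hz)
n = np.arange(2000)
x = np.exp(1j * 2 * np.pi * f_in * n / fs)   # clean complex input tone

k_p, k_r = 0.2, 0.01                         # phase and phase-rate loop gains
nco_phase, nco_rate = 0.0, 2 * np.pi * 45.0 / fs   # start off-frequency

for sample in x:
    # Phase extractor: residual phase between input and the NCO replica.
    err = np.angle(sample * np.exp(-1j * nco_phase))
    nco_rate += k_r * err                    # rate (frequency) feedback
    nco_phase += nco_rate + k_p * err        # phase feedback plus model update

print("locked rate (Hz):", nco_rate * fs / (2 * np.pi))
```

With these gains the loop pulls the NCO from 45 Hz to the 50 Hz input, illustrating why adding direct phase feedback converges faster than rate-only feedback.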
NASA Astrophysics Data System (ADS)
Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.
1988-12-01
The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.
NASA Astrophysics Data System (ADS)
Conner, Gary D.; Milgram, David L.; Lawton, Daryl T.; McConnell, Christopher C.
1988-04-01
The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze linear features from synthetic aperture radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This Option 1 Final Report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.
A new method for recognizing hand configurations of Brazilian gesture language.
Costa Filho, C F F; Dos Santos, B L; de Souza, R S; Dos Santos, J R; Costa, M G F
2016-08-01
This paper describes a new method for recognizing hand configurations of the Brazilian Gesture Language - LIBRAS - using depth maps obtained with a Kinect® camera. The proposed method comprises three phases: hand segmentation, feature extraction, and classification. The segmentation phase is independent of the background and depends only on pixel depth information. Using geometric operations and numerical normalization, the feature extraction process is made independent of rotation and translation. The features are extracted employing two techniques: (2D)2LDA and (2D)2PCA. The classification is made with a novelty classifier. A robust database was constructed for classifier evaluation, with 12,200 LIBRAS images, 200 for each hand configuration. The best accuracy obtained was 95.41%, which was greater than previous values obtained in the literature.
Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction.
Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn
2017-12-01
The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health-care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing amounts of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times, compared to the previous CPU system.
Lithgow, Brian J; Moussavi, Zahra
2018-06-05
Electrovestibulography (EVestG) recordings have been previously applied toward classifying and/or measuring the severity of several neurological disorders including depression with and without anxiety. This study's objectives were to: (1) extract EVestG features representing physiological differences of healthy women during their menses, and follicular and luteal phases of their menstrual cycle, and (2) compare these features to those observed in previous studies for depression with and without anxiety. Three EVestG recordings were made on 15 young healthy menstruating females during menses, and follicular and luteal phases. Three features were extracted, using the shape and timing of the detected spontaneously evoked vestibulo-acoustic field potentials. Using these features, a 3-way separation of the 3 phases was achieved, with a leave-one-out cross-validation, resulting in accuracy of > 72%. Using an EVestG shape feature, separation of the follicular and luteal phases was achieved with a leave-one-out cross-validation accuracy of > 93%. The mechanism of separation was not like that in previous depression analyses, and is postulated to be more akin to a form of anxiety and/or progesterone sensitivity. © 2018 S. Karger AG, Basel.
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data to extract the absolute phase value, removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated in its application to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.
Diabetic Retinopathy Screening by Bright Lesions Extraction from Fundus Images
NASA Astrophysics Data System (ADS)
Hanđsková, Veronika; Pavlovičova, Jarmila; Oravec, Miloš; Blaško, Radoslav
2013-09-01
Retinal images are nowadays widely used to diagnose many diseases, for example diabetic retinopathy. In our work, we propose an algorithm for a screening application that identifies patients with a severe diabetic complication, diabetic retinopathy, in its early phase. The application uses the patient's fundus photography without any additional examination by an ophthalmologist. After this screening identification, other examination methods should be considered and the patient's follow-up by a doctor is necessary. Our application is composed of three principal modules: fundus image preprocessing, feature extraction and feature classification. The image preprocessing module performs luminance normalization, contrast enhancement and optic disc masking. The feature extraction module includes two stages: bright lesion candidate localization and candidate feature extraction. We selected 16 statistical and structural features. For feature classification, we use a multilayer perceptron (MLP) with one hidden layer and classify images into two classes. The feature classification accuracy is about 93 percent.
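As an illustration of the classification module, the sketch below trains a one-hidden-layer MLP on 16-dimensional feature vectors with scikit-learn; the data are synthetic stand-ins, not the paper's fundus features.

```python
# Minimal one-hidden-layer MLP over 16-D lesion-candidate features (synthetic).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # 16 statistical/structural features
y = rng.integers(0, 2, size=200)        # bright lesion vs. non-lesion (assumed)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```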
Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Diemoz, Paul C.; Wismüller, Axel
2015-01-01
Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns. PMID:25710875
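A hedged sketch of the reduce-then-classify pipeline (linear dimension reduction, SVR scoring, AUC evaluation) is given below using scikit-learn; the feature values are synthetic and the paper's SIM-derived features and tuning are not reproduced.

```python
# Sketch: PCA to 2-D, support vector regression, AUC scoring on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 9))        # stand-in 9-D geometric features per VOI
y = rng.integers(0, 2, size=300)     # 0 = healthy, 1 = osteoarthritic (assumed)

X2 = PCA(n_components=2).fit_transform(X)     # linear dimension reduction
scores = SVR(kernel="rbf").fit(X2, y).predict(X2)
print("AUC:", roc_auc_score(y, scores))
```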
Liquid-gas phase transition in asymmetric nuclear matter at finite temperature
NASA Astrophysics Data System (ADS)
Maruyama, Toshiki; Tatsumi, Toshitaka; Chiba, Satoshi
2010-03-01
The liquid-gas phase transition is discussed in warm asymmetric nuclear matter. Some peculiar features are elucidated from the viewpoint of the basic thermodynamics of phase equilibrium. We treat the mixed phase of the binary system based on the Gibbs conditions. When the Coulomb interaction is included, the mixed phase is no longer uniform and a sequence of pasta structures appears. Comparing the results with those given by a simple bulk calculation without the Coulomb interaction, we extract specific features of the pasta structures at finite temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state are also extracted. The features can be used to detect abnormal signals. This algorithm is developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.
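An illustrative sketch of envelope and start-time extraction for one phase signal is shown below; the Hilbert-envelope formulation and the threshold are assumptions, not the OSTI algorithm itself.

```python
# Sketch: estimate a signal envelope and flag the start time as the first sample
# whose envelope exceeds a fraction of the steady-state level (assumed rule).
import numpy as np
from scipy.signal import hilbert

def start_time_and_envelope(phase_signal, fs, threshold=0.2):
    envelope = np.abs(hilbert(phase_signal))
    steady_state = np.median(envelope[len(envelope) // 2:])  # assume steady tail
    above = np.nonzero(envelope > threshold * steady_state)[0]
    start_time = above[0] / fs if above.size else None
    return start_time, envelope, steady_state
```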
Conversation Thread Extraction and Topic Detection in Text-Based Chat
2008-09-01
Multiple conversations in a chat session are interleaved; the goal of conversation thread extraction is to select only those posts that belong to a given conversation. First-phase experiments clearly show the value of using time-distance as a feature in conversation thread extraction. (Thesis by Paige Holland Adams, September 2008.)
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the regions of interest. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.
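The HOG half of the feature extractor can be sketched with scikit-image as below; the SGF half, the auto-encoder and RELM stages, and all parameter values are omitted or assumed.

```python
# Minimal HOG feature extraction from a segmented palmprint ROI (assumed file).
from skimage import io, color
from skimage.feature import hog

palm = color.rgb2gray(io.imread("palm_roi.png"))   # assumed RGB input image
features = hog(palm, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm="L2-Hys")
print("HOG feature length:", features.shape[0])
```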
Digital Phase-Locked Loop With Phase And Frequency Feedback
NASA Technical Reports Server (NTRS)
Thomas, J. Brooks
1991-01-01
An advanced design for a digital phase-lock loop (DPLL) allows loop gains higher than those used in other designs. The design is divided into two major components: a counterrotation processor and a tracking processor. Notable features include use of both phase and rate-of-change-of-phase feedback instead of frequency feedback alone, a normalized sine phase extractor, an improved method for extracting measured phase, and an improved method for "compressing" output rate.
An approach for automatic classification of grouper vocalizations with passive acoustic monitoring.
Ibrahim, Ali K; Chérubin, Laurent M; Zhuang, Hanqi; Schärer Umpierre, Michelle T; Dalgleish, Fraser; Erdol, Nurgun; Ouyang, B; Dalgleish, A
2018-02-01
Grouper, a family of marine fishes, produce distinct vocalizations associated with their reproductive behavior during spawning aggregation. These low-frequency sounds (50-350 Hz) consist of a series of pulses repeated at a variable rate. In this paper, an approach is presented for automatic classification of grouper vocalizations from ambient sounds recorded in situ with fixed hydrophones, based on weighted features and a sparse classifier. Grouper sounds were initially labeled by humans for training and testing various feature extraction and classification methods. In the feature extraction phase, four types of features were used to characterize the sounds produced by groupers. Once the sound features were extracted, three types of representative classifiers were applied to categorize the species that produced these sounds. Experimental results showed that the best combination, weighted mel-frequency cepstral coefficients as the feature extractor with a sparse classifier, achieved 82.7% identification accuracy. The proposed algorithm has been implemented in an autonomous platform (wave glider) for real-time detection and classification of grouper vocalizations.
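As a hedged illustration of the front end, the sketch below extracts MFCCs from a hydrophone recording with librosa, restricted to the 50-350 Hz band mentioned above; the file name and coefficient count are placeholders and the paper's weighting scheme is not shown.

```python
# Illustrative MFCC extraction from an in-situ recording (assumed file name).
import librosa

audio, sr = librosa.load("grouper_call.wav", sr=None)
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13, fmin=50, fmax=350)
print("MFCC matrix shape:", mfcc.shape)   # (n_mfcc, n_frames)
```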
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, which include line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land and water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HSR images with good performance.
Elahian, Bahareh; Yeasin, Mohammed; Mudigoudar, Basanagoud; Wheless, James W; Babajani-Feremi, Abbas
2017-10-01
Using a novel technique based on phase locking value (PLV), we investigated the potential for features extracted from electrocorticographic (ECoG) recordings to serve as biomarkers to identify the seizure onset zone (SOZ). We computed the PLV between the phase of the amplitude of high gamma activity (80-150 Hz) and the phase of lower frequency rhythms (4-30 Hz) from ECoG recordings obtained from 10 patients with epilepsy (21 seizures). We extracted five features from the PLV and used a machine learning approach based on logistic regression to build a model that classifies electrodes as SOZ or non-SOZ. More than 96% of electrodes identified as the SOZ by our algorithm were within the resected area in six seizure-free patients. In four non-seizure-free patients, more than 31% of the identified SOZ electrodes by our algorithm were outside the resected area. In addition, we observed that the seizure outcome in non-seizure-free patients correlated with the number of non-resected SOZ electrodes identified by our algorithm. This machine learning approach, based on features extracted from the PLV, effectively identified electrodes within the SOZ. The approach has the potential to assist clinicians in surgical decision-making when pre-surgical intracranial recordings are utilized. Copyright © 2017 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
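A minimal sketch of the cross-frequency phase locking value described above (phase of the high-gamma amplitude envelope versus phase of the 4-30 Hz rhythm) follows; the plain Butterworth/Hilbert pipeline and filter order are assumptions.

```python
# Sketch of a cross-frequency PLV for one ECoG channel (bands from the paper,
# everything else assumed).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def cross_frequency_plv(ecog, fs):
    high_gamma = bandpass(ecog, fs, 80, 150)
    low_rhythm = bandpass(ecog, fs, 4, 30)
    # Phase of the high-gamma amplitude envelope vs. phase of the low rhythm.
    env_phase = np.angle(hilbert(np.abs(hilbert(high_gamma))))
    low_phase = np.angle(hilbert(low_rhythm))
    return np.abs(np.mean(np.exp(1j * (env_phase - low_phase))))
```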
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang’s method to segment only the region of interests. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
NASA Astrophysics Data System (ADS)
Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.
2017-10-01
A quasi real-time inversion strategy is presented for groove characterization of a conductive non-ferromagnetic tube structure by exploiting eddy current testing (ECT) signals. The inversion problem has been formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the framework of LBE, an efficient training strategy has been adopted that combines feature extraction and a customized version of output space filling (OSF) adaptive sampling in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) have been exploited for feature extraction and prediction, respectively, to achieve robust and accurate real-time inversion during the online phase.
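A hedged sketch of the offline/online split, with PLS feature extraction and an SVR predictor from scikit-learn, is shown below; the synthetic signals and groove-depth target are stand-ins for the simulated ECT training set.

```python
# Sketch: compress ECT signals with PLS offline, train an SVR surrogate, then
# invert a new signal online. All data and dimensions are assumed.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(2)
signals = rng.normal(size=(400, 128))            # simulated ECT signals (offline)
groove_depth = rng.uniform(0.1, 2.0, size=400)   # target parameter (assumed, mm)

pls = PLSRegression(n_components=5).fit(signals, groove_depth)
features = pls.transform(signals)                        # feature extraction
model = SVR(kernel="rbf").fit(features, groove_depth)    # offline training

new_signal = rng.normal(size=(1, 128))                   # measurement at run time
print("predicted depth:", model.predict(pls.transform(new_signal))[0])
```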
A Novel Multi-Class Ensemble Model for Classifying Imbalanced Biomedical Datasets
NASA Astrophysics Data System (ADS)
Bikku, Thulasi; Sambasiva Rao, N., Dr; Rao, Akepogu Ananda, Dr
2017-08-01
This paper mainly focuses on developing a Hadoop-based framework for feature selection and classification models to classify high-dimensional data in heterogeneous biomedical databases. Extensive research has been performed in the fields of machine learning, big data and data mining for identifying patterns. The main challenge is extracting useful features generated from diverse biological systems. The proposed model can be used for predicting diseases in various applications and identifying the features relevant to particular diseases. Given the exponential growth of biomedical repositories such as PubMed and Medline, an accurate predictive model is essential for knowledge discovery in a Hadoop environment. Extracting key features from unstructured documents often leads to uncertain results due to outliers and missing values. In this paper, we propose a two-phase map-reduce framework with a text preprocessor and a classification model. In the first phase, a mapper-based preprocessing method was designed to eliminate irrelevant features, missing values and outliers from the biomedical data. In the second phase, a Map-Reduce based multi-class ensemble decision tree model was designed and applied to the preprocessed mapper data to improve the true positive rate and computational time. The experimental results on complex biomedical datasets show that our proposed Hadoop-based multi-class ensemble model significantly outperforms state-of-the-art baselines.
Fiscal-Ladino, Jhon A; Obando-Ceballos, Mónica; Rosero-Moreano, Milton; Montaño, Diego F; Cardona, Wilson; Giraldo, Luis F; Richter, Pablo
2017-02-08
Montmorillonite (MMT) clays were modified by the intercalation into their galleries of ionic liquids (IL) based on imidazolium quaternary ammonium salts. These new eco-materials exhibited good features for use as a sorptive phase in the extraction of low-polarity analytes from aqueous samples. Spectroscopic analyses of the modified clays were conducted and revealed an increase in the basal spacing and a shifting of the reflection plane towards lower values as a consequence of the effective intercalation of organic cations into the MMT structure. The novel sorbent developed herein was assayed as the sorptive phase in rotating-disk sorptive extraction (RDSE), using polychlorinated biphenyls (PCBs), representative of low-polarity pollutants, as model analytes. The final determination was made by gas chromatography with electron capture detection. Among the synthesized sorptive phases, the system selected for analytical purposes consisted of MMT modified with the 1-hexadecyl-3-methylimidazolium bromide (HDMIM-Br) IL. Satisfactory analytical features were achieved using a sample volume of 5 mL: the relative recoveries from a wastewater sample were higher than 80%, the detection limits were between 3 ng L-1 and 43 ng L-1, the precision (within-run precision) expressed as the relative standard deviation ranged from 2% to 24%, and the enrichment factors ranged between 18 and 28. Using RDSE, the extraction efficiency achieved for the selected MMT-HDMIM-Br phase was compared with other commercial solid phases/supports, such as polypropylene, polypropylene with 1-octanol (as a supported liquid membrane), octadecyl (C18) and octyl (C8), and showed the highest response for all the studied analytes. Under the optimized extraction conditions, this new device was applied in the analysis of the influent of a wastewater treatment plant in Santiago (Chile), demonstrating its applicability through the good recoveries and precision achieved with real samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Automated Extraction of Secondary Flow Features
NASA Technical Reports Server (NTRS)
Dorney, Suzanne M.; Haimes, Robert
2005-01-01
The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper will present a definition for secondary flow and one approach for automatically detecting and visualizing secondary flow.
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral images (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into the bilateral filter to smooth the HSI; this strategy can effectively attenuate noise and restore texture information. Meanwhile, high-quality spectral-spatial features can be extracted from the HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher-order statistics techniques are introduced into hyperspectral data classification, for the first time, to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectra shapes. To this end, a feature-level fusion is applied to the extracted spectral-spatial features along with the higher-order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlinger, C.; Belloni, L.; Zemb, T.
1999-03-30
Using small angle X-ray scattering, conductivity, and phase behavior determination, the authors show that concentrated solutions of malonamide extractants, dimethyldibutyltetradecylmalonamide (DMDBTDMA), are organized in reverse oligomeric aggregates which have many features in common with reverse micelles. The aggregation numbers of these reverse globular aggregates as well as their interaction potential are determined from absolute scattering curves. An attractive interaction is responsible for the demixing of the oil phase when in equilibrium with excess oil. Prediction of conductivity as well as the formation conditions for the third phase is possible using standard liquid theory applied to the extractant aggregates. The interactions, modeled with the sticky sphere model proposed by Baxter, are shown to be due to steric interactions resulting from the hydrophobic tails of the extractant molecule and van der Waals forces between the highly polarizable water cores of the reverse micelles. The attractive interaction in the oil phase, equilibrated with water, is determined as a function of temperature, extractant molecule concentration, and proton and neodymium(III) cation concentration. It is shown that van der Waals interactions, with an effective Hamaker constant of 3kT, quantitatively explain the behavior of DMDBTDMA in n-dodecane in terms of scattering as well as phase stability limits.
Klijn, Marieke E; Hubbuch, Jürgen
2018-04-27
Protein phase diagrams are a tool to investigate the cause and consequence of solution conditions on protein phase behavior. The effects are scored according to aggregation morphologies such as crystals or amorphous precipitates. Solution conditions affect morphological features, such as crystal size, as well as kinetic features, such as crystal growth time. Commonly used data visualization techniques include individual line graphs or symbol-based phase diagrams. These techniques have limitations in terms of handling large datasets, comprehensiveness or completeness. To eliminate these limitations, morphological and kinetic features obtained from crystallization images generated with high-throughput microbatch experiments have been visualized with radar charts in combination with the empirical phase diagram (EPD) method. Morphological features (crystal size, shape, and number, as well as precipitate size) and kinetic features (crystal and precipitate onset and growth times) were extracted for 768 solutions with varying chicken egg white lysozyme concentration, salt type, ionic strength and pH. Image-based aggregation morphology and kinetic features were compiled into a single and easily interpretable figure, thereby showing that the EPD method can support high-throughput crystallization experiments in terms of both data amount and data complexity. Copyright © 2018. Published by Elsevier Inc.
Mixed monofunctional extractants for trivalent actinide/lanthanide separations: TALSPEAK-MME
Johnson, Aaron T.; Nash, Kenneth L.
2015-08-20
The basic features of an f-element extraction process based on a solvent composed of equimolar mixtures of Cyanex-923 (a mixed trialkyl phosphine oxide) and 2-ethylhexylphosphonic acid mono-2-ethylhexyl ester (HEH[EHP]) extractants in n-dodecane are investigated in this report. This system, which combines features of the TRPO and TALSPEAK processes, is based on co-extraction of trivalent lanthanides and actinides from 0.1 to 1.0 M HNO3 followed by application of a buffered aminopolycarboxylate solution strip to accomplish a Reverse TALSPEAK selective removal of actinides. This mixed-extractant medium could enable a simplified approach to selective trivalent f-element extraction and actinide partitioning in a single process. As compared with other combined process applications in development for more compact actinide partitioning processes (DIAMEX-SANEX, GANEX, TRUSPEAK, ALSEP), this combination features only monofunctional extractants with high solubility limits and comparatively low molar mass. Selective actinide stripping from the loaded extractant phase is done using a glycine-buffered solution containing N-(2-hydroxyethyl)ethylenediaminetriacetic acid (HEDTA) or triethylenetetramine-N,N,N',N'',N''',N'''-hexaacetic acid (TTHA). Lastly, the results reported provide evidence for simplified interactions between the two extractants and demonstrate a pathway toward using mixed monofunctional extractants to separate trivalent actinides (An) from fission product lanthanides (Ln).
On the application of neural networks to the classification of phase modulated waveforms
NASA Astrophysics Data System (ADS)
Buchenroth, Anthony; Yim, Joong Gon; Nowak, Michael; Chakravarthy, Vasu
2017-04-01
Accurate classification of phase-modulated radar waveforms is a well-known problem in spectrum sensing. Identification of such waveforms aids situational awareness, enabling radar and communications spectrum sharing. While various feature extraction and engineering approaches have sought to address this problem, the choice of a machine learning algorithm that best utilizes these features becomes foremost. In this effort, a standard shallow learning approach and a deep learning approach are compared. Experiments provide insights into classifier architecture, training procedure, and performance.
NASA Astrophysics Data System (ADS)
Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.
2018-03-01
This paper proposes an automatic facial emotion recognition algorithm that comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. First, for the 6 emotions considered, the system classifies all training expressions into 6 classes (one for each emotion) during the training stage. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the result to the trained neural architecture.
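A minimal OpenCV sketch of sampling Gabor magnitude responses at one fiducial point is given below; the kernel parameters, number of orientations and sampling location are assumptions, not the paper's filter bank.

```python
# Illustrative Gabor magnitude responses at one point of a face image
# (assumed file name and parameters).
import cv2
import numpy as np

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
h, w = face.shape
magnitudes = []
for theta in np.arange(0, np.pi, np.pi / 8):          # 8 orientations, assumed
    k_re = cv2.getGaborKernel((31, 31), 4.0, theta, 10.0, 0.5, psi=0)
    k_im = cv2.getGaborKernel((31, 31), 4.0, theta, 10.0, 0.5, psi=np.pi / 2)
    re = cv2.filter2D(face, cv2.CV_32F, k_re)[h // 2, w // 2]
    im = cv2.filter2D(face, cv2.CV_32F, k_im)[h // 2, w // 2]
    magnitudes.append(float(np.hypot(re, im)))        # Gabor magnitude at point
print("Gabor magnitudes at the sampled point:", magnitudes)
```

In a full system these magnitudes would be collected at every fiducial point and concatenated with the FAP values to form the feature vector fed to the classifier.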
Shanthi, C; Pappa, N
2017-05-01
Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow and stratified flow, are recorded for a period and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM, and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two schemes. The results of this study cover industrial application needs, including oil and gas and other gas-liquid two-phase flows. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
FAST TRACK COMMUNICATION: Spontaneous symmetry breaking in a bridge model fed by junctions
NASA Astrophysics Data System (ADS)
Popkov, Vladislav; Evans, Martin R.; Mukamel, David
2008-10-01
We introduce a class of 1D models mimicking a single-lane bridge with two junctions and two particle species driven in opposite directions. The model exhibits spontaneous symmetry breaking (SSB) for a range of injection/extraction rates. In this phase the steady-state currents of the two species are not equal. Moreover, there is a co-existence region in which the symmetry-broken phase co-exists with a symmetric phase. Along a path in which the extraction rate is varied, keeping the injection rate fixed and large, hysteresis takes place. The mean-field phase diagram is calculated and supporting Monte Carlo simulations are presented. One of the transition lines exhibits a kink, a feature which cannot exist in transition lines of equilibrium phase transitions.
Heart sounds analysis using probability assessment.
Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P
2017-07-31
This paper describes a method for automated discrimination of heart sound recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
NASA Astrophysics Data System (ADS)
Deng, Botao; Abidin, Anas Z.; D'Souza, Adora M.; Nagarajan, Mahesh B.; Coan, Paola; Wismüller, Axel
2017-03-01
The effectiveness of phase contrast X-ray computed tomography (PCI-CT) in visualizing human patellar cartilage matrix has been demonstrated due to its ability to capture soft tissue contrast on a micrometer resolution scale. Recent studies have shown that off-the-shelf Convolutional Neural Network (CNN) features learned from a nonmedical data set can be used for medical image classification. In this paper, we investigate the ability of features extracted from two different CNNs to characterize chondrocyte patterns in the cartilage matrix. We obtained features from 842 regions of interest annotated on PCI-CT images of human patellar cartilage using CaffeNet and the Inception-v3 network, which were then used in a machine learning task involving support vector machines with a radial basis function kernel to classify the ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area (AUC) under the Receiver Operating Characteristic (ROC) curve. The best classification performance was observed with features from the Inception-v3 network (AUC = 0.95), which outperforms features extracted from CaffeNet (AUC = 0.91). These results suggest that such characterization of chondrocyte patterns using features from internal layers of CNNs can be used to distinguish between healthy and osteoarthritic tissue with high accuracy.
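A hedged sketch of the off-the-shelf-CNN-plus-SVM idea follows, using a torchvision ResNet-18 as a stand-in for the CaffeNet/Inception-v3 networks in the paper; image paths, labels and the torchvision weights API (v0.13+) are assumptions.

```python
# Sketch: pretrained CNN penultimate-layer features fed to an RBF-kernel SVM.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()       # keep the 512-D penultimate features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def cnn_features(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0).numpy()

# roi_paths and labels are user-supplied (hypothetical names):
# X = [cnn_features(p) for p in roi_paths]
# clf = SVC(kernel="rbf").fit(X, labels)
```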
NASA Astrophysics Data System (ADS)
Iqtait, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Biometrics refers to pattern recognition systems used for automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of trait points is a complicated and difficult issue in face recognition. Cootes proposed the multi-resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement of ASM, the Active Appearance Model (AAM) algorithm was proposed to extract both the shape and texture of a specified object simultaneously. In this paper we give more details about the two algorithms and present the results of experiments testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate trait point localization than AAM, but AAM achieves a better match to the texture.
Opinion mining on book review using CNN-L2-SVM algorithm
NASA Astrophysics Data System (ADS)
Rozi, M. F.; Mukhlash, I.; Soetrisno; Kimura, M.
2018-03-01
The review of a product can reflect the quality of the product itself, and extracting information from that review reveals the sentiment of the opinion. The process of extracting useful information from user reviews is called opinion mining. A review extraction approach that is advancing rapidly nowadays is the deep learning model, which has been used by many researchers to obtain excellent performance in natural language processing. In this research, one deep learning model, the Convolutional Neural Network (CNN), is used for feature extraction, and an L2 Support Vector Machine (SVM) is used as the classifier. These methods are implemented to determine the sentiment of book review data. The results show state-of-the-art performance, with 83.23% accuracy in the training phase and 64.6% in the testing phase.
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in subtle feature extraction of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed, and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors formed the multi-dimensional eigenvectors as signal subtle features, which were used for radiation source signal recognition by the grey relation algorithm. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved fractal box-counting dimension algorithm can better extract the signal subtle distribution characteristics under different reconstruction phase space, and has a better recognition effect with good real-time performance.
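For orientation, the sketch below computes a plain (unimproved) box-counting dimension of a 1-D signal treated as a curve on a grid; it illustrates only the baseline idea, not the dual improved generalized variant proposed here, and the box sizes are assumed.

```python
# Plain box-counting dimension estimate for a 1-D signal (illustrative only).
import numpy as np

def box_counting_dimension(signal, box_sizes=(2, 4, 8, 16, 32)):
    # Normalise the curve onto a square grid of side len(signal).
    n = len(signal)
    y = np.interp(signal, (signal.min(), signal.max()), (0, n - 1)).astype(int)
    counts = []
    for s in box_sizes:
        boxes = {(i // s, y[i] // s) for i in range(n)}  # occupied boxes
        counts.append(len(boxes))
    # Slope of log(count) vs. log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

print(box_counting_dimension(np.sin(np.linspace(0, 20 * np.pi, 2048))))
```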
Resonance-Based Time-Frequency Manifold for Feature Extraction of Ship-Radiated Noise
Yan, Jiaquan; Sun, Haixin; Chen, Hailan; Junejo, Naveed Ur Rehman; Cheng, En
2018-01-01
In this paper, a novel time-frequency signature using resonance-based sparse signal decomposition (RSSD), phase space reconstruction (PSR), time-frequency distribution (TFD) and manifold learning is proposed for feature extraction of ship-radiated noise, which is called resonance-based time-frequency manifold (RTFM). This is suitable for analyzing signals with oscillatory, non-stationary and non-linear characteristics in a situation of serious noise pollution. Unlike the traditional methods which are sensitive to noise and just consider one side of oscillatory, non-stationary and non-linear characteristics, the proposed RTFM can provide the intact feature signature of all these characteristics in the form of a time-frequency signature by the following steps: first, RSSD is employed on the raw signal to extract the high-oscillatory component and abandon the low-oscillatory component. Second, PSR is performed on the high-oscillatory component to map the one-dimensional signal to the high-dimensional phase space. Third, TFD is employed to reveal non-stationary information in the phase space. Finally, manifold learning is applied to the TFDs to fetch the intrinsic non-linear manifold. A proportional addition of the top two RTFMs is adopted to produce the improved RTFM signature. All of the case studies are validated on real audio recordings of ship-radiated noise. Case studies of ship-radiated noise on different datasets and various degrees of noise pollution manifest the effectiveness and robustness of the proposed method. PMID:29565288
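The phase space reconstruction (PSR) step can be sketched as a simple time-delay embedding; the embedding dimension and delay below are placeholders, not values from the paper.

```python
# Minimal time-delay embedding of a 1-D component into an m-dimensional
# trajectory matrix (assumed dimension and delay).
import numpy as np

def delay_embedding(x, dim=3, tau=5):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

signal = np.sin(np.linspace(0, 40 * np.pi, 4000))     # stand-in for the component
trajectory = delay_embedding(signal, dim=3, tau=25)
print("embedded trajectory shape:", trajectory.shape)  # (n_points, dim)
```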
Artificial bee colony algorithm for single-trial electroencephalogram analysis.
Hsu, Wei-Yen; Hu, Ya-Ping
2015-04-01
In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model, and coherence and phase-locking value, are then extracted for subsequent classification. Next, artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, selected subfeatures are classified by support vector machine. Comparing with and without artifact removal and feature selection, using a genetic algorithm on single-trial EEG data for 6 subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.
Assessing Footwear Effects from Principal Features of Plantar Loading during Running.
Trudeau, Matthieu B; von Tscharner, Vinzenz; Vienneau, Jordyn; Hoerzer, Stefan; Nigg, Benno M
2015-09-01
The effects of footwear on the musculoskeletal system are commonly assessed by interpreting the resultant force at the foot during the stance phase of running. However, this approach overlooks loading patterns across the entire foot. An alternative technique for assessing foot loading across different footwear conditions is possible using comprehensive analysis tools that extract different foot loading features, thus enhancing the functional interpretation of the differences across different interventions. The purpose of this article was to use pattern recognition techniques to develop and use a novel comprehensive method for assessing the effects of different footwear interventions on plantar loading. A principal component analysis was used to extract different loading features from the stance phase of running, and a support vector machine (SVM) was used to determine whether and how these loading features were different across three shoe conditions. The results revealed distinct loading features at the foot during the stance phase of running. The loading features determined from the principal component analysis allowed successful classification of all three shoe conditions using the SVM. Several differences were found in the location and timing of the loading across each pairwise shoe comparison using the output from the SVM. The analysis approach proposed can successfully be used to compare different loading patterns with a much greater resolution than has been reported previously. This study has several important applications. One such application is that it would not be relevant for a user to select a shoe, or for a manufacturer to alter a shoe's construction, if the classification across shoe conditions were not significant.
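As a rough illustration of the pipeline (principal component scores as loading features, then an SVM across shoe conditions), the sketch below uses synthetic stance-phase waveforms; the number of components, the kernel, and the data are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 90 trials x 101 time-normalised stance-phase samples,
# with three "shoe" conditions that slightly rescale the loading waveform.
t = np.linspace(0, 1, 101)
base = np.exp(-((t - 0.3) ** 2) / 0.01) + 0.8 * np.exp(-((t - 0.75) ** 2) / 0.02)
X, y = [], []
for shoe in range(3):
    for _ in range(30):
        X.append(base * (1 + 0.05 * shoe) + 0.02 * rng.standard_normal(t.size))
        y.append(shoe)
X, y = np.asarray(X), np.asarray(y)

# Loading "features" = principal component scores; the SVM separates the shoes.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```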
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset. © 2013 ISA. Published by ISA. All rights reserved.
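A minimal sketch of the first two steps, reconstructing the phase space from a single phase-current signal by time-delay embedding and rasterising the trajectory into an image, is given below; the delay, embedding dimension, image size, and toy current signal are assumptions, and the boundary detection and fuzzy decision tree are not reproduced.

```python
import numpy as np

def delay_embedding(x, delay=10, dim=2):
    """Time-delay reconstruction of the phase space from one current signal."""
    n = x.size - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

def phase_space_image(x, delay=10, size=128):
    """Rasterise the 2-D reconstructed trajectory into a binary image."""
    pts = delay_embedding(x, delay=delay, dim=2)
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)
    idx = np.minimum((pts * (size - 1)).astype(int), size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    img[idx[:, 1], idx[:, 0]] = 1
    return img

# Toy stator-current signal: 50 Hz fundamental plus a small fault-like sideband.
fs = 5000.0
t = np.arange(0, 1, 1 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 45 * t)
image = phase_space_image(current)
print(image.shape, image.sum())
```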
NASA Astrophysics Data System (ADS)
Kistrup, Kasper; Skotte Sørensen, Karen; Wolff, Anders; Fougt Hansen, Mikkel
2015-04-01
We present an all-polymer, single-use microfluidic chip system produced by injection moulding and bonded by ultrasonic welding. Both techniques are compatible with low-cost industrial mass-production. The chip is produced for magnetic bead-based solid-phase extraction facilitated by immiscible phase filtration and features passive liquid filling and magnetic bead manipulation using an external magnet. In this work, we determine the system compatibility with various surfactants. Moreover, we quantify the volume of liquid co-transported with magnetic bead clusters from Milli-Q water or a lysis-binding buffer for nucleic acid extraction (0.1 (v/v)% Triton X-100 in 5 M guanidine hydrochloride). A linear relationship was found between the liquid carry-over and mass of magnetic beads used. Interestingly, similar average carry-overs of 1.74(8) nL/μg and 1.72(14) nL/μg were found for Milli-Q water and lysis-binding buffer, respectively.
Safeguarding End-User Military Software
2014-12-04
product lines using compositional symbolic execution [17]. Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse ... feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically
Kernel-based discriminant feature extraction using a representative dataset
NASA Astrophysics Data System (ADS)
Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.
2002-07-01
Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified by both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points that are generated from data-editing techniques and centroid points that are determined by using the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets, and it also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations, previously associated with the brain response, are systematic side-effects of the methods used for EEG phase calculation, especially during low analytical amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytical form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and minor changes of the filter parameters and reduces the effect of fake EEG phase jumps, which do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest neighbors and random forest classifiers, over a standard BCI dataset. The average performance was improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired sample t-test, with 0.01 and 0.03 p-values, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
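A simplified sketch of the Monte Carlo idea follows: the passband edges of a narrow-band IIR filter are randomly perturbed, the analytic phase is computed for each filtered copy, and the unit phasors are averaged. The Butterworth filter, jitter size, and ensemble size are stand-in assumptions for the paper's zero-pole perturbation scheme.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def robust_phase(eeg, fs, band=(8.0, 12.0), n_ensemble=50, jitter=0.1, seed=0):
    """Monte Carlo EEG phase: average unit phasors over an ensemble of slightly
    perturbed narrow-band filters (a stand-in for zero-pole perturbation)."""
    rng = np.random.default_rng(seed)
    phasors = np.zeros(eeg.size, dtype=complex)
    for _ in range(n_ensemble):
        lo = band[0] + jitter * rng.standard_normal()
        hi = band[1] + jitter * rng.standard_normal()
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        analytic = hilbert(filtfilt(b, a, eeg))
        phasors += analytic / (np.abs(analytic) + 1e-12)
    return np.angle(phasors / n_ensemble)

fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)
print(robust_phase(eeg, fs)[:5])
```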
A biometric identification system based on eigenpalm and eigenfinger features.
Ribaric, Slobodan; Fratric, Ivan
2005-11-01
This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
Automatic diagnosis of malaria based on complete circle-ellipse fitting search algorithm.
Sheikhhosseini, M; Rabbani, H; Zekri, M; Talebi, A
2013-12-01
Diagnosis of malaria parasitemia from blood smears is a subjective and time-consuming task for pathologists. An automatic diagnostic process would reduce the diagnostic time; it can also serve as a second opinion for pathologists and may be useful in malaria screening. This study presents an automatic method for malaria diagnosis from thin blood smears. Since the malaria life cycle starts with the formation of a ring around the parasite nucleus, the proposed approach is mainly based on curve fitting to detect the parasite ring in the blood smear. The method is composed of six main phases. The first is the stained-object extraction step, which extracts candidate objects that may be infected by malaria parasites; this phase includes stained-pixel extraction based on intensity and colour, and stained-object segmentation by defining stained circle matching. The second step is a preprocessing phase which makes use of nonlinear diffusion filtering. The process continues with detection of the parasite nucleus from the resulting image of the previous step according to image intensity. The fourth step introduces a complete search process in which the circle search identifies the direction and initial points for the direct least-squares ellipse fitting algorithm. Furthermore, in the ellipse searching process, although the parasite shape is completed, undesired regions with high error values are removed and the ellipse parameters are modified. Features are extracted from the parasite candidate region instead of the whole candidate object in the fifth step. By employing this special feature extraction approach, which is provided by the special searching process, the need for clump-splitting methods is removed. Also, defining the stained circle matching process in the first step speeds up the whole procedure. Finally, a series of decision rules are applied to the extracted features to decide on the positivity or negativity of malaria parasite presence. The algorithm was applied to 26 digital images obtained from thin blood smear films. The images contained 1274 objects, each either infected by a parasite or healthy. Applying the automatic identification of malaria to the provided database showed a sensitivity of 82.28% and specificity of 98.02%. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
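The search procedure builds on a direct least-squares conic fit. Below is a compact Python sketch of the classic (Fitzgibbon-style) formulation of that fit, without the numerically stabilised variants; the circle search, error-based pruning, and rule-based decision stage are not reproduced, and the sample points are synthetic.

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Classic direct least-squares (Fitzgibbon-style) ellipse fit.

    Returns the conic coefficients (a, b, c, d, e, f) of
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 under the constraint 4ac - b^2 = 1.
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                               # scatter matrix
    C = np.zeros((6, 6))                      # constraint matrix for 4ac - b^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
    eigval = np.real(eigval)
    # The ellipse solution corresponds to the single positive, finite eigenvalue.
    idx = np.argmax(np.where(np.isfinite(eigval) & (eigval > 0), eigval, -np.inf))
    return np.real(eigvec[:, idx])

# Noisy points on an ellipse, standing in for a candidate parasite ring boundary.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
x = 40 + 12 * np.cos(theta) + 0.3 * rng.standard_normal(theta.size)
y = 55 + 7 * np.sin(theta) + 0.3 * rng.standard_normal(theta.size)
print(fit_ellipse_direct(x, y))
```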
NASA Astrophysics Data System (ADS)
Singla, Neeru; Dubey, Kavita; Srivastava, Vishal; Ahmad, Azeem; Mehta, D. S.
2018-02-01
We developed an automated high-resolution full-field spatial coherence tomography (FF-SCT) microscope for quantitative phase imaging that is based on spatial, rather than temporal, coherence gating. Red and green laser light was used to obtain the quantitative phase images of unstained human red blood cells (RBCs). This study uses morphological parameters of unstained RBC phase images to distinguish between normal and infected cells. We recorded a single interferogram with the FF-SCT microscope for each of the red and green wavelengths and averaged the two phase images to further reduce noise artifacts. In order to distinguish anemia-infected cells from normal cells, different morphological features were extracted, and these features were used to train a machine learning ensemble model to classify RBCs with high accuracy.
NASA Astrophysics Data System (ADS)
Abidin, Anas Z.; Nagarajan, Mahesh B.; Checefsky, Walter A.; Coan, Paola; Diemoz, Paul C.; Hobbs, Susan K.; Huber, Markus B.; Wismüller, Axel
2015-03-01
Phase contrast X-ray computed tomography (PCI-CT) has recently emerged as a novel imaging technique that allows visualization of cartilage soft tissue, subsequent examination of chondrocyte patterns, and their correlation to osteoarthritis. Previous studies have shown that 2D texture features are effective at distinguishing between healthy and osteoarthritic regions of interest annotated in the radial zone of cartilage matrix on PCI-CT images. In this study, we further extend the texture analysis to 3D and investigate the ability of volumetric texture features at characterizing chondrocyte patterns in the cartilage matrix for purposes of classification. Here, we extracted volumetric texture features derived from Minkowski Functionals and gray-level co-occurrence matrices (GLCM) from 496 volumes of interest (VOI) annotated on PCI-CT images of human patellar cartilage specimens. The extracted features were then used in a machine-learning task involving support vector regression to classify VOIs as healthy or osteoarthritic. Classification performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The best classification performance was observed with the GLCM features correlation (AUC = 0.83 +/- 0.06) and homogeneity (AUC = 0.82 +/- 0.07), which significantly outperformed all Minkowski Functionals (p < 0.05). These results suggest that such quantitative analysis of chondrocyte patterns in human patellar cartilage matrix involving GLCM-derived statistical features can distinguish between healthy and osteoarthritic tissue with high accuracy.
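For reference, a 2D simplification of the GLCM features (correlation and homogeneity) can be computed per slice with scikit-image and averaged over the volume, as sketched below; the study used true volumetric GLCMs, so the per-slice averaging, the 32-level quantisation, and the random test volume are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(slice_img, levels=32):
    """Correlation and homogeneity from a 2-D gray-level co-occurrence matrix.

    The study used volumetric (3D) GLCMs; averaging 2-D slice features is a
    simplified stand-in shown here for illustration only.
    """
    q = np.digitize(slice_img, np.linspace(slice_img.min(), slice_img.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    return (graycoprops(glcm, "correlation").mean(),
            graycoprops(glcm, "homogeneity").mean())

# Toy VOI: a stack of noisy slices standing in for an annotated PCI-CT volume.
rng = np.random.default_rng(0)
voi = rng.normal(size=(8, 64, 64))
features = np.mean([glcm_features(s) for s in voi], axis=0)
print(features)  # (mean correlation, mean homogeneity)
```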
NASA Astrophysics Data System (ADS)
Huschauer, A.; Blas, A.; Borburgh, J.; Damjanovic, S.; Gilardoni, S.; Giovannozzi, M.; Hourican, M.; Kahle, K.; Le Godec, G.; Michels, O.; Sterbini, G.; Hernalsteens, C.
2017-06-01
Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.
NASA Astrophysics Data System (ADS)
Lao, Zhiqiang; Zheng, Xin
2011-03-01
This paper proposes a multiscale method to quantify tissue spiculation and distortion in mammography CAD systems that aims at improving the sensitivity in detecting architectural distortion and spiculated mass. This approach addresses the difficulty of predetermining the neighborhood size for feature extraction in characterizing lesions demonstrating spiculated mass/architectural distortion that may appear in different sizes. The quantification is based on the recognition of tissue spiculation and distortion pattern using multiscale first-order phase portrait model in texture orientation field generated by Gabor filter bank. A feature map is generated based on the multiscale quantification for each mammogram and two features are then extracted from the feature map. These two features will be combined with other mass features to provide enhanced discriminate ability in detecting lesions demonstrating spiculated mass and architectural distortion. The efficiency and efficacy of the proposed method are demonstrated with results obtained by applying the method to over 500 cancer cases and over 1000 normal cases.
X-ray phase contrast tomography by tracking near field speckle
Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal
2015-01-01
X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
Luo, Junhai; Fu, Liang
2017-06-09
With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm consists of an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves the accuracy and computational complexity.
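A minimal sketch of the offline KPCA and clustering stages on synthetic RSS fingerprints with scikit-learn follows; the kernel parameters, fingerprint dimensions, and data are illustrative assumptions, and the AP selection and ML positioning steps are not reproduced.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic fingerprint database: 200 reference points x 20 APs (RSS in dBm).
rss = -40 - 30 * rng.random((200, 20)) + rng.normal(scale=2.0, size=(200, 20))

# Nonlinear feature extraction with KPCA (RBF kernel), then coarse clustering
# of the reduced fingerprints to narrow the positioning range.
rss_std = StandardScaler().fit_transform(rss)
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05)
reduced = kpca.fit_transform(rss_std)
clusters = AffinityPropagation(random_state=0).fit_predict(reduced)
print(reduced.shape, np.unique(clusters).size)
```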
NASA Astrophysics Data System (ADS)
Musa Abbagoni, Baba; Yeung, Hoi
2016-08-01
The identification of flow pattern is a key issue in multiphase flow which is encountered in the petrochemical industry. It is difficult to identify the gas-liquid flow regimes objectively with the gas-liquid two-phase flow. This paper presents the feasibility of a clamp-on instrument for an objective flow regime classification of two-phase flow using an ultrasonic Doppler sensor and an artificial neural network, which records and processes the ultrasonic signals reflected from the two-phase flow. Experimental data is obtained on a horizontal test rig with a total pipe length of 21 m and 5.08 cm internal diameter carrying air-water two-phase flow under slug, elongated bubble, stratified-wavy and stratified flow regimes. Multilayer perceptron neural networks (MLPNNs) are used to develop the classification model. The classifier requires features as an input which are representative of the signals. Ultrasound signal features are extracted by applying both power spectral density (PSD) and discrete wavelet transform (DWT) methods to the flow signals. A '1-of-C' coding scheme was adopted to classify the extracted features into one of four flow regime categories. To improve the performance of the flow regime classifier network, a second-level neural network was incorporated by using the output of the first-level network as an input feature. The addition of the two network models provided a combined neural network model which achieved a higher accuracy than the single neural network models. Classification accuracies are evaluated for both the PSD and DWT features. The success rates of the two models are: (1) using PSD features, the classifier misclassified 3 out of 24 test datasets and scored 87.5% accuracy; (2) with the DWT features, the network misclassified only one data point and was able to classify the flow patterns with up to 95.8% accuracy. This approach has demonstrated the success of a clamp-on ultrasound sensor for flow regime classification that would be possible in industry practice. It is considerably more promising than other techniques as it uses a non-invasive and non-radioactive sensor.
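The PSD branch of the feature extraction and a single-level MLP classifier can be sketched as below with synthetic Doppler-like signals; the band layout, signal model, and network size are assumptions, and the DWT branch and the two-level network combination are not reproduced.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs = 10_000.0

def psd_band_features(x, fs, n_bands=16):
    """Mean Welch PSD level in equal-width frequency bands as the input features."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    edges = np.linspace(0.0, fs / 2, n_bands + 1)
    return np.array([pxx[(f >= lo) & (f < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Synthetic Doppler-like bursts: each "regime" gets a different dominant
# frequency plus noise (purely illustrative, not real rig data).
X, y = [], []
for label, f_c in enumerate([200, 800, 2000, 4000]):
    for _ in range(30):
        t = np.arange(0, 0.5, 1 / fs)
        sig = (0.5 + rng.random()) * np.sin(2 * np.pi * f_c * t) + 0.3 * rng.standard_normal(t.size)
        X.append(np.log(psd_band_features(sig, fs) + 1e-12))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```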
Earth resources data analysis program, phase 3
NASA Technical Reports Server (NTRS)
1975-01-01
Tasks were performed in two areas: (1) systems analysis and (2) algorithmic development. The major effort in the systems analysis task was the development of a recommended approach to the monitoring of resource utilization data for the Large Area Crop Inventory Experiment (LACIE). Other efforts included participation in various studies concerning the LACIE Project Plan, the utility of the GE Image 100, and the specifications for a special purpose processor to be used in the LACIE. In the second task, the major effort was the development of improved algorithms for estimating proportions of unclassified remotely sensed data. Also, work was performed on optimal feature extraction and optimal feature extraction for proportion estimation.
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao
2016-07-15
We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is featured for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies on the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
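Toeplitz-matrix hashing itself is straightforward to sketch in software (the paper implements a pipelined version in an FPGA); in the sketch below the block sizes and the seed are illustrative, and the raw bits are random stand-ins for digitised phase-fluctuation samples.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_extract(raw_bits, seed_bits, n_out):
    """Hash a block of raw bits with a seed-defined binary Toeplitz matrix (mod 2)."""
    n_in = raw_bits.size
    assert seed_bits.size == n_in + n_out - 1
    # First column (n_out bits) and first row (n_in bits) define the matrix.
    T = toeplitz(seed_bits[:n_out], seed_bits[n_out - 1:])
    return (T @ raw_bits) % 2

rng = np.random.default_rng(0)
n_in, n_out = 1024, 768                 # illustrative block sizes, not the paper's
raw = rng.integers(0, 2, n_in)          # stand-in for digitised phase-fluctuation bits
seed = rng.integers(0, 2, n_in + n_out - 1)
out = toeplitz_extract(raw, seed, n_out)
print(out[:16])
```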
Jian, Wenjuan; Chen, Minyou; McFarland, Dennis J
2017-11-01
Phase-locking value (PLV) is a potentially useful feature in sensorimotor rhythm-based brain-computer interface (BCI). However, volume conduction may cause spurious zero-phase coupling between two EEG signals, and it is not clear whether PLV effects are independent of spectral amplitude. Volume conduction might be reduced by spatial filtering, but it is uncertain what impact this might have on PLV. Therefore, the goal of this study was to explore whether zero-phase PLV is meaningful and how it is affected by spatial filtering. Both amplitude and PLV features were extracted in the frequency band of 10-15 Hz by classical methods using archival EEG data of 18 subjects trained on a two-target BCI task. The results show that with right-ear-referenced data, there is meaningful long-range zero-phase synchronization, likely involving the primary motor area and the supplementary motor area, that cannot be explained by volume conduction. Another novel finding is that the large Laplacian spatial filter enhances the amplitude feature but eliminates most of the phase information seen in ear-referenced data. A bipolar channel using phase-coupled areas also includes both phase and amplitude information and has a significant practical advantage since fewer channels are required.
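A band-limited PLV between two channels is typically computed from the analytic phases, as in the following sketch; the 10-15 Hz band matches the study, but the filter design, the synthetic signals, and the single-pair computation are assumptions, and the referencing/spatial-filtering comparison is not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(10.0, 15.0)):
    """PLV of two channels in a band: |mean(exp(j*(phi_x - phi_y)))|."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phi_x = np.angle(hilbert(filtfilt(b, a, x)))
    phi_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

fs = 160.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 12 * t)              # shared 12 Hz rhythm (zero lag)
ch1 = common + 0.5 * rng.standard_normal(t.size)
ch2 = common + 0.5 * rng.standard_normal(t.size)
print(phase_locking_value(ch1, ch2, fs))         # close to 1 for coupled channels
```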
Khotanlou, Hassan; Afrasiabi, Mahlagha
2012-10-01
This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. The presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid attenuated inversion recovery (FLAIR) images are first preprocessed. In the next phase, effective features for extracting MS lesions are selected by using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions have been evaluated by comparison with manual segmentations. This algorithm is evaluated on 15 real 3D MR images using several measures. As a result, the SI between MS regions determined by the proposed method and radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and the efficiency of the proposed approach.
Magnetic field feature extraction and selection for indoor location estimation.
Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F
2014-06-20
User indoor positioning has been under constant improvement especially with the availability of new sensors integrated into the modern mobile devices, which allows us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case of natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the feature extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, which is performed through Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: home and office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5 regardless the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios.
On the use of feature selection to improve the detection of sea oil spills in SAR images
NASA Astrophysics Data System (ADS)
Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo
2017-03-01
Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. The feature extraction is a critical and computationally intensive phase where each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved to be very useful in real domains for enhancing the generalization capabilities of the classifiers, while discarding the existing irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve the oil spill detection systems. We have compared five FS methods: Correlation-based feature selection (CFS), Consistency-based filter, Information Gain, ReliefF and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied on a 141-input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06%, respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. The presented finding allows the feature extraction phase to be sped up without reducing the classifier accuracy. Experiments also confirmed the significance of the geometrical features, since 75.0% of the different features selected by the applied FS methods, as well as 66.67% of the proposed 6-input feature vector, belong to this category.
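The SVM-RFE selection step can be sketched with scikit-learn as below; the synthetic 141-feature data, the RFE step size, and the validation classifier are assumptions standing in for the paper's dark-spot feature collection.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 141-feature dark-spot vectors (oil spill vs look-alike).
X, y = make_classification(n_samples=400, n_features=141, n_informative=10,
                           n_redundant=20, random_state=0)

# SVM-RFE: recursively drop the features with the smallest linear-SVM weights
# until 6 remain, then validate the reduced vector with an SVM classifier.
rfe = RFE(estimator=SVC(kernel="linear", C=1.0), n_features_to_select=6, step=5)
rfe.fit(StandardScaler().fit_transform(X), y)
selected = np.flatnonzero(rfe.support_)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(selected, cross_val_score(clf, X[:, selected], y, cv=5).mean())
```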
NASA Astrophysics Data System (ADS)
Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Diemoz, Paul C.; Wismüller, Axel
2014-03-01
Current assessment of cartilage is primarily based on identification of indirect markers such as joint space narrowing and increased subchondral bone density on x-ray images. In this context, phase contrast CT imaging (PCI-CT) has recently emerged as a novel imaging technique that allows a direct examination of chondrocyte patterns and their correlation to osteoarthritis through visualization of cartilage soft tissue. This study investigates the use of topological and geometrical approaches for characterizing chondrocyte patterns in the radial zone of the knee cartilage matrix in the presence and absence of osteoarthritic damage. For this purpose, topological features derived from Minkowski Functionals and geometric features derived from the Scaling Index Method (SIM) were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of healthy and osteoarthritic specimens of human patellar cartilage. The extracted features were then used in a machine learning task involving support vector regression to classify ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The best classification performance was observed with high-dimensional geometrical feature vectors derived from SIM (0.95 ± 0.06) which outperformed all Minkowski Functionals (p < 0.001). These results suggest that such quantitative analysis of chondrocyte patterns in human patellar cartilage matrix involving SIM-derived geometrical features can distinguish between healthy and osteoarthritic tissue with high accuracy.
Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik
2008-09-01
In biomedical signal classification, due to the huge amount of data, compressing the biomedical waveform data is vital. This paper presents two different structures formed using feature extraction algorithms to decrease the size of the feature set in training and test data. The proposed structures, named wavelet transform-complex-valued artificial neural network (WT-CVANN) and complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use real and complex discrete wavelet transforms for feature extraction. The aim of using the wavelet transform is to compress data and to reduce the training time of the network without decreasing the accuracy rate. In this study, the presented structures were applied to the problem of classification of carotid arterial Doppler ultrasound signals. Carotid arterial Doppler ultrasound signals were acquired from the left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of the early phase of atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). Healthy volunteers were young non-smokers who did not seem to bear any risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity and average detection rate were calculated for comparison after the training and test phases of all structures were finished. These parameters demonstrated that the training times of the CVANN and the real-valued artificial neural network (RVANN) were reduced using feature extraction algorithms without decreasing the accuracy rate, in accordance with our aim.
Shape based segmentation of MRIs of the bones in the knee using phase and intensity information
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Bourgeat, Pierrick; Crozier, Stuart; Ourselin, Sébastien
2007-03-01
The segmentation of the bones from MR images is useful for performing subsequent segmentation and quantitative measurements of cartilage tissue. In this paper, we present a shape based segmentation scheme for the bones that uses texture features derived from the phase and intensity information in the complex MR image. The phase can provide additional information about the tissue interfaces, but due to the phase unwrapping problem, this information is usually discarded. By using a Gabor filter bank on the complex MR image, texture features (including phase) can be extracted without requiring phase unwrapping. These texture features are then analyzed using a support vector machine classifier to obtain probability tissue matches. The segmentation of the bone is fully automatic and performed using a 3D active shape model based approach driven using gradient and texture information. The 3D active shape model is automatically initialized using a robust affine registration. The approach is validated using a database of 18 FLASH MR images that are manually segmented, with an average segmentation overlap (Dice similarity coefficient) of 0.92 compared to 0.9 obtained using the classifier only.
1993-01-01
We have developed a cell-free system that induces the morphological transformations characteristic of apoptosis in isolated nuclei. The system uses extracts prepared from mitotic chicken hepatoma cells following a sequential S phase/M phase synchronization. When nuclei are added to these extracts, the chromatin becomes highly condensed into spherical domains that ultimately extrude through the nuclear envelope, forming apoptotic bodies. The process is highly synchronous, and the structural changes are completed within 60 min. Coincident with these morphological changes, the nuclear DNA is cleaved into a nucleosomal ladder. Both processes are inhibited by Zn2+, an inhibitor of apoptosis in intact cells. Nuclear lamina disassembly accompanies these structural changes in added nuclei, and we show that lamina disassembly is a characteristic feature of apoptosis in intact cells of mouse, human and chicken. This system may provide a powerful means of dissecting the biochemical mechanisms underlying the final stages of apoptosis. PMID:8408207
TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Nyflot, M; Bowen, S
2014-06-15
Purpose: Neighborhood Gray-level difference matrices (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have been previously shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on the choice of 3D PET and 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
Log-Gabor Weber descriptor for face recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Sang, Nong; Gao, Changxin
2015-09-01
The Log-Gabor transform, which is suitable for analyzing gradually changing data such as in iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered. However, the complementary effect taken by combining magnitude and phase information simultaneously for an image-feature extraction problem has not been systematically explored in the existing works. We propose a local image descriptor for face recognition, called Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information from the magnitude or phase feature of the multiscale and orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level by utilizing a kernel canonical correlation analysis strategy, considering that feature level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields a better performance than state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for sports, health, animation films, criminal investigation, and robotics applications. In this study, background subtraction and a back-propagation neural network were combined with the purpose of producing and finding similar movements. The acquisition process used an 8 MP resolution camera, MP4 format, 48 seconds duration, 30 frames/s. Video extraction produced 1444 frames, which were the input to the hand motion identification process. The image processing phases performed are segmentation, feature extraction, and identification. Segmentation uses background subtraction; the extracted features are basically used to distinguish one object from another. Feature extraction is performed using motion-based morphological analysis based on the 7 invariant moments, producing four different motion classes: no object, hand down, hand to the side, and hands up. The identification process recognizes the hand movement using seven inputs. Testing and training with a variety of parameters showed that the architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values of the system implementation process into the user interface. The identification of the type of human movement produced a highest accuracy of 98.5447%. The training process is done to get the best results.
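A per-frame feature extraction along these lines (foreground mask by background subtraction, then the 7 Hu invariant moments as the network inputs) can be sketched with OpenCV as below; the video path is a placeholder, the subtractor parameters are assumptions, and the back-propagation classifier and the four motion classes are not reproduced.

```python
import cv2
import numpy as np

# Per-frame feature extraction: foreground mask via background subtraction,
# then the 7 Hu invariant moments of the mask as the motion descriptor.
# "hand_motion.mp4" is a placeholder path, not a file from the study.
cap = cv2.VideoCapture("hand_motion.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

features = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    # Log-scale the moments so their magnitudes are comparable.
    features.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))
cap.release()
print(np.array(features).shape)   # (n_frames, 7) inputs for the neural network
```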
NASA Astrophysics Data System (ADS)
Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao
2017-12-01
High sensitivity of a distributed optical-fiber vibration sensing (DOVS) system based on phase-sensitive optical time domain reflectometry (Φ-OTDR) technology also brings in high nuisance alarm rates (NARs) in real applications. In this paper, feature extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are comparatively studied for three typical field testing signals, and an artificial neural network (ANN) is built for event identification. The comparison results prove that the WPD performs a little better than the WD for DOVS signal analysis and identification in oil pipeline safety monitoring. The identification rate can be improved up to 94.4%, and the nuisance alarm rate can be effectively controlled as low as 5.6% for the identification network with the wavelet packet energy distribution features.
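The wavelet packet energy distribution used as the ANN input can be sketched with PyWavelets as below; the wavelet, decomposition level, and toy Φ-OTDR-like traces are illustrative assumptions.

```python
import numpy as np
import pywt

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Normalised energy of each terminal wavelet-packet node (2**level values)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    return energies / (energies.sum() + 1e-12)

# Toy DOVS traces: an "intrusion-like" burst versus plain background noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
background = 0.2 * rng.standard_normal(t.size)
intrusion = background + np.sin(2 * np.pi * 300 * t) * np.exp(-((t - 0.5) ** 2) / 0.001)

print(wpd_energy_features(background))
print(wpd_energy_features(intrusion))   # energy shifts toward higher-frequency nodes
```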
Road marking features extraction using the VIAPIX® system
NASA Astrophysics Data System (ADS)
Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.
2016-07-01
Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system that allows lane detection for marked urban roads and analysis of their features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects existing on the road, the present algorithm enables us to examine these images automatically and rapidly and also to obtain information on road marks, their surface conditions, and their georeferencing. This algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate this algorithm and its robustness by applying it to a variety of relevant scenarios.
Baharfar, Mahroo; Yamini, Yadollah; Seidi, Shahram; Arain, Muhammad Balal
2018-05-30
A new design of electromembrane extraction (EME) as a lab on-a-chip device was proposed for the extraction and determination of phenazopyridine as the model analyte. The extraction procedure was accomplished by coupling of EME and the packing of a sorbent. The analyte was extracted under the applied electrical field across a membrane sheet impregnated by nitrophenyl octylether (NPOE) into an acceptor phase. It was followed by the absorption of the analyte on strong cation exchanger as a sorbent. The designed chip contained separate spiral channels for donor and acceptor phases featuring embedded platinum electrodes to enhance extraction efficiency. The selected donor and acceptor phases were 0 mM HCl and 100 mM HCl, respectively. The on-chip electromembrane extraction was carried out under the voltage level of 70 V for 50 min. The analysis was carried out by two modes of a simple Red-Green-Blue (RGB) image analysis tool and a conventional HPLC-UV system. After the absorption of the analyte on the solid phase, its color changed and a digital picture of the sorbent was taken for the RGB analysis. The effective parameters on the performance of the chip device, comprising the EME and solid phase microextraction steps, were distinguished and optimized. The accumulation of the analyte on the solid phase showed excellent sensitivity and a limit of detection (LOD) lower than 1.0 μg L-1 achieved by an image analysis using a smartphone. This device also offered acceptable intra- and inter-assay RSD% (<10%). The calibration curves were linear within the range of 10-1000 μg L-1 and 30-1000 μg L-1 (r2 > 0.9969) for HPLC-UV and RGB analysis, respectively. To investigate the applicability of the method in complicated matrices, urine samples of patients being treated with phenazopyridine were analyzed.
A method for fast automated microscope image stitching.
Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong
2013-05-01
Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In many biomedical researches, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal light microscope image stitching algorithm based on feature extraction. At first, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched with a short time and higher repeatability. Then, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Thirdly, the rough overlapping zones of the images preprocessed were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourthly, the features were corresponded by matching algorithm and the transformation parameters were estimated, then the images were blended seamlessly. Finally, this procedure was applied to stitch normal light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images as well as stitching the microscope images in the field of virtual microscope for the purpose of observing, exchanging, saving, and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
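The phase-correlation step that estimates the rough translational overlap between two tiles can be sketched as below; the synthetic tiles and the integer-only shift are simplifications, and the SURF matching, histogram equalization, and blending stages are not reproduced.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation between two equal-size images
    from the peak of the normalised cross-power spectrum."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks beyond the half-size to negative shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

# Two synthetic overlapping "tiles": the second is the first shifted by (12, -7).
rng = np.random.default_rng(0)
tile_a = rng.random((256, 256))
tile_b = np.roll(tile_a, shift=(12, -7), axis=(0, 1))
print(phase_correlation_shift(tile_b, tile_a))   # expected (12, -7)
```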
NASA Astrophysics Data System (ADS)
Hachay, Olga; Khachay, Andrey; Khachay, Oleg
2016-04-01
The processes of oil extraction from a deposit are linked with the movement of multi-phase multi-component media, which are characterized by non-equilibrium and non-linear rheological features. The real behavior of layered systems is defined by the complexity of the rheology of moving fluids and the morphology structure of the porous medium, and also by the great variety of interactions between the fluid and the porous medium [Hasanov and Bulgakova, 2003]. It is necessary to take these features into account in order to informatively describe the filtration processes due to the non-linearity, non-equilibrium and heterogeneity that are features of real systems. In this way, new synergetic events can be revealed (namely, a loss of stability when oscillations occur, and the formation of ordered structures). This allows us to suggest new methods for the control and management of complicated natural systems that are constructed on account of these phenomena. Thus the layered system, from which it is necessary to extract the oil, is a complicated dynamical hierarchical system. A comparison is provided of non-equilibrium effects of the influence of independent hydrodynamic and electromagnetic induction on an oil layer and the medium which it surrounds. It is known that, during drainage and steeping, a hysteresis effect is observed on the relative phase permeability curves as a function of the porous medium's water saturation over some cycles of influence (drainage-steeping-drainage). Using the earlier developed 3D method of induction electromagnetic frequency geometric monitoring, we showed the possibility of defining the physical and structural features of a hierarchical oil layer structure and estimating the water saturation from crack inclusions. This effect allows managing the process of drainage and steeping the oil out of the layer by water displacement. An algorithm was constructed for 2D modeling of sound diffraction on a porous fluid-saturated intrusion of a hierarchical structure located in layer number J of an N-layered elastic medium. The algorithm developed for modeling, and the method of mapping and monitoring of a heterogeneous, highly complicated two-phase medium, can be used for managing viscous oil extraction in mining conditions and light oil extraction in sub-horizontal boreholes. The demand for effective economic parameters and fuller extraction of oil and gas from deposits dictates the necessity of developing new geotechnology based on the fundamental achievements in the area of geophysics and geomechanics.
Zhao, Wenjie; Yang, Liu; He, Lijun; Zhang, Shusheng
2016-08-10
On the basis of the definite retention mechanism proven for the stationary phase in high-performance liquid chromatography, tetraazacalix[2]arene[2]triazine, featuring multiple recognition sites, was assessed as a solid-phase extraction (SPE) selector. Its silica support was applied to the extraction of trace amounts of polycyclic aromatic hydrocarbons (PAHs) and Cu(2+) in aqueous samples, followed by high-performance liquid chromatography fluorometric and graphite furnace atomic absorption spectrometric determination. On the basis of the π-π interaction with PAHs and the chelating interaction with Cu(2+), the simultaneous extraction of PAHs and Cu(2+) and stepwise elution through tuning the eluent were successfully achieved. The SPE conditions affecting the extraction efficiency were optimized, including the type and concentration of organic modifier, sample solution pH, flow rate, and volume. As a result of the special adsorption and desorption mechanism, high extraction efficiency was achieved with relative recoveries of 94.3-102.4% and relative standard deviations of less than 10.5%. The limits of detection were 0.4-3.1 ng L(-1) for PAHs and 15 ng L(-1) for Cu(2+), respectively. The method was applied to the analyses of PAHs and Cu(2+) in Xiliu Lake water samples collected in Zhengzhou, China.
Modeling listeners' emotional response to music.
Eerola, Tuomas
2012-10-01
An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention during the last years and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of the listeners' self-reports of the emotions expressed by music and the models show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
Romeo, Valeria; Maurea, Simone; Cuocolo, Renato; Petretta, Mario; Mainenti, Pier Paolo; Verde, Francesco; Coppola, Milena; Dell'Aversana, Serena; Brunetti, Arturo
2018-01-17
Adrenal adenomas (AA) are the most common benign adrenal lesions, often characterized based on intralesional fat content as either lipid-rich (LRA) or lipid-poor (LPA). The differentiation of AA, particularly LPA, from nonadenoma adrenal lesions (NAL) may be challenging. Texture analysis (TA) can extract quantitative parameters from MR images. Machine learning is a technique for recognizing patterns that can be applied to medical images by identifying the best combination of TA features to create a predictive model for the diagnosis of interest. To assess the diagnostic efficacy of TA-derived parameters extracted from MR images in characterizing LRA, LPA, and NAL using a machine-learning approach. Retrospective, observational study. Sixty MR examinations, including 20 LRA, 20 LPA, and 20 NAL. Unenhanced T1-weighted in-phase (IP) and out-of-phase (OP) as well as T2-weighted (T2-w) MR images acquired at 3T. Adrenal lesions were manually segmented, placing a spherical volume of interest on the IP, OP, and T2-w images. Different selection methods were trained and tested using the J48 machine-learning classifier. The feature selection method that obtained the highest diagnostic performance using the J48 classifier was identified; the diagnostic performance was also compared with that of a senior radiologist by means of McNemar's test. A total of 138 TA-derived features were extracted; among these, four features were selected, extracted from the IP (Short_Run_High_Gray_Level_Emphasis), OP (Mean_Intensity and Maximum_3D_Diameter), and T2-w (Standard_Deviation) images; the J48 classifier obtained a diagnostic accuracy of 80%. The expert radiologist obtained a diagnostic accuracy of 73%. McNemar's test did not show significant differences in terms of diagnostic performance between the J48 classifier and the expert radiologist. Machine learning conducted on MR TA-derived features is a potential tool to characterize adrenal lesions. Level of Evidence: 4. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
DEKF system for crowding estimation by a multiple-model approach
NASA Astrophysics Data System (ADS)
Cravino, F.; Dellucca, M.; Tesei, A.
1994-03-01
A distributed extended Kalman filter (DEKF) network devoted to real-time crowding estimation for surveillance in complex scenes is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Feature values are associated by virtual sensors with the estimated number of people using nonlinear models obtained in an off-line training phase. Different models are used, depending on the positions and dimensions of the crowded subareas detected in each image.
2016-01-01
The goal of this study is to quantify the effects of vocal fold nodules on vibratory motion in children using high-speed videoendoscopy. Differences in vibratory motion were evaluated in 20 children with vocal fold nodules (5–11 years) and 20 age and gender matched typically developing children (5–11 years) during sustained phonation at typical pitch and loudness. Normalized kinematic features of vocal fold displacements from the mid-membranous vocal fold point were extracted from the steady-state high-speed video. A total of 12 kinematic features representing spatial and temporal characteristics of vibratory motion were calculated. Average values and standard deviations (cycle-to-cycle variability) of the following kinematic features were computed: normalized peak displacement, normalized average opening velocity, normalized average closing velocity, normalized peak closing velocity, speed quotient, and open quotient. Group differences between children with and without vocal fold nodules were statistically investigated. While a moderate effect size was observed for the spatial feature of speed quotient, and the temporal feature of normalized average closing velocity in children with nodules compared to vocally normal children, none of the features were statistically significant between the groups after Bonferroni correction. The kinematic analysis of the mid-membranous vocal fold displacement revealed that children with nodules primarily differ from typically developing children in closing phase kinematics of the glottal cycle, whereas the opening phase kinematics are similar. Higher speed quotients and similar opening phase velocities suggest greater relative forces are acting on vocal fold in the closing phase. These findings suggest that future large-scale studies should focus on spatial and temporal features related to the closing phase of the glottal cycle for differentiating the kinematics of children with and without vocal fold nodules. PMID:27124157
Digital PCM bit synchronizer and detector
NASA Astrophysics Data System (ADS)
Moghazy, A. E.; Maral, G.; Blanchard, A.
1980-08-01
A theoretical analysis of a digital self-bit synchronizer and detector is presented and supported by the implementation of an experimental model that utilizes standard TTL logic circuits. This synchronizer is based on the generation of spectral line components by nonlinear filtering of the received bit stream, and extracting the line by a digital phase-locked loop (DPLL). The extracted reference signal instructs a digital matched filter (DMF) data detector. This realization features a short acquisition time and an all-digital structure.
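To illustrate the spectral-line idea behind such synchronizers, the sketch below applies a simple nonlinearity (differentiate and square) to a synthetic NRZ bit stream and locates the resulting clock line with an FFT; the actual DPLL tracking loop and the DMF detector are not reproduced, and the sample rate and bit rate are made up.

```python
# Sketch: generating a clock spectral line from an NRZ bit stream by a nonlinearity
# (differentiate, square, smooth), then locating the line as an FFT peak. A real
# synchronizer would track this line with a DPLL; parameters are illustrative.
import numpy as np

fs = 100_000            # sample rate, Hz (assumed)
bit_rate = 1_000        # bits per second (assumed)
sps = fs // bit_rate    # samples per bit
bits = np.random.randint(0, 2, 500)
nrz = np.repeat(2 * bits - 1, sps).astype(float)      # +/-1 NRZ waveform

edges = np.diff(nrz, prepend=nrz[0])                  # transitions carry the timing info
timing = np.convolve(edges ** 2, np.ones(sps // 2), mode="same")  # shape pulses

spec = np.abs(np.fft.rfft(timing))
freqs = np.fft.rfftfreq(timing.size, d=1.0 / fs)
peak = freqs[1 + np.argmax(spec[1:])]                 # skip the DC bin
print("recovered clock line near %.0f Hz" % peak)     # expect ~1000 Hz
```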
Musteata, Florin Marcel; Sandoval, Manuel; Ruiz-Macedo, Juan C; Harrison, Kathleen; McKenna, Dennis; Millington, William
2016-08-24
Although solid phase microextraction (SPME) has been used extensively for fingerprinting volatile compounds emitted by plants, there are very few such reports for direct insertion SPME. In this research, direct contact of SPME probes with the interstitial fluid of plants was investigated as a method for phytochemical analysis. Medicinal plants from the Amazon have been the source of numerous drugs used in western medicine. However, a large number of species used in traditional medicine have not been characterized chemically, partly due to the difficulty of field work. In this project, the phytochemical composition of plants from several genera was fingerprinted by combining convenient field sampling by solid phase microextraction (SPME) with laboratory analysis by LC-MS. The new method was compared with classical sampling followed by liquid extraction (LE). SPME probes were prepared by coating stainless steel wires with a mixture of polyacrylonitrile and either RP-amide or HS-F5 silica particles. Sampling was performed by inserting the microextraction probes into various tissues of living plants in their natural environment. After in vivo extraction, the probes were sealed under vacuum and refrigerated until analyzed. The probes were desorbed in mobile phase and analyzed on a Waters Acquity UPLC with triple quadrupole mass spectrometer in positive ion mode. Twenty Amazonian plant species were sampled and unique metabolomic fingerprints were obtained. In addition, quantitative analysis was performed for previously identified compounds in three species. Comparison of the fingerprints obtained by in vivo SPME with those obtained by LE showed that 27% of the chromatographic features were unique to SPME, 57% were unique to LE, and 16% were common to both methods. In vivo SPME caused minimal damage to the plants, was much faster than traditional liquid extraction, and provided unique fingerprints for all investigated plants. SPME revealed unique chromatographic features, undetected by traditional extraction, although it produced only half as many peaks as ethanol extraction. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Ruofan; Wang, Jiang; Li, Shunan; Yu, Haitao; Deng, Bin; Wei, Xile
2015-01-01
In this paper, we have combined experimental neurophysiologic recording and statistical analysis to investigate the nonlinear characteristics and the cognitive function of the brain. Spectrum and bispectrum analyses are proposed to extract multiple effective features of electroencephalograph (EEG) signals from Alzheimer's disease (AD) patients and are further applied to distinguish AD patients from normal controls. Spectral analysis based on the autoregressive Burg method is first used to quantify the power distribution of the EEG series in the frequency domain. Compared to the control group, the relative power spectral density of the AD group is significantly higher in the theta frequency band and lower in the alpha frequency band. In addition, the median frequency of the spectrum is decreased, and the spectral entropy ratio of these two frequency bands undergoes drastic changes at the P3 electrode in the central-parietal brain region, implying that the electrophysiological behavior in the AD brain is much slower and less irregular. In order to explore the nonlinear higher-order information, bispectral analysis, which measures the complexity of phase-coupling, is further applied to the P3 electrode in the whole frequency band. It is demonstrated that fewer bispectral peaks appear and the amplitudes of the peaks fall, suggesting a decrease of non-Gaussianity and nonlinearity of the EEG in AD patients. Notably, the application of this method to five brain regions shows a higher concentration of the weighted center of the bispectrum and lower phase-coupling complexity as reflected by the bispectral entropy. Based on the spectrum and bispectrum analyses, six efficient features are extracted and then applied to discriminate AD patients from normal controls in the five brain regions. The classification results indicate that all these features could differentiate AD patients from the normal controls with a maximum accuracy of 90.2%. Particularly, different brain regions are sensitive to different features. Moreover, the optimal combination of features obtained by discriminant analysis may improve the classification accuracy. These results demonstrate the great promise of scalp EEG spectral and bispectral features as a potentially effective method for the detection of AD, which may facilitate our understanding of the pathological mechanism of the disease.
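A reduced version of the spectral features above can be sketched as follows; Welch's periodogram is used here as a simpler stand-in for the autoregressive Burg spectrum of the study, and the signal, sampling rate, and band edges are placeholders.

```python
# Sketch of relative band power and a spectral-entropy ratio for one EEG channel.
# Welch's method stands in for the Burg AR spectrum; total power is taken over
# 0.5-30 Hz (assumed), and the "recording" is random noise for illustration.
import numpy as np
from scipy.signal import welch

fs = 256.0                                   # sampling rate, Hz (assumed)
eeg = np.random.randn(int(30 * fs))          # placeholder for a 30 s P3 recording

f, psd = welch(eeg, fs=fs, nperseg=1024)

def rel_power(lo, hi):
    band = (f >= lo) & (f < hi)
    return psd[band].sum() / psd[(f >= 0.5) & (f < 30.0)].sum()

def spectral_entropy(lo, hi):
    band = (f >= lo) & (f < hi)
    p = psd[band] / psd[band].sum()          # normalize to a probability distribution
    return -np.sum(p * np.log2(p))

print("relative theta power:", rel_power(4.0, 8.0))
print("relative alpha power:", rel_power(8.0, 13.0))
print("entropy ratio theta/alpha:",
      spectral_entropy(4.0, 8.0) / spectral_entropy(8.0, 13.0))
```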
Three-dimensional spatiotemporal features for fast content-based retrieval of focal liver lesions.
Roy, Sharmili; Chi, Yanling; Liu, Jimin; Venkatesh, Sudhakar K; Brown, Michael S
2014-11-01
Content-based image retrieval systems for 3-D medical datasets still largely rely on 2-D image-based features extracted from a few representative slices of the image stack. Most 2-D features that are currently used in the literature not only model a 3-D tumor incompletely but are also highly expensive in terms of computation time, especially for high-resolution datasets. Radiologist-specified semantic labels are sometimes used along with image-based 2-D features to improve the retrieval performance. Since radiological labels show large interuser variability, are often unstructured, and require user interaction, their use as lesion characterizing features is highly subjective, tedious, and slow. In this paper, we propose a 3-D image-based spatiotemporal feature extraction framework for fast content-based retrieval of focal liver lesions. All the features are computer generated and are extracted from four-phase abdominal CT images. Retrieval performance and query processing times for the proposed framework are evaluated on a database of 44 hepatic lesions comprising five pathological types. A bull's eye percentage score above 85% is achieved for three out of the five lesion pathologies, and for 98% of query lesions at least one lesion of the same type is ranked among the top two retrieved results. Experiments show that the proposed system's query processing is more than 20 times faster than other already published systems that use 2-D features. With fast computation time and high retrieval accuracy, the proposed system has the potential to be used as an assistant to radiologists for routine hepatic tumor diagnosis.
SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Zhang, G
2014-06-15
Purpose: Quantitative imaging is a fast evolving discipline where a large number of features are extracted from images; i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors; e.g., noise. In this study noise was added to positron emission tomography (PET) images to determine how features were affected by noise. Methods: Three levels of Gaussian noise were added to 8 lung cancer patients' PET images acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features were extracted from segmented tumors: 14 shape, 19 intensity (1stO), 18 GLCM texture (2ndO; from grey-level co-occurrence matrices), and 11 RLM texture (2ndO; from run-length matrices) features. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for the RLM, and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable while RLM were the most unstable. Intensity and GLCM features performed well; the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
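The GLCM-based texture features above are straightforward to reproduce in miniature. The sketch below builds a co-occurrence matrix for a single in-plane offset on a synthetic "tumor" patch and tracks two Haralick-style features (entropy and sum-average) as Gaussian noise is added; the 256-level, 13-direction 3D setup of the study is not reproduced, and all image values are made up.

```python
# Sketch: add Gaussian noise to an image, quantize it, build a grey-level
# co-occurrence matrix (GLCM) for one offset, and derive entropy and sum-average.
import numpy as np

def glcm(img, levels=32, offset=(0, 1)):
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize grey levels
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)                                     # accumulate co-occurrences
    return m / m.sum()                                          # normalize to probabilities

rng = np.random.default_rng(1)
tumor = rng.random((64, 64)) * 100.0                 # stand-in for a segmented PET tumor
for sigma in (0.0, 5.0, 10.0):                       # increasing Gaussian noise levels
    noisy = np.clip(tumor + rng.normal(0.0, sigma, tumor.shape), 0, None)
    p = glcm(noisy)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    i, j = np.indices(p.shape)
    sum_average = np.sum((i + j) * p)
    print(f"sigma={sigma:4.1f}  entropy={entropy:.3f}  sum-average={sum_average:.2f}")
```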
Lafrenière, Nelson M; Mudrik, Jared M; Ng, Alphonsus H C; Seale, Brendon; Spooner, Neil; Wheeler, Aaron R
2015-04-07
There is great interest in the development of integrated tools allowing for miniaturized sample processing, including solid phase extraction (SPE). We introduce a new format for microfluidic SPE relying on C18-functionalized magnetic beads that can be manipulated in droplets in a digital microfluidic platform. This format provides the opportunity to tune the amount (and potentially the type) of stationary phase on-the-fly, and allows the removal of beads after the extraction (to enable other operations in the same device space), maintaining device reconfigurability. Using the new method, we employed a design of experiments (DOE) operation to enable automated on-chip optimization of elution solvent composition for reversed phase SPE of a model system. Further, conditions were selected to enable on-chip fractionation of multiple analytes. Finally, the method was demonstrated to be useful for online cleanup of extracts from dried blood spot (DBS) samples. We anticipate this combination of features will prove useful for separating a wide range of analytes, from small molecules to peptides, from complex matrices.
Zhang, Jie; Xiao, Wendong; Zhang, Sen; Huang, Shoudong
2017-04-17
Device-free localization (DFL) is becoming one of the new technologies in wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we will propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce the parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by ELM in the online phase can be different from those used for training in the offline phase, and can be more robust to deal with the uncertain combination of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.
Zhang, Jie; Xiao, Wendong; Zhang, Sen; Huang, Shoudong
2017-01-01
Device-free localization (DFL) is becoming one of the new technologies in wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we will propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce the parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by ELM in the online phase can be different from those used for training in the offline phase, and can be more robust to deal with the uncertain combination of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach. PMID:28420187
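The ELM training and prediction steps described above reduce to a few lines of linear algebra. The sketch below uses random, fixed input weights and solves the output weights by least squares; the PGFE inputs (geometrical intercepts plus differential RSS per affected link) are represented by a placeholder feature matrix, and all dimensions are illustrative.

```python
# Minimal extreme learning machine (ELM) sketch: random input-to-hidden weights,
# a sigmoid hidden layer, and output weights solved by least squares.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_feat, n_hidden = 200, 3, 50
X = rng.normal(size=(n_train, n_feat))                 # offline-phase training features
Y = rng.uniform(0, 10, size=(n_train, 2))              # target (x, y) locations

W = rng.normal(size=(n_feat, n_hidden))                # random, then kept fixed
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                 # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, Y, rcond=None)           # output weights via least squares

def predict(x_new):
    h = 1.0 / (1.0 + np.exp(-(x_new @ W + b)))
    return h @ beta                                     # online-phase location estimate

print(predict(X[:1]))
```

Because the hidden weights stay random and fixed, training reduces to a single least-squares solve, which is the usual explanation for the learning-speed advantage cited above.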
Foreground extraction for moving RGBD cameras
NASA Astrophysics Data System (ADS)
Junejo, Imran N.; Ahmed, Naveed
2017-02-01
In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have now been available for quite some time, and their popularity is primarily due to their low cost and ease of availability. Although foreground extraction, or background subtraction, has been explored by computer vision researchers for a long time, depth-based subtraction is relatively new and has not been extensively addressed as of yet. Most of the current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and then represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene. That is, we use the positively classified FAST features on the test frame to initiate region growing and obtain an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets, with encouraging quantitative and qualitative results.
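A rough sketch of the classification stage (FAST corners, HOG descriptors, non-linear SVM) is given below, assuming OpenCV, scikit-image, and scikit-learn are available; the image and training labels are placeholders, and the depth-based region growing step is omitted.

```python
# Sketch of the FAST + HOG + SVM pipeline; labels and images are synthetic, and the
# positively classified corners would seed the depth-based region growing step.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

PATCH = 16  # half-width of the square patch around each corner (assumed)

def corner_descriptors(gray):
    fast = cv2.FastFeatureDetector_create()
    feats, pts = [], []
    for kp in fast.detect(gray, None):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        patch = gray[y - PATCH:y + PATCH, x - PATCH:x + PATCH]
        if patch.shape != (2 * PATCH, 2 * PATCH):
            continue                                    # skip corners near the border
        feats.append(hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
        pts.append((x, y))
    return np.array(feats), pts

# Training phase (labels would come from annotated foreground/background corners).
frame = (np.random.rand(240, 320) * 255).astype(np.uint8)   # placeholder frame
X, _ = corner_descriptors(frame)
y = np.random.randint(0, 2, len(X))                          # placeholder labels
clf = SVC(kernel="rbf").fit(X, y)

# Test phase: positively classified corners seed the (omitted) region growing.
X_test, pts_test = corner_descriptors(frame)
seeds = [p for p, lab in zip(pts_test, clf.predict(X_test)) if lab == 1]
print("foreground seed corners:", len(seeds))
```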
Joshuva, A; Sugumaran, V
2017-03-01
Wind energy is one of the important renewable energy resources available in nature. It has become a major resource for energy production because of its dependability, the maturing technology, and its relatively low cost. Wind energy is converted into electrical energy using rotating blades. Due to environmental conditions and the large structure, the blades are subjected to various vibration forces that may damage them. This creates a liability in energy production and can lead to turbine shutdown. The downtime can be reduced when the blades are diagnosed continuously using structural health condition monitoring. This is treated as a pattern recognition problem consisting of three phases, namely feature extraction, feature selection, and feature classification. In this study, statistical features were extracted from vibration signals, feature selection was carried out using a J48 decision tree algorithm, and feature classification was performed using the best-first tree algorithm and the functional trees algorithm. The better-performing algorithm is suggested for fault diagnosis of the wind turbine blade. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM), and, finally, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of the weights of the input hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated for LID with datasets created from eight different languages. The study showed a clear performance advantage for the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to an accuracy of only 95.00% for SA-ELM LID.
Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM), and, finally, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of the weights of the input hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated for LID with datasets created from eight different languages. The study showed a clear performance advantage for the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to an accuracy of only 95.00% for SA-ELM LID. PMID:29672546
Zhang, Ying; Kuang, Min; Zhang, Lijuan; Yang, Pengyuan; Lu, Haojie
2013-06-04
In light of the significance of glycosylation for a wealth of biological events, it is important to prefractionate glycoproteins/glycopeptides from complex biological samples. Herein, we report a novel protocol for solid-phase extraction of glycopeptides through a reductive amination reaction, employing easily accessible 3-aminopropyltriethoxysilane (APTES)-functionalized magnetic nanoparticles. The amino groups from APTES, which were assembled onto the surface of the nanoparticles through a one-step silanization reaction, conjugate with the aldehydes of oxidized glycopeptides and thereby complete the extraction. To the best of our knowledge, this is the first example of applying the reductive amination reaction to the isolation of glycopeptides. Due to the elimination of the desalting step, the detection limit for glycopeptides was improved by 2 orders of magnitude compared to traditional hydrazide chemistry-based solid-phase extraction, while the extraction time was shortened to 4 h, indicating the high sensitivity, specificity, and efficiency of this method for the extraction of N-linked glycopeptides. In the meantime, high selectivity toward glycoproteins was also observed in the separation of Ribonuclease B from mixtures contaminated with bovine serum albumin. Moreover, this technique required significantly less sample volume, as demonstrated in the successful mapping of glycosylation in human colorectal cancer serum with a sample volume as little as 5 μL. Because of all these attractive features, we believe that the innovative protocol proposed here will shed new light on research into glycosylation profiling.
Phase stability analysis of chirp evoked auditory brainstem responses by Gabor frame operators.
Corona-Strauss, Farah I; Delb, Wolfgang; Schick, Bernhard; Strauss, Daniel J
2009-12-01
We have recently shown that click evoked auditory brainstem responses (ABRs) can be efficiently processed using a novelty detection paradigm. Here, ABRs, as a large-scale reflection of stimulus-locked neuronal group synchronization at the brainstem level, are detected as a novel instance, novel as compared to the spontaneous activity, which does not exhibit a regular stimulus-locked synchronization. In this paper we propose for the first time Gabor frame operators as an efficient feature extraction technique for ABR single sweep sequences that is in line with this paradigm. In particular, we use this decomposition technique to derive the Gabor frame phase stability (GFPS) of sweep sequences of click and chirp evoked ABRs. We show that the GFPS of chirp evoked ABRs provides a stable discrimination of the spontaneous activity from stimulations above the hearing threshold with a small number of sweeps, even at low stimulation intensities. It is concluded that the GFPS analysis represents a robust feature extraction method for ABR single sweep sequences. Further studies are necessary to evaluate the value of the presented approach for clinical applications.
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
Effect of microstructure on the elasto-viscoplastic deformation of dual phase titanium structures
NASA Astrophysics Data System (ADS)
Ozturk, Tugce; Rollett, Anthony D.
2018-02-01
The present study is devoted to the creation of a process-structure-property database for dual phase titanium alloys, through a synthetic microstructure generation method and a mesh-free fast Fourier transform based micromechanical model that operates on a discretized image of the microstructure. A sensitivity analysis is performed as a precursor to determine the statistically representative volume element size for creating 3D synthetic microstructures based on additively manufactured Ti-6Al-4V characteristics, which are further modified to expand the database for features of interest, e.g., lath thickness. Sets of titanium hardening parameters are extracted from the literature, and the relative effect of the chosen microstructural features is quantified through comparisons of average and local field distributions.
FEX: A Knowledge-Based System For Planimetric Feature Extraction
NASA Astrophysics Data System (ADS)
Zelek, John S.
1988-10-01
Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.
NASA Astrophysics Data System (ADS)
Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery
2017-06-01
Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture internal structures of the human body. However, bone segmentation in US images is still challenging because it is strongly influenced by speckle noise and poor image quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting methods to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step of three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning for one pixel on the bone boundary in each column of the US image using a first-phase-feature searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole-filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that the proposed method produces excellent results, with an average MSE of 0.65 before and after hole filling.
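The quadratic fitting and hole-filling steps can be illustrated with NumPy's polynomial routines, as in the sketch below; the contour and the failed-detection columns are synthetic.

```python
# Sketch of the gap-filling step: given one contour row index per image column
# (NaN where detection failed), fit a quadratic polynomial to the detected points
# and use it to estimate the missing ones.
import numpy as np

cols = np.arange(50, dtype=float)
rows = 0.02 * (cols - 25.0) ** 2 + 60.0              # synthetic bone outer contour
rows[[10, 11, 30, 31, 32]] = np.nan                  # columns where detection failed

ok = ~np.isnan(rows)
coeffs = np.polyfit(cols[ok], rows[ok], deg=2)        # quadratic polynomial fitting
filled = rows.copy()
filled[~ok] = np.polyval(coeffs, cols[~ok])           # hole filling with the fitted curve

print(np.round(filled[[10, 30]], 2))                  # estimated positions for two gaps
```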
NASA Astrophysics Data System (ADS)
Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi
2011-07-01
In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.
Feature Mining and Health Assessment for Gearboxes Using Run-Up/Coast-Down Signals
Zhao, Ming; Lin, Jing; Miao, Yonghao; Xu, Xiaoqiang
2016-01-01
Vibration signals measured in the run-up/coast-down (R/C) processes usually carry rich information about the health status of machinery. However, a major challenge in R/C signals analysis lies in how to exploit more diagnostic information, and how this information could be properly integrated to achieve a more reliable maintenance decision. Aiming at this problem, a framework of R/C signals analysis is presented for the health assessment of gearbox. In the proposed methodology, we first investigate the data preprocessing and feature selection issues for R/C signals. Based on that, a sparsity-guided feature enhancement scheme is then proposed to extract the weak phase jitter associated with gear defect. In order for an effective feature mining and integration under R/C, a generalized phase demodulation technique is further established to reveal the evolution of modulation feature with operating speed and rotation angle. The experimental results indicate that the proposed methodology could not only detect the presence of gear damage, but also offer a novel insight into the dynamic behavior of gearbox. PMID:27827831
Feature Mining and Health Assessment for Gearboxes Using Run-Up/Coast-Down Signals.
Zhao, Ming; Lin, Jing; Miao, Yonghao; Xu, Xiaoqiang
2016-11-02
Vibration signals measured in the run-up/coast-down (R/C) processes usually carry rich information about the health status of machinery. However, a major challenge in R/C signals analysis lies in how to exploit more diagnostic information, and how this information could be properly integrated to achieve a more reliable maintenance decision. Aiming at this problem, a framework of R/C signals analysis is presented for the health assessment of gearbox. In the proposed methodology, we first investigate the data preprocessing and feature selection issues for R/C signals. Based on that, a sparsity-guided feature enhancement scheme is then proposed to extract the weak phase jitter associated with gear defect. In order for an effective feature mining and integration under R/C, a generalized phase demodulation technique is further established to reveal the evolution of modulation feature with operating speed and rotation angle. The experimental results indicate that the proposed methodology could not only detect the presence of gear damage, but also offer a novel insight into the dynamic behavior of gearbox.
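The phase-demodulation idea underlying the weak phase-jitter extraction can be sketched with the analytic signal, as below; the paper's generalized (speed-dependent) demodulation for run-up/coast-down data is not reproduced, and constant speed plus a made-up mesh frequency and jitter are assumed.

```python
# Sketch of phase demodulation of a gear-mesh component: band-pass around the mesh
# frequency, take the analytic signal, and unwrap its phase to expose phase jitter.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs, f_mesh = 10_000.0, 400.0                      # sampling and gear-mesh frequency (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
jitter = 0.05 * np.sin(2 * np.pi * 6.0 * t)       # synthetic defect-induced phase jitter
x = np.cos(2 * np.pi * f_mesh * t + jitter) + 0.05 * np.random.randn(t.size)

b, a = butter(4, [f_mesh - 50.0, f_mesh + 50.0], btype="bandpass", fs=fs)
xb = filtfilt(b, a, x)

phase = np.unwrap(np.angle(hilbert(xb)))          # instantaneous phase
residual = phase - 2 * np.pi * f_mesh * t
demod = residual - residual.mean()                # demodulated phase, i.e. the jitter
core = demod[1000:-1000]                          # ignore filter edge effects
print("estimated jitter amplitude ~", round(np.max(np.abs(core)), 3))
```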
Takahashi, Tadashi; Odagiri, Kayo; Watanabe, Atsushi; Watanabe, Chuichi; Kubo, Takuya; Hosoya, Ken
2011-10-01
A solid-phase extraction element based on epoxy polymer monolith was fabricated for sorptive enrichment of polar compounds from liquid and gaseous samples. After ultrasonication of the element in an aqueous solution for a given period of time, the thermal desorption (TD) using a pyrolyzer with gas chromatography/mass spectrometry (GC/MS), in which TD temperature was programmed from 50 to 250 °C for the analytes absorbed in the element, was used to evaluate the element for basic extraction performance using the aqueous standard mixtures consisting of compounds having varied polarities such as hexanol, isoamyl acetate, linalool, furfural and decanoic acid, in concentrations ranging from 10 μg/L to 1 mg/L. Excellent linear relationships were observed for all compounds in the standard mixture, except decanoic acid. In the extraction of beverages such as red wine, the extraction element showed stronger adsorption characteristics for polar compounds such as alcohols and acids than a non-polar polydimethylsiloxane-based element. This feature is derived from the main polymer structure along with hydroxyl and amino groups present in the epoxy-based monolith polymer matrix. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wang, Huiyong; Campiglia, Andres D
2008-11-01
A novel alternative is presented for the extraction and preconcentration of polycyclic aromatic hydrocarbons (PAH) from water samples. The new approach, which we have named solid-phase nanoextraction (SPNE), takes advantage of the strong affinity that exists between PAH and gold nanoparticles. Careful optimization of experimental parameters has led to a high-performance liquid chromatography method with excellent analytical figures of merit. Its most striking feature is the small volume of water sample (500 μL) required for complete PAH analysis. The limits of detection ranged from 0.9 (anthracene) to 58 ng L⁻¹ (fluorene). The relative standard deviations at medium calibration concentrations vary from 3.2% (acenaphthene) to 9.1% (naphthalene). The analytical recoveries from tap water samples of the six regulated PAH varied from 83.3 ± 2.4% (benzo[k]fluoranthene) to 95.7 ± 4.1% (benzo[g,h,i]perylene). The entire extraction procedure consumes less than 100 μL of organic solvents per sample, which makes it environmentally friendly. The small volume of extracting solution makes SPNE a relatively inexpensive extraction approach.
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective method to reduce dimension in data mining, with emerging of high dimensional data such as microarray gene expression data. Feature extraction for gene selection, mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms including ranking-based feature extraction and set-based feature extraction are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, without considering inter-relationship between features in general, while set-based feature extraction evaluates features based on their role in a feature set by taking into account dependency between features. Just as learning methods, feature extraction has a problem in its generalization ability, which is robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select the feature from the samples set, the proposed approach is able to improve feature extraction performance. The new approach is tested against gene expression dataset including Colon cancer data, CNS data, DLBCL data, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277
A novel feature extraction approach for microarray data based on multi-algorithm fusion.
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective method to reduce dimension in data mining, with emerging of high dimensional data such as microarray gene expression data. Feature extraction for gene selection, mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms including ranking-based feature extraction and set-based feature extraction are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, without considering inter-relationship between features in general, while set-based feature extraction evaluates features based on their role in a feature set by taking into account dependency between features. Just as learning methods, feature extraction has a problem in its generalization ability, which is robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select the feature from the samples set, the proposed approach is able to improve feature extraction performance. The new approach is tested against gene expression dataset including Colon cancer data, CNS data, DLBCL data, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions.
Weissenberg, M; Schaeffler, I; Menagem, E; Barzilai, M; Levy, A
1997-01-03
A simple, rapid high-performance liquid chromatography method has been devised in order to separate and quantify the xanthophylls capsorubin and capsanthin present in red pepper (Capsicum annuum L.) fruits and preparations made from them (paprika and oleoresin). A reversed-phase isocratic non-aqueous system allows the separation of xanthophylls within a few minutes, with detection at 450 nm, using methyl red as an internal standard to locate the various carotenoids and xanthophylls found in plant extracts. The selection of extraction solvents, mild saponification conditions, and chromatographic features is evaluated and discussed. The method is proposed for rapid screening of large plant populations and plant selection, as well as for paprika products and oleoresin, and for nutrition and quality control studies.
Microscopic medical image classification framework via deep learning and shearlet transform.
Rezaeilouyeh, Hadi; Mollahosseini, Ali; Mahoor, Mohammad H
2016-10-01
Cancer is the second leading cause of death in US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians to efficiently diagnose cancers in early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and recently, histogram of shearlet coefficients for classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and handle the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using CNN trained on images along with the magnitude and phase of shearlet coefficients is presented. Particularly, we apply shearlet transform on images and extract the magnitude and phase of shearlet coefficients. Then we feed shearlet features along with the original images to our CNN consisting of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information to the network can improve the accuracy of detection and generalize better compared to the state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, which is a difficult domain considering the limited medical data available for such analysis.
NASA Technical Reports Server (NTRS)
Boerner, W. M.; Mott, H.; Verdi, J.; Darizhapov, D.; Dorjiev, B.; Tsybjito, T.; Korsunov, V.; Tatchkov, G.; Bashkuyev, Y.; Cloude, S.;
1998-01-01
During the past decade, Radar Polarimetry has established itself as a mature science and advanced technology in high resolution POL-SAR imaging, image target characterization and selective image feature extraction.
NASA Astrophysics Data System (ADS)
Selva Bhuvaneswari, K.; Geetha, P.
2017-05-01
Magnetic resonance imaging segmentation refers to a process of assigning labels to a set of pixels or multiple regions. It plays a major role in the field of biomedical applications as it is widely used by radiologists to segment medical image input into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The entire segmentation process of the proposed work comprises three phases: a threshold generation phase with dynamic modified region growing, a texture feature generation phase, and a region merging phase. By dynamically changing two thresholds in the modified region growing approach, the first phase is performed as a dynamic modified region growing process, in which a firefly optimisation algorithm helps to optimise the two thresholds. After obtaining the region-growth segmented image using modified region growing, the edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results of the texture feature generation phase are combined with those of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After the abnormal tissues are identified, classification is performed by a hybrid kernel-based SVM (Support Vector Machine). The performance analysis of the proposed method will be carried out by k-fold cross-validation. The proposed method will be implemented in MATLAB with various images.
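A minimal sketch of region growing controlled by two thresholds is shown below; the firefly optimisation of the thresholds and the texture and merging phases are not reproduced, and the image, seed, and thresholds are illustrative.

```python
# Sketch of region growing with two thresholds: starting from a seed, 4-connected
# neighbours are added while their intensity stays within [t_low, t_high].
import numpy as np
from collections import deque

def region_grow(img, seed, t_low, t_high):
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx] \
                    and t_low <= img[ny, nx] <= t_high:
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown

img = np.zeros((64, 64))
img[20:40, 20:40] = 0.8                        # bright "tumour" region on a dark background
img += 0.05 * np.random.rand(64, 64)
mask = region_grow(img, seed=(30, 30), t_low=0.5, t_high=1.0)
print("grown region size:", mask.sum())        # ~400 pixels for the 20x20 block
```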
A review of EO image information mining
NASA Astrophysics Data System (ADS)
Quartulli, Marco; Olaizola, Igor G.
2013-01-01
We analyze the state of the art of content-based retrieval in Earth observation image archives focusing on complete systems showing promise for operational implementation. The different paradigms at the basis of the main system families are introduced. The approaches taken are considered, focusing in particular on the phases after primitive feature extraction. The solutions envisaged for the issues related to feature simplification and synthesis, indexing, semantic labeling are reviewed. The methodologies for query specification and execution are evaluated. Conclusions are drawn on the state of published research in Earth observation (EO) mining.
Object recognition and pose estimation of planar objects from range data
NASA Technical Reports Server (NTRS)
Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael
1994-01-01
The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and noise.
Azzouz, Abdelmonaim; Ballesteros, Evaristo
2016-01-01
Soil can contain large numbers of endocrine disrupting chemicals (EDCs). The varied physicochemical properties of EDCs constitute a great challenge to their determination in this type of environmental matrix. In this work, an analytical method was developed for the simultaneous determination of various classes of EDCs, including parabens, alkylphenols, phenylphenols, bisphenol A, and triclosan, in soils, sediments, and sewage sludge. The method uses microwave-assisted extraction (MAE) in combination with continuous solid-phase extraction for determination by gas chromatography-mass spectrometry. A systematic comparison of the MAE results with those of ultrasound-assisted and Soxhlet extraction showed MAE to provide the highest extraction efficiency (close to 100%) in the shortest extraction time (3 min). The proposed method provides a linear response over the range 2.0-5000 ng kg⁻¹ and features limits of detection from 0.5 to 4.5 ng kg⁻¹ depending on the properties of the EDC. The method was successfully applied to the determination of target compounds in agricultural soils, pond and river sediments, and sewage sludge. The sewage sludge samples were found to contain all target compounds except benzylparaben at concentration levels from 36 to 164 ng kg⁻¹. By contrast, the other types of samples contained fewer EDCs and at lower concentrations (5.6-84 ng kg⁻¹).
NASA Astrophysics Data System (ADS)
Rama Krishna, K.; Ramachandran, K. I.
2018-02-01
Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety, and machining quality. Hence, detecting the crack's severity accurately is imperative for the predictive maintenance of such machines. Fault diagnosis is an established approach to identifying such faults by observing the non-linear behaviour of the vibration signals at various operating conditions. In this work, we determine the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode function components and framing them accordingly. Feature extraction, feature selection, and feature classification are the three phases in obtaining the classification efficiencies. All the statistical features of the original and reconstructed signals are computed individually in the feature extraction phase. A few statistical parameters are selected in the feature selection phase and classified using the SVM classifier. The results show the best parameters and the appropriate SVM kernel for detecting faults in bearings. Hence, we conclude that the VMD-plus-SVM process yields better results than SVM applied to the raw signals, owing to the denoising and filtering of the raw vibration signals.
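The feature extraction and classification phases can be sketched as below, assuming the VMD-reconstructed signals are already available; the statistical feature set is a generic choice, and the signals, labels, and SVM kernel are illustrative.

```python
# Sketch: compute a vector of statistical features per vibration record and
# train an SVM on them; the VMD reconstruction step is assumed done beforehand.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def stat_features(x):
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([x.mean(), x.std(), skew(x), kurtosis(x),
                     rms, np.max(np.abs(x)) / rms])    # incl. RMS and crest factor

rng = np.random.default_rng(0)
records, labels = [], []
for severity in (0, 1, 2):                    # e.g. no crack / small crack / large crack
    for _ in range(30):
        x = rng.normal(0, 1.0 + 0.3 * severity, 2048)
        x[::256] += 2.0 * severity            # crude periodic impacts growing with severity
        records.append(stat_features(x))
        labels.append(severity)

X, y = np.array(records), np.array(labels)
print("cv accuracy: %.2f" % cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```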
Image processing based detection of lung cancer on CT scan images
NASA Astrophysics Data System (ADS)
Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi
2017-10-01
In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for image enhancement in the detection phase to support early medical treatment. In this research, we propose a lung cancer detection method based on image segmentation. Image segmentation is an intermediate-level task in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection pipeline consists of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results confirm the effectiveness of our approach. The results show that the best approach for main feature detection is the watershed-with-masking method, which achieves high accuracy and robustness.
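A marker-controlled watershed in the spirit of the segmentation step above can be sketched with scikit-image as follows; the Gabor enhancement and lung-specific masking are not reproduced, and the input image is synthetic.

```python
# Sketch: threshold a placeholder slice, take the distance transform, use its peaks
# as markers, and run a marker-controlled watershed on the negated distance map.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

img = np.zeros((128, 128))
img[30:60, 30:60] = 1.0          # two overlapping "nodule-like" blobs
img[50:90, 50:90] = 1.0
binary = img > 0.5

# Distance transform, lightly smoothed so each blob has a single clean peak.
distance = ndi.gaussian_filter(ndi.distance_transform_edt(binary), sigma=2)
regions, _ = ndi.label(binary)
peaks = peak_local_max(distance, labels=regions, min_distance=10)

markers = np.zeros(binary.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per peak
labels = watershed(-distance, markers, mask=binary)       # marker-controlled watershed
print("segmented regions:", labels.max())                 # expect 2
```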
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavanaugh, J.E.; McQuarrie, A.D.; Shumway, R.H.
Conventional methods for discriminating between earthquakes and explosions at regional distances have concentrated on extracting specific features such as amplitude and spectral ratios from the waveforms of the P and S phases. We consider here an optimum nonparametric classification procedure derived from the classical approach to discriminating between two Gaussian processes with unequal spectra. Two robust variations based on the minimum discrimination information statistic and Renyi's entropy are also considered. We compare the optimum classification procedure with various amplitude and spectral ratio discriminants and show that its performance is superior when applied to a small population of 8 land-based earthquakes and 8 mining explosions recorded in Scandinavia. Several parametric characterizations of the notion of complexity based on modeling earthquakes and explosions as autoregressive or modulated autoregressive processes are also proposed and their performance compared with the nonparametric and feature extraction approaches.
Tan, Chun-Wei; Kumar, Ajay
2014-07-10
Accurate iris recognition from distantly acquired face or eye images requires the development of effective strategies that can account for significant variations in segmented iris image quality. Such variations can be highly correlated with the consistency of the encoded iris features, and knowledge of such fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed to simultaneously account for both the local consistency of the iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes the fragile bits while simultaneously rewarding more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike moment-based phase encoding of iris features is proposed. Such Zernike moment-based phase features are computed from partially overlapping regions to more effectively accommodate local pixel region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both the global and localized iris features. The superiority of the proposed iris matching strategy is ascertained by comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy can achieve a significant improvement in iris matching accuracy over the competing approaches in the literature, i.e., average improvements of 54.3%, 32.7%, and 42.6% in equal error rate for UBIRIS.v2, FRGC, and CASIA.v4-distance, respectively.
Texture analysis of common renal masses in multiple MR sequences for prediction of pathology
NASA Astrophysics Data System (ADS)
Hoang, Uyen N.; Malayeri, Ashkan A.; Lay, Nathan S.; Summers, Ronald M.; Yao, Jianhua
2017-03-01
This pilot study performs texture analysis on multiple magnetic resonance (MR) images of common renal masses for differentiation of renal cell carcinoma (RCC). Bounding boxes are drawn around each mass on one axial slice in T1 delayed sequence to use for feature extraction and classification. All sequences (T1 delayed, venous, arterial, pre-contrast phases, T2, and T2 fat saturated sequences) are co-registered and texture features are extracted from each sequence simultaneously. Random forest is used to construct models to classify lesions on 96 normal regions, 87 clear cell RCCs, 8 papillary RCCs, and 21 renal oncocytomas; ground truths are verified through pathology reports. The highest performance is seen in random forest model when data from all sequences are used in conjunction, achieving an overall classification accuracy of 83.7%. When using data from one single sequence, the overall accuracies achieved for T1 delayed, venous, arterial, and pre-contrast phase, T2, and T2 fat saturated were 79.1%, 70.5%, 56.2%, 61.0%, 60.0%, and 44.8%, respectively. This demonstrates promising results of utilizing intensity information from multiple MR sequences for accurate classification of renal masses.
The 2D analytic signal for envelope detection and feature extraction on ultrasound images.
Wachinger, Christian; Klein, Tassilo; Navab, Nassir
2012-08-01
The fundamental property of the analytic signal is the split of identity, meaning the separation of qualitative and quantitative information in form of the local phase and the local amplitude, respectively. Especially the structural representation, independent of brightness and contrast, of the local phase is interesting for numerous image processing tasks. Recently, the extension of the analytic signal from 1D to 2D, covering also intrinsic 2D structures, was proposed. We show the advantages of this improved concept on ultrasound RF and B-mode images. Precisely, we use the 2D analytic signal for the envelope detection of RF data. This leads to advantages for the extraction of the information-bearing signal from the modulated carrier wave. We illustrate this, first, by visual assessment of the images, and second, by performing goodness-of-fit tests to a Nakagami distribution, indicating a clear improvement of statistical properties. The evaluation is performed for multiple window sizes and parameter estimation techniques. Finally, we show that the 2D analytic signal allows for an improved estimation of local features on B-mode images. Copyright © 2012 Elsevier B.V. All rights reserved.
Automatic Speech Acquisition and Recognition for Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Ye, Sherry
2015-01-01
NASA has a widely recognized but unmet need for novel human-machine interface technologies that can facilitate communication during astronaut extravehicular activities (EVAs), when loud noises and strong reverberations inside spacesuits make communication challenging. WeVoice, Inc., has developed a multichannel signal-processing method for speech acquisition in noisy and reverberant environments that enables automatic speech recognition (ASR) technology inside spacesuits. The technology reduces noise by exploiting differences between the statistical nature of signals (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, ASR accuracy can be improved to the level at which crewmembers will find the speech interface useful. System components and features include beam forming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, and ASR decoding. Arithmetic complexity models were developed and will help designers of real-time ASR systems select proper tasks when confronted with constraints in computational resources. In Phase I of the project, WeVoice validated the technology. The company further refined the technology in Phase II and developed a prototype for testing and use by suited astronauts.
Köke, Niklas; Zahn, Daniel; Knepper, Thomas P; Frömel, Tobias
2018-03-01
Analysis of polar organic chemicals in the aquatic environment is hampered by the lack of suitable and widely applicable enrichment methods. In this work, we assessed the suitability of a novel combination of well-known solid-phase extraction (SPE) materials in one cartridge, as well as an evaporation method, for the enrichment of 26 polar model substances (predominantly log D < 0) covering a broad range of physico-chemical properties in three different aqueous matrices. The multi-layer solid-phase extraction (mlSPE) and evaporation method were investigated for the recovery and matrix effects of the model substances and analyzed with hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC-MS/MS). In total, 65% of the model substances were amenable (> 10% recovery) to the mlSPE method with a mean recovery of 76%, while 73% of the model substances were enriched with the evaporation method, achieving a mean recovery of 78%. Target and non-target screening comparison of both methods with a frequently used reversed-phase SPE method utilizing "hydrophilic and lipophilic balanced" (HLB) material was performed. Target analysis showed that the mlSPE and evaporation methods have pronounced advantages over the HLB method since the HLB material retained only 30% of the model substances. Non-target screening of a ground water sample with the investigated enrichment methods showed that the median retention time of all detected features on a HILIC system decreased in the order mlSPE (3641 features, median tR 9.7 min), evaporation (1391, 9.3 min), HLB (4414, 7.2 min), indicating a higher potential of the described methods to enrich polar analytes from water compared with HLB-SPE. Graphical abstract Schematic of the method evaluation (recovery and matrix effects) and method comparison (target and non-target analysis) of the two investigated enrichment methods for very polar chemicals in aqueous matrices.
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan
2014-09-01
In this paper, feature extraction and pattern recognition of the distributed optical fiber sensing signal have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction methods to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder and rain signals, etc.), and then perform pattern recognition via an RBF neural network. Performances of these three feature extraction methods are compared according to the results. We choose the MFCC characteristic vector to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector. In the process of pattern recognition, the value of the diffusion coefficient is introduced to increase the recognition accuracy, while keeping the samples for testing the algorithm the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, which is up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
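A hedged sketch of the three feature types named above, applied to a generic 1-D sensing signal, is given below. The signal, sampling rate, and wavelet choice (db4) are illustrative assumptions, not the paper's settings.

```python
# Sketch: 12-dimensional MFCC, wavelet-packet energy, and wavelet-packet Shannon
# entropy features from a toy sensing signal (6-level decomposition -> 64 nodes).
import numpy as np
import librosa          # MFCC
import pywt             # wavelet packet transform

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)  # stand-in signal

# 12-dimensional MFCC vector (averaged over frames)
mfcc = librosa.feature.mfcc(y=signal, sr=fs, n_mfcc=12).mean(axis=1)

# 6-level wavelet packet decomposition -> 64 frequency bands
wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric", maxlevel=6)
nodes = wp.get_level(6, order="freq")

energies = np.array([np.sum(n.data ** 2) for n in nodes])
energy_feat = energies / energies.sum()                    # 64-dim energy feature

def node_entropy(coeffs):
    e = coeffs ** 2
    p = e / (e.sum() + 1e-12)
    return -np.sum(p * np.log(p + 1e-12))

entropy_feat = np.array([node_entropy(n.data) for n in nodes])  # 64-dim entropy feature
print(mfcc.shape, energy_feat.shape, entropy_feat.shape)        # (12,) (64,) (64,)
```

The resulting characteristic vectors would then be fed to a classifier such as an RBF neural network.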
Benay, G; Wipff, G
2014-03-20
We report a molecular dynamics (MD) study of biphasic systems involved in the liquid-liquid extraction of uranyl nitrate by tri-n-butylphosphate (TBP) to hexane, from "pH neutral" or acidic (3 M nitric acid) aqueous solutions, to assess the model dependence of the surface activity and partitioning of TBP alone, of its UO2(NO3)2(TBP)2 complex, and of UO2(NO3)2 or UO2(2+) uncomplexed. For this purpose, we first compare several electrostatic representations of TBP with regards to its polarity and conformational properties, its interactions with H2O, HNO3, and UO2(NO3)2 species, its relative free energies of solvation in water or oil environments, the properties of the pure TBP liquid and of the pure-TBP/water interface. The free energies of transfer of TBP, UO2(NO3)2, UO2(2+), and the UO2(NO3)2(TBP)2 complex across the water/oil interface are then investigated by potential of mean force (PMF) calculations, comparing different TBP models and two charge models of uranyl nitrate. Describing uranyl and nitrate ions with integer charges (+2 and -1, respectively) is shown to exaggerate the hydrophilicity and surface activity of the UO2(NO3)2(TBP)2 complex. With more appropriate ESP charges, mimicking charge transfer and polarization effects in the UO2(NO3)2 moiety or in the whole complex, the latter is no more surface active. This feature is confirmed by MD, PMF, and mixing-demixing simulations with or without polarization. Furthermore, with ESP charges, pulling the UO2(NO3)2 species to the TBP phase affords the formation of UO2(NO3)2(TBP)2 at the interface, followed by its energetically favorable extraction. The neutral complexes should therefore not accumulate at the interface during the extraction process, but diffuse to the oil phase. A similar feature is found for an UO2(NO3)2(Amide)2 neutral complex with fatty amide extracting ligands, calling for further simulations and experimental studies (e.g., time evolution of the nonlinear spectroscopic signature and of surface tension) on the interfacial landscape upon ion extraction.
Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Glaser, Christian; Wismuller, Axel
2013-10-01
Visualization of ex vivo human patellar cartilage matrix through phase contrast imaging X-ray computed tomography (PCI-CT) has been previously demonstrated. Such studies revealed osteoarthritis-induced changes to chondrocyte organization in the radial zone. This study investigates the application of texture analysis to characterizing such chondrocyte patterns in the presence and absence of osteoarthritic damage. Texture features derived from Minkowski functionals (MF) and gray-level co-occurrence matrices (GLCM) were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. These texture features were subsequently used in a machine learning task with support vector regression to classify ROIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver operating characteristic curve (AUC). The best classification performance was observed with the MF features perimeter (AUC: 0.94 ± 0.08) and "Euler characteristic" (AUC: 0.94 ± 0.07), and the GLCM-derived feature "Correlation" (AUC: 0.93 ± 0.07). These results suggest that such texture features can provide a detailed characterization of the chondrocyte organization in the cartilage matrix, enabling classification of cartilage as healthy or osteoarthritic with high accuracy.
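For readers unfamiliar with GLCM-derived features, a minimal scikit-image sketch of the "Correlation" and "Contrast" properties on a placeholder ROI is shown below; the Minkowski-functional features would be computed separately, and the random ROI is purely illustrative.

```python
# Sketch: GLCM texture features (correlation, contrast) on a toy grayscale ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # older versions: greycomatrix/greycoprops

rng = np.random.default_rng(1)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # placeholder cartilage ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
correlation = graycoprops(glcm, "correlation").mean()
contrast = graycoprops(glcm, "contrast").mean()
print(f"GLCM correlation={correlation:.3f}, contrast={contrast:.3f}")
```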
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, S; Jeraj, R; Galavis, P
Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation of feature values extracted in 2D and 3D was poor (R < 0.5) in 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ = 6%) than 3D (σ < 1%) extraction. Conclusion: Sensitivity and correlation of various texture features were shown to significantly differ between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.
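The variability metric described above (percent difference from the across-reconstruction mean, summarized as an overall range of variation) can be sketched in a few lines; the numerical values below are hypothetical, not from the study.

```python
# Sketch: Range_var for one texture feature measured under several reconstructions.
import numpy as np

def range_of_variation(values):
    """values: feature value for one lesion under each reconstruction setting."""
    values = np.asarray(values, dtype=float)
    pct_diff = 100.0 * (values - values.mean()) / values.mean()
    return pct_diff.max() - pct_diff.min()

feature_2d = [10.2, 11.0, 9.5, 10.8, 12.1]   # hypothetical values over 5 reconstructions
feature_3d = [10.4, 10.6, 10.3, 10.7, 10.5]
print("Range_var 2D: %.1f%%" % range_of_variation(feature_2d))
print("Range_var 3D: %.1f%%" % range_of_variation(feature_3d))
```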
Target recognition based on convolutional neural network
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian
2017-11-01
An important part of object target recognition is feature extraction, which can be divided into hand-crafted feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity gives it a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained with a layer-by-layer convolutional neural network (CNN), which can extract features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.
The relationship between 2D static features and 2D dynamic features used in gait recognition
NASA Astrophysics Data System (ADS)
Alawar, Hamad M.; Ugail, Hassan; Kamala, Mumtaz; Connah, David
2013-05-01
In most gait recognition techniques, both static and dynamic features are used to define a subject's gait signature. In this study, the existence of a relationship between static and dynamic features was investigated. The correlation coefficient was used to analyse the relationship between the features extracted from the "University of Bradford Multi-Modal Gait Database". This study includes two-dimensional dynamic and static features from 19 subjects. The dynamic features comprised Phase-Weighted Magnitudes derived from a Fourier Transform of the temporal rotational data of a subject's joints (knee, thigh, shoulder, and elbow). The results concluded that there are eleven pairs of features that are significantly correlated (p < 0.05). This result indicates the existence of a statistical relationship between static and dynamic features, which challenges the results of several similar studies. These results bear great potential for further research into the area, and would potentially contribute to the creation of a gait signature using latent data.
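A rough sketch of the analysis pipeline is given below. The phase-weighted magnitude is computed here as Fourier magnitude multiplied by its phase, one common definition in the gait literature, which may differ from the study's exact formulation; the static feature and synthetic joint-angle series are hypothetical.

```python
# Sketch: phase-weighted magnitudes of a joint-rotation series, then the Pearson
# correlation relating a static feature to a dynamic feature across subjects.
import numpy as np
from scipy.stats import pearsonr

def phase_weighted_magnitudes(angle_series, n_harmonics=5):
    spectrum = np.fft.rfft(angle_series - np.mean(angle_series))
    mags, phases = np.abs(spectrum), np.angle(spectrum)
    return mags[1:n_harmonics + 1] * phases[1:n_harmonics + 1]

rng = np.random.default_rng(2)
n_subjects = 19
static_feature = rng.normal(1.0, 0.1, n_subjects)        # e.g. a limb-length ratio (hypothetical)
dynamic_feature = np.array([
    phase_weighted_magnitudes(
        np.sin(np.linspace(0, 2 * np.pi, 100)) * rng.normal(30, 3)   # toy knee-angle cycle
        + rng.normal(0, 1, 100))[0]
    for _ in range(n_subjects)
])

r, p = pearsonr(static_feature, dynamic_feature)
print(f"r = {r:.2f}, p = {p:.3f}  (a pair would be called significant if p < 0.05)")
```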
Battle, Katrina N; Jackson, Joshua M; Witek, Małgorzata A; Hupert, Mateusz L; Hunsucker, Sally A; Armistead, Paul M; Soper, Steven A
2014-03-21
We present a novel microfluidic solid-phase extraction (μSPE) device for the affinity enrichment of biotinylated membrane proteins from whole cell lysates. The device offers features that address challenges currently associated with the extraction and purification of membrane proteins from whole cell lysates, including the ability to release the enriched membrane protein fraction from the extraction surface so that they are available for downstream processing. The extraction bed was fabricated in PMMA using hot embossing and was comprised of 3600 micropillars. Activation of the PMMA micropillars by UV/O3 treatment permitted generation of surface-confined carboxylic acid groups and the covalent attachment of NeutrAvidin onto the μSPE device surfaces, which was used to affinity select biotinylated MCF-7 membrane proteins directly from whole cell lysates. The inclusion of a disulfide linker within the biotin moiety permitted release of the isolated membrane proteins via DTT incubation. Very low levels (∼20 fmol) of membrane proteins could be isolated and recovered with ∼89% efficiency with a bed capacity of 1.7 pmol. Western blotting indicated no traces of cytosolic proteins in the membrane protein fraction as compared to significant contamination using a commercial detergent-based method. We highlight future avenues for enhanced extraction efficiency and increased dynamic range of the μSPE device using computational simulations of different micropillar geometries to guide future device designs.
Diagnosis of the three-phase induction motor using thermal imaging
NASA Astrophysics Data System (ADS)
Glowacz, Adam; Glowacz, Zygfryd
2017-03-01
Three-phase induction motors are commonly used in industry, for example in woodworking machines, blowers, pumps, conveyors, elevators, compressors, and in mining, automotive, chemical and railway applications. Diagnosis of faults is essential for proper maintenance. Faults may damage a motor, and damaged motors generate economic losses caused by breakdowns in production lines. In this paper the authors develop fault diagnostic techniques for the three-phase induction motor. The described techniques are based on the analysis of thermal images of the three-phase induction motor. The authors analyse thermal images of 3 states of the three-phase induction motor: a healthy three-phase induction motor, a three-phase induction motor with 2 broken bars, and a three-phase induction motor with a faulty squirrel-cage ring. In this paper the authors develop an original method of feature extraction from thermal images, MoASoID (Method of Areas Selection of Image Differences). This method compares many training sets together and selects the areas with the biggest changes for the recognition process. Feature vectors are obtained with the use of the mentioned MoASoID and the image histogram. Next, 3 methods of classification are used: NN (the Nearest Neighbour classifier), K-means, and BNN (the back-propagation neural network). The described fault diagnostic techniques are useful for the protection of three-phase induction motors and other types of rotating electrical machines such as DC motors, generators and synchronous motors.
The phase interrogation method for optical fiber sensor by analyzing the fork interference pattern
NASA Astrophysics Data System (ADS)
Lv, Riqing; Qiu, Liqiang; Hu, Haifeng; Meng, Lu; Zhang, Yong
2018-02-01
A phase interrogation method for optical fiber sensors is proposed based on the fork interference pattern between an orbital angular momentum beam and a plane wave. The variation of the interference pattern with the phase difference between the two light beams is investigated to realize the phase interrogation. By employing the principal component analysis method, the features of the interference pattern can be extracted. Moreover, an experimental system is designed to verify the theoretical analysis, as well as the feasibility of phase interrogation. In this work, a Mach-Zehnder interferometer was employed to convert the strain applied on the sensing fiber to the phase difference between the reference and measuring paths. This interrogation method is also applicable for the measurement of other physical parameters which can produce a phase delay in optical fiber. The performance of the system can be further improved by employing highly sensitive materials and fiber structures.
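The principal-component step can be illustrated with a toy sketch: a stack of synthetic fork-like patterns whose fringes shift with phase difference is flattened and projected onto its leading components. The pattern model below is a simplified stand-in, not the paper's optical model.

```python
# Sketch: PCA feature extraction from a stack of simulated interference patterns.
import numpy as np
from sklearn.decomposition import PCA

phases = np.linspace(0, np.pi, 20)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]

# Toy fork-like patterns whose fringe position varies with the phase difference.
images = np.array([np.cos(0.3 * xx + np.arctan2(yy - h / 2, xx - w / 2) + ph)
                   for ph in phases])
X = images.reshape(len(phases), -1)

pca = PCA(n_components=3)
features = pca.fit_transform(X)      # one low-dimensional feature vector per pattern
print(features.shape)                # (20, 3): the leading projections track the phase change
```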
Vertical Feature Mask Feature Classification Flag Extraction
Atmospheric Science Data Center
2013-03-28
This routine demonstrates extraction of the ... in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL).
2012-01-01
Background In recent years, biological event extraction has emerged as a key natural language processing task, aiming to address the information overload problem in accessing the molecular biology literature. The BioNLP shared task competitions have contributed to this recent interest considerably. The first competition (BioNLP'09) focused on extracting biological events from Medline abstracts from a narrow domain, while the theme of the latest competition (BioNLP-ST'11) was generalization and a wider range of text types, event types, and subject domains were considered. We view event extraction as a building block in larger discourse interpretation and propose a two-phase, linguistically-grounded, rule-based methodology. In the first phase, a general, underspecified semantic interpretation is composed from syntactic dependency relations in a bottom-up manner. The notion of embedding underpins this phase and it is informed by a trigger dictionary and argument identification rules. Coreference resolution is also performed at this step, allowing extraction of inter-sentential relations. The second phase is concerned with constraining the resulting semantic interpretation by shared task specifications. We evaluated our general methodology on core biological event extraction and speculation/negation tasks in three main tracks of BioNLP-ST'11 (GENIA, EPI, and ID). Results We achieved competitive results in GENIA and ID tracks, while our results in the EPI track leave room for improvement. One notable feature of our system is that its performance across abstracts and articles bodies is stable. Coreference resolution results in minor improvement in system performance. Due to our interest in discourse-level elements, such as speculation/negation and coreference, we provide a more detailed analysis of our system performance in these subtasks. Conclusions The results demonstrate the viability of a robust, linguistically-oriented methodology, which clearly distinguishes general semantic interpretation from shared task specific aspects, for biological event extraction. Our error analysis pinpoints some shortcomings, which we plan to address in future work within our incremental system development methodology. PMID:22759461
Patch-based automatic retinal vessel segmentation in global and local structural context.
Cao, Shuoying; Bharath, Anil A; Parker, Kim H; Ng, Jeffrey
2012-01-01
In this paper, we extend our published work [1] and propose an automated system to segment retinal vessel bed in digital fundus images with enough adaptability to analyze images from fluorescein angiography. This approach takes into account both the global and local context and enables both vessel segmentation and microvascular centreline extraction. These tools should allow researchers and clinicians to estimate and assess vessel diameter, capillary blood volume and microvascular topology for early stage disease detection, monitoring and treatment. Global vessel bed segmentation is achieved by combining phase-invariant orientation fields with neighbourhood pixel intensities in a patch-based feature vector for supervised learning. This approach is evaluated against benchmarks on the DRIVE database [2]. Local microvascular centrelines within Regions-of-Interest (ROIs) are segmented by linking the phase-invariant orientation measures with phase-selective local structure features. Our global and local structural segmentation can be used to assess both pathological structural alterations and microemboli occurrence in non-invasive clinical settings in a longitudinal study.
Ibrahim, Wisam; Abadeh, Mohammad Saniee
2017-05-21
Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from the amino-acid sequences to obtain better classifiers. In this paper, we have proposed six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) has been implemented to reduce the number of extracted features. The extracted feature vectors have been used with the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features have been extracted from the second stage and used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in the SCOP datasets. The experimental results show that the extracted feature vectors in the first stage could improve the performance of DELM in extracting new useful features in the second stage. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on the luminance information, in which color information is not sufficiently considered. Actually, color is part of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.
Deep Learning in Label-free Cell Classification
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram
2016-01-01
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells. PMID:26975219
Deep Learning in Label-free Cell Classification
NASA Astrophysics Data System (ADS)
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram
2016-03-01
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert
2016-09-01
The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC), using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) a computation-intensive statistical criterion (bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.
Leontidis, Georgios
2017-11-01
Human retina is a diverse and important tissue, vastly studied for various retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying the registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify the vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, extraction of features, statistical analysis and classification models. Linear mixed models are utilised for making the statistical inferences, alongside the elastic-net logistic regression, boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also have good discriminative potential. The classification systems yield promising results with the area under the curve values ranging from 0.821 to 0.968, across the four different investigated combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Grating-based tomography of human tissues
NASA Astrophysics Data System (ADS)
Müller, Bert; Schulz, Georg; Mehlin, Andrea; Herzen, Julia; Lang, Sabrina; Holme, Margaret; Zanette, Irene; Hieber, Simone; Deyhle, Hans; Beckmann, Felix; Pfeiffer, Franz; Weitkamp, Timm
2012-07-01
The development of therapies to improve our health requires a detailed knowledge on the anatomy of soft tissues from the human body down to the cellular level. Grating-based phase contrast micro computed tomography using synchrotron radiation provides a sensitivity, which allows visualizing micrometer size anatomical features in soft tissue without applying any contrast agent. We show phase contrast tomography data of human brain, tumor vessels and constricted arteries from the beamline ID 19 (ESRF) and urethral tissue from the beamline W2 (HASYLAB/DESY) with micrometer resolution. Here, we demonstrate that anatomical features can be identified within brain tissue as well known from histology. Using human urethral tissue, the application of two photon energies is compared. Tumor vessels thicker than 20 μm can be perfectly segmented. The morphology of coronary arteries can be better extracted in formalin than after paraffin embedding.
Iris recognition based on key image feature extraction.
Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y
2008-01-01
In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.
NASA Astrophysics Data System (ADS)
Markman, Adam; Carnicer, Artur; Javidi, Bahram
2017-05-01
We overview our recent work [1] on utilizing three-dimensional (3D) optical phase codes for object authentication using the random forest classifier. A simple 3D optical phase code (OPC) is generated by combining multiple diffusers and glass slides. This tag is then placed on a quick-response (QR) code, which is a barcode capable of storing information and can be scanned under non-uniform illumination conditions, rotation, and slight degradation. A coherent light source illuminates the OPC and the transmitted light is captured by a CCD to record the unique signature. Feature extraction on the signature is performed and inputted into a pre-trained random-forest classifier for authentication.
Experience improves feature extraction in Drosophila.
Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike
2007-05-09
Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature from multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies trained previously with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective MBs, one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.
Arabic OCR: toward a complete system
NASA Astrophysics Data System (ADS)
El-Bialy, Ahmed M.; Kandil, Ahmed H.; Hashish, Mohamed; Yamany, Sameh M.
1999-12-01
Latin and Chinese OCR systems have been studied extensively in the literature, yet little work has been performed on Arabic character recognition. This is due to the technical challenges found in Arabic text. Due to its cursive nature, a powerful and stable text segmentation is needed. Also, features capturing the characteristics of the rich Arabic character representation are needed to build an Arabic OCR. In this paper a novel segmentation technique which is font and size independent is introduced. This technique can segment a cursive written text line even if the line exhibits slight skew. The technique is not sensitive to the location of the centerline of the text line and can segment different font sizes and types (for different character sets) occurring on the same line. Feature extraction is considered one of the most important phases of a text reading system. Ideally, the features extracted from a character image should capture the essential characteristics of this character that are independent of the font type and size. In such an ideal case, the classifier stores a single prototype per character. However, it is practically challenging to find such an ideal set of features. In this paper, a set of features that reflect the topological aspects of Arabic characters is proposed. These proposed features, integrated with a topological matching technique, introduce an Arabic text reading system that is semi-omnifont.
Text feature extraction based on deep learning: a review.
Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan
2017-01-01
Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features; hand-designing an effective feature is a lengthy process, whereas deep learning enables new effective feature representations to be acquired from training data for new applications. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data, instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, including millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.
Feature extraction for document text using Latent Dirichlet Allocation
NASA Astrophysics Data System (ADS)
Prihatini, P. M.; Suryawan, I. K.; Mandia, IN
2018-01-01
Feature extraction is one of the stages in an information retrieval system used to extract the unique feature values of a text document. The process of feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research related to text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, through this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing the Precision, Recall and F-Measure values between Latent Dirichlet Allocation and Term Frequency-Inverse Document Frequency with K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency-Inverse Document Frequency with K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features and cluster Indonesian text better than the Term Frequency-Inverse Document Frequency with K-Means method.
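The two pipelines being compared can be sketched with scikit-learn on a toy corpus; the short Indonesian sentences and the topic/cluster counts below are placeholders, not the study's data or settings.

```python
# Sketch: LDA topic-proportion features vs. TF-IDF vectors clustered with K-Means.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = ["harga pangan naik di pasar", "tim sepak bola menang besar",
        "pasar saham turun tajam", "pemain bola cedera saat latihan"]  # placeholder text

# (1) LDA: documents -> topic-distribution feature vectors
counts = CountVectorizer().fit_transform(docs)
lda_features = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# (2) Baseline: TF-IDF features clustered with K-Means
tfidf = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

print(lda_features.round(2))   # per-document topic proportions (the LDA features)
print(labels)                  # cluster assignments from the TF-IDF + K-Means baseline
```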
Functional Brain Connectivity as a New Feature for P300 Speller.
Kabbara, Aya; Khalil, Mohamad; El-Falou, Wassim; Eid, Hassan; Hassan, Mahmoud
2016-01-01
The brain is a large-scale complex network often referred to as the "connectome". Cognitive functions and information processing are mainly based on the interactions between distant brain regions. However, most of the 'feature extraction' methods used in the context of Brain Computer Interface (BCI) ignored the possible functional relationships between different signals recorded from distinct brain areas. In this paper, the functional connectivity quantified by the phase locking value (PLV) was introduced to characterize the evoked responses (ERPs) obtained in the case of target and non-targets visual stimuli. We also tested the possibility of using the functional connectivity in the context of 'P300 speller'. The proposed approach was compared to the well-known methods proposed in the state of the art of "P300 Speller", mainly the peak picking, the area, time/frequency based features, the xDAWN spatial filtering and the stepwise linear discriminant analysis (SWLDA). The electroencephalographic (EEG) signals recorded from ten subjects were analyzed offline. The results indicated that phase synchrony offers relevant information for the classification in a P300 speller. High synchronization between the brain regions was clearly observed during target trials, although no significant synchronization was detected for a non-target trial. The results showed also that phase synchrony provides higher performance than some existing methods for letter classification in a P300 speller principally when large number of trials is available. Finally, we tested the possible combination of both approaches (classical features and phase synchrony). Our findings showed an overall improvement of the performance of the P300-speller when using Peak picking, the area and frequency based features. Similar performances were obtained compared to xDAWN and SWLDA when using large number of trials.
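The phase locking value itself has a compact definition: the magnitude of the trial-averaged phase-difference phasor between two channels, with instantaneous phases obtained from the Hilbert transform of band-pass filtered signals. A hedged sketch on synthetic "trials" (not the study's EEG data) is shown below.

```python
# Sketch: phase-locking value (PLV) between two channels across trials.
import numpy as np
from scipy.signal import hilbert

def plv(trials_a, trials_b):
    """trials_a, trials_b: arrays of shape (n_trials, n_samples), band-pass filtered."""
    phase_a = np.angle(hilbert(trials_a, axis=1))
    phase_b = np.angle(hilbert(trials_b, axis=1))
    # 1 = perfectly phase-locked across trials, 0 = no locking
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b)), axis=0))   # PLV per time sample

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 256)
target = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=(40, t.size))   # synchronized trials
nontarget = rng.normal(size=(40, t.size))                                   # unsynchronized trials

print("target-like PLV    :", plv(target, target + 0.2 * rng.normal(size=target.shape)).mean())
print("non-target-like PLV:", plv(nontarget, rng.normal(size=nontarget.shape)).mean())
```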
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
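A much-simplified stand-in for the selection and validation loop is sketched below: features are ranked by the weights of a linear SVM, the top-ranked subset is kept, and the choice is scored by AUC over bootstrap resamples. This omits the constrained subspace learning step and uses synthetic data; the dataset and the cut-off of 15 features are assumptions.

```python
# Sketch: SVM-weight-based feature selection with bootstrap AUC assessment.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

X, y = make_classification(n_samples=198, n_features=100, n_informative=10, random_state=0)

svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
top_k = np.argsort(np.abs(svm.coef_).ravel())[::-1][:15]   # keep the 15 highest-weight features

aucs = []
for b in range(50):                                        # bootstrap versions of the training set
    Xb, yb = resample(X[:, top_k], y, random_state=b)
    clf = LinearSVC(max_iter=5000).fit(Xb, yb)
    aucs.append(roc_auc_score(y, clf.decision_function(X[:, top_k])))
print("mean bootstrap AUC: %.2f" % np.mean(aucs))
```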
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we split the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems, and allows a higher level of flexibility of the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.
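The two automatic stages (eigenvalue-based point selection and motion estimation) have standard counterparts in OpenCV, sketched below with Shi-Tomasi corners and pyramidal Lucas-Kanade flow. This is not the paper's implementation; the input file name is hypothetical and the user-assisted selection and contour formation steps are omitted.

```python
# Sketch: eigenvalue-based feature point selection + tracking into the next frame.
import numpy as np
import cv2

cap = cv2.VideoCapture("input.avi")            # hypothetical input clip
ok0, frame0 = cap.read()
ok1, frame1 = cap.read()
if not (ok0 and ok1):                          # fall back to synthetic frames so the sketch runs
    frame0 = (np.random.rand(240, 320, 3) * 255).astype("uint8")
    frame1 = np.roll(frame0, 2, axis=1)

gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

# Corner selection based on the minimum eigenvalue of the local gradient matrix.
pts0 = cv2.goodFeaturesToTrack(gray0, maxCorners=200, qualityLevel=0.01,
                               minDistance=7, useHarrisDetector=False)

# Motion estimation of the selected points into the next frame.
pts1, status, err = cv2.calcOpticalFlowPyrLK(gray0, gray1, pts0, None,
                                             winSize=(21, 21), maxLevel=3)
tracked = pts1[status.ravel() == 1]
print("tracked %d of %d points" % (len(tracked), len(pts0)))
```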
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied in large scale or big data. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
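One simple way to realize the map step described above is a Hadoop Streaming mapper: each input line names one image, and the mapper emits one key/value record per image with its feature summary. The sketch below is illustrative only, uses ORB as a stand-in descriptor, and the paths are hypothetical.

```python
#!/usr/bin/env python
# Sketch of a Hadoop Streaming mapper for distributed per-image feature extraction.
import sys
import cv2

def extract_features(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return None
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    if descriptors is None:
        return None
    return descriptors.mean(axis=0)            # crude per-image summary vector

for line in sys.stdin:                          # Hadoop Streaming feeds one record per line
    image_path = line.strip()
    feat = extract_features(image_path)
    if feat is not None:
        print("%s\t%s" % (image_path, ",".join("%.4f" % v for v in feat)))
```

In a typical setup this script would be passed to the Hadoop Streaming jar via its -mapper option, with the aggregated output written back to HDFS; the exact submission command depends on the cluster configuration.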
DOE Office of Scientific and Technical Information (OSTI.GOV)
Travis S. Grimes; Peter R. Zalupski
2014-11-01
A new methodology has been developed to study the thermochemical features of the biphasic transfer reactions of trisnitrato complexes of lanthanides and americium by a mono-functional solvating ligand (tri-n-octyl phosphine oxide - TOPO). Stability constants for successive nitrato complexes (M(NO3)x(3-x) (aq), where M is Eu3+, Am3+ or Cm3+) were determined to assist in the calculation of the extraction constant, Kex, for the metal ions under study. Enthalpies of extraction (ΔHextr) for the lanthanide series (excluding Pm3+) and Am3+ by TOPO have been measured using isothermal titration calorimetry. The observed ΔHextr were found to be constant at ~29 kJ mol-1 across the series from La3+ to Er3+, with a slight decrease observed from Tm3+ to Lu3+. These heats were found to be consistent with enthalpies determined using van 't Hoff analysis of temperature-dependent extraction studies. A complete set of thermodynamic parameters (ΔG, ΔH, ΔS) was calculated for Eu(NO3)3, Am(NO3)3 and Cm(NO3)3 extraction by TOPO and for Am3+ and Cm3+ extraction by bis(2-ethylhexyl) phosphoric acid (HDEHP). A discussion comparing the energetics of these systems is offered. The measured biphasic extraction heats for the transplutonium elements, ΔHextr, presented in these studies are the first ever direct measurements offered using two-phase calorimetric techniques.
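For readers less familiar with how a complete (ΔG, ΔH, ΔS) set follows from an extraction constant and a calorimetric enthalpy, the standard relations are ΔG = -RT ln Kex and ΔS = (ΔH - ΔG)/T. The worked example below uses hypothetical values, not the paper's data.

```python
# Worked example of the standard thermodynamic relations (illustrative numbers only).
import math

R = 8.314          # J mol^-1 K^-1
T = 298.15         # K

K_ex = 1.0e4                       # hypothetical extraction constant
dG = -R * T * math.log(K_ex)       # ΔG = -RT ln K_ex
dH = -29_000.0                     # J mol^-1; hypothetical calorimetric enthalpy (sign assumed)
dS = (dH - dG) / T                 # from ΔG = ΔH - TΔS

print(f"ΔG = {dG/1000:.1f} kJ/mol, ΔH = {dH/1000:.1f} kJ/mol, ΔS = {dS:.1f} J/(mol·K)")
```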
Quantitative analysis of three-dimensional biological cells using interferometric microscopy
NASA Astrophysics Data System (ADS)
Shaked, Natan T.; Wax, Adam
2011-06-01
Live biological cells are three-dimensional microscopic objects that constantly adjust their sizes, shapes and other biophysical features. Wide-field digital interferometry (WFDI) is a holographic technique that is able to record the complex wavefront of the light which has interacted with in-vitro cells in a single camera exposure, where no exogenous contrast agents are required. However, simple quasi-three-dimensional holographic visualization of the cell phase profiles need not be the end of the process. Quantitative analysis should permit extraction of numerical parameters which are useful for cytology or medical diagnosis. Using a transmission-mode setup, the phase profile represents the multiplication between the integral refractive index and the thickness of the sample. These coupled variables may not be distinct when acquiring the phase profiles of dynamic cells. Many morphological parameters which are useful for cell biologists are based on the cell thickness profile rather than on its phase profile. We first overview methods to decouple the cell thickness and its refractive index using the WFDI-based phase profile. Then, we present a whole-cell-imaging approach which is able to extract useful numerical parameters on the cells even in cases where decoupling of cell thickness and refractive index is not possible or desired.
A modal parameter extraction procedure applicable to linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Kurdila, A. J.; Craig, R. R., Jr.
1985-01-01
Modal analysis has emerged as a valuable tool in many phases of the engineering design process. Complex vibration and acoustic problems in new designs can often be remedied through use of the method. Moreover, the technique has been used to enhance the conceptual understanding of structures by serving to verify analytical models. A new modal parameter estimation procedure is presented. The technique is applicable to linear, time-invariant systems and accommodates multiple input excitations. In order to provide a background for the derivation of the method, some modal parameter extraction procedures currently in use are described. Key features implemented in the new technique are elaborated upon.
Spectral-clustering approach to Lagrangian vortex detection.
Hadjighasem, Alireza; Karrasch, Daniel; Teramoto, Hiroshi; Haller, George
2016-06-01
One of the ubiquitous features of real-life turbulent flows is the existence and persistence of coherent vortices. Here we show that such coherent vortices can be extracted as clusters of Lagrangian trajectories. We carry out the clustering on a weighted graph, with the weights measuring pairwise distances of fluid trajectories in the extended phase space of positions and time. We then extract coherent vortices from the graph using tools from spectral graph theory. Our method locates all coherent vortices in the flow simultaneously, thereby showing high potential for automated vortex tracking. We illustrate the performance of this technique by identifying coherent Lagrangian vortices in several two- and three-dimensional flows.
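The graph construction described above can be sketched directly: trajectories become graph nodes, edge weights come from pairwise trajectory distances, and coherent sets are read off with spectral clustering. The toy trajectory model and the Gaussian similarity kernel below are assumptions for illustration, not the authors' flow data or weighting.

```python
# Sketch: spectral clustering of trajectories from a pairwise-distance graph.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(5)
n_traj, n_steps = 60, 50
centers = np.array([[0.0, 0.0], [3.0, 0.0]])            # two toy "vortex" centers
labels_true = rng.integers(0, 2, n_traj)
theta = np.linspace(0, 4 * np.pi, n_steps)
traj = np.stack([centers[l] + np.c_[np.cos(theta + rng.uniform(0, 6)),
                                    np.sin(theta + rng.uniform(0, 6))]
                 for l in labels_true])                  # shape (n_traj, n_steps, 2)

# Pairwise distance = time-averaged separation of two trajectories.
D = np.linalg.norm(traj[:, None, :, :] - traj[None, :, :, :], axis=-1).mean(axis=-1)
W = np.exp(-(D / D.mean()) ** 2)                         # similarity weights on the graph

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print("recovered cluster sizes:", np.bincount(labels))
```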
A framework for feature extraction from hospital medical data with applications in risk prediction.
Tran, Truyen; Luo, Wei; Phung, Dinh; Gupta, Sunil; Rana, Santu; Kennedy, Richard Lee; Larkins, Ann; Venkatesh, Svetha
2014-12-30
Feature engineering is a time-consuming component of predictive modeling. We propose a versatile platform to automatically extract features for risk prediction, based on a pre-defined and extensible entity schema. The extraction is independent of disease type or risk prediction task. We contrast auto-extracted features to baselines generated from the Elixhauser comorbidities. Hospital medical records were transformed to event sequences, to which filters were applied to extract feature sets capturing diversity in temporal scales and data types. The features were evaluated on a readmission prediction task, comparing with baseline feature sets generated from the Elixhauser comorbidities. The prediction model used logistic regression with elastic net regularization. Prediction horizons of 1, 2, 3, 6 and 12 months were considered for four diverse diseases: diabetes, COPD, mental disorders and pneumonia, with derivation and validation cohorts defined on non-overlapping data-collection periods. For unplanned readmissions, the auto-extracted feature set using socio-demographic information and medical records outperformed baselines derived from the socio-demographic information and Elixhauser comorbidities, over 20 settings (5 prediction horizons over 4 diseases). In particular, for 30-day prediction, the AUCs are: COPD-baseline: 0.60 (95% CI: 0.57, 0.63), auto-extracted: 0.67 (0.64, 0.70); diabetes-baseline: 0.60 (0.58, 0.63), auto-extracted: 0.67 (0.64, 0.69); mental disorders-baseline: 0.57 (0.54, 0.60), auto-extracted: 0.69 (0.64, 0.70); pneumonia-baseline: 0.61 (0.59, 0.63), auto-extracted: 0.70 (0.67, 0.72). The advantages of auto-extracting standard features from complex medical records, in a disease- and task-agnostic manner, were demonstrated. Auto-extracted features have good predictive power over multiple time horizons. Such feature sets have potential to form the foundation of complex automated analytic tasks.
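A minimal stand-in for the prediction model named above is an elastic-net-regularized logistic regression scored by AUC; the synthetic "event count" features, class imbalance, and regularization settings below are assumptions for illustration only.

```python
# Sketch: elastic-net logistic regression for a readmission-style task (toy data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical sparse event features (e.g., diagnosis/procedure indicators in a horizon window).
X, y = make_classification(n_samples=2000, n_features=300, n_informative=25,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=0.1, max_iter=5000)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("30-day readmission AUC (toy data): %.2f" % auc)
```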
Comparative analysis of feature extraction methods in satellite imagery
NASA Astrophysics Data System (ADS)
Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad
2017-10-01
Feature extraction techniques are extensively used in satellite imagery and are receiving considerable attention in remote sensing applications. State-of-the-art feature extraction methods are chosen according to the categories and structures of the objects to be detected. Based on the distinctive computations of each feature extraction method, different types of images are selected to evaluate the performance of the methods, which include binary robust invariant scalable keypoints (BRISK), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), features from accelerated segment test (FAST), histogram of oriented gradients, and local binary patterns. Total computational time is calculated to evaluate the speed of each feature extraction method. The extracted features are counted in shadow regions and preprocessed shadow regions to compare the behavior of each method. We have studied the combination of SURF with FAST and with BRISK individually and found very promising results, with an increased number of features and less computational time. Finally, feature matching is compared across all methods.
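For readers who want to reproduce this kind of comparison, the sketch below times a few OpenCV detectors on a single grayscale image and reports keypoint counts; it is illustrative only (SURF requires an opencv-contrib build, so SIFT and ORB stand in here, and the image file name is hypothetical):

```python
import time
import cv2

def compare_detectors(gray):
    """Report keypoint count and runtime for several OpenCV detectors."""
    detectors = {
        "FAST": cv2.FastFeatureDetector_create(),
        "BRISK": cv2.BRISK_create(),
        "ORB": cv2.ORB_create(nfeatures=5000),
        "SIFT": cv2.SIFT_create(),
    }
    results = {}
    for name, det in detectors.items():
        t0 = time.perf_counter()
        keypoints = det.detect(gray, None)
        results[name] = (len(keypoints), time.perf_counter() - t0)
    return results

# Usage (hypothetical file name):
# results = compare_detectors(cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE))
```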
1993-01-01
Xenopus egg extracts prepared before and after egg activation retain M- and S-phase specific activity, respectively. Staurosporine, a potent inhibitor of protein kinase, converted M-phase extracts into interphase-like extracts that were capable of forming nuclei upon the addition of sperm DNA. The nuclei formed in the staurosporine-treated M-phase extract were incapable of replicating DNA, and they were unable to initiate replication upon the addition of S-phase extracts. Furthermore, replication was inhibited when the staurosporine-treated M-phase extract was added in excess to the staurosporine-treated S-phase extract before the addition of DNA. The membrane-depleted S-phase extract supported neither nuclear formation nor replication; however, preincubation of sperm DNA with these extracts allowed them to form replication-competent nuclei upon the addition of excess staurosporine-treated M-phase extract. These results demonstrate that positive factors in the S-phase extracts determined the initiation of DNA replication before nuclear formation, although these factors were unable to initiate replication after nuclear formation. PMID:8253833
Whitney, G. A.; Mansour, J. M.; Dennis, J. E.
2015-01-01
The mechanical loading environment encountered by articular cartilage in situ makes frictional-shear testing an invaluable technique for assessing engineered cartilage. Despite the important information that is gained from this testing, it remains under-utilized, especially for determining damage behavior. Currently, extensive visual inspection is required to assess damage; this is cumbersome and subjective. Tools to simplify, automate, and remove subjectivity from the analysis may increase the accessibility and usefulness of frictional-shear testing as an evaluation method. The objective of this study was to determine if the friction signal could be used to detect damage that occurred during the testing. This study proceeded in two phases: first, a simplified model of biphasic lubrication that does not require knowledge of interstitial fluid pressure was developed. In the second phase, frictional-shear tests were performed on 74 cartilage samples, and the simplified model was used to extract characteristic features from the friction signals. Using support vector machine classifiers, the extracted features were able to detect damage with a median accuracy of approximately 90%. The accuracy remained high even in samples with minimal damage. In conclusion, the friction signal acquired during frictional-shear testing can be used to detect resultant damage to a high level of accuracy. PMID:25691395
Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang
2016-01-01
Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized among 200 samples overall. This indicates that the proposed tomato recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
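A rough illustration of the pipeline described above; the color-space conversions and Otsu thresholding are standard OpenCV/scikit-image calls, while the wavelet fusion rule (average the approximations, keep the larger-magnitude detail coefficients), the 'db2' wavelet, and Otsu as the "adaptive threshold" are assumptions, not the paper's exact choices:

```python
import cv2
import numpy as np
import pywt
from skimage.color import rgb2yiq

def segment_tomato(bgr):
    """Fuse the a* (L*a*b*) and I (YIQ) feature images in the wavelet
    domain, then threshold and clean up the mask."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    a_chan = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)[:, :, 1].astype(np.float32)
    i_chan = rgb2yiq(rgb)[:, :, 1].astype(np.float32) * 255.0

    def keep_larger(x, y):          # assumed detail-fusion rule
        return np.where(np.abs(x) >= np.abs(y), x, y)

    ca1, (ch1, cv1, cd1) = pywt.dwt2(a_chan, "db2")
    ca2, (ch2, cv2_, cd2) = pywt.dwt2(i_chan, "db2")
    fused = pywt.idwt2(((ca1 + ca2) / 2.0,
                        (keep_larger(ch1, ch2),
                         keep_larger(cv1, cv2_),
                         keep_larger(cd1, cd2))), "db2")

    fused8 = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(fused8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small noise
```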
Automatic extraction of planetary image features
NASA Technical Reports Server (NTRS)
LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)
2013-01-01
A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
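A small sketch of watershed-on-gradient segmentation in scikit-image; the Sobel gradient magnitude stands in for the patent's "Canny gradient", and the marker-selection rule (local gradient minima) is an assumption:

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_rocks(image, min_distance=7):
    """Watershed segmentation driven by the gradient magnitude, so that
    closed contours in the gradient become individual regions."""
    gradient = sobel(image.astype(float))
    # Markers: local minima of the gradient (region interiors).
    minima = peak_local_max(-gradient, min_distance=min_distance)
    markers = np.zeros(gradient.shape, dtype=int)
    markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)
    return watershed(gradient, markers)
```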
Cell classification using big data analytics plus time stretch imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Jalali, Bahram; Chen, Claire L.; Mahjoubfar, Ata
2016-09-01
We show that blood cells can be classified with high accuracy and high throughput by combining machine learning with time stretch quantitative phase imaging. Our diagnostic system captures quantitative phase images in a flow microscope at millions of frames per second and extracts multiple biophysical features from individual cells, including morphological characteristics, light absorption and scattering parameters, and protein concentration. These parameters form a hyperdimensional feature space in which supervised learning and cell classification are performed. We show binary classification of T-cells against colon cancer cells, as well as classification of algal cell strains with high and low lipid content. The label-free screening averts the negative impact of staining reagents on cellular viability or cell signaling. The combination of time stretch machine vision and learning offers unprecedented cell analysis capabilities for cancer diagnostics, drug development and liquid biopsy for personalized genomics.
Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Glaser, Christian; Wismüller, Axel
2014-02-01
Phase-contrast computed tomography (PCI-CT) has shown tremendous potential as an imaging modality for visualizing human cartilage with high spatial resolution. Previous studies have demonstrated the ability of PCI-CT to visualize (1) structural details of the human patellar cartilage matrix and (2) changes to chondrocyte organization induced by osteoarthritis. This study investigates the use of high-dimensional geometric features in characterizing such chondrocyte patterns in the presence or absence of osteoarthritic damage. Geometrical features derived from the scaling index method (SIM) and statistical features derived from gray-level co-occurrence matrices were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. These features were subsequently used in a machine learning task with support vector regression to classify ROIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic curve (AUC). SIM-derived geometrical features exhibited the best classification performance (AUC, 0.95 ± 0.06) and were most robust to changes in ROI size. These results suggest that such geometrical features can provide a detailed characterization of the chondrocyte organization in the cartilage matrix in an automated and non-subjective manner, while also enabling classification of cartilage as healthy or osteoarthritic with high accuracy. Such features could potentially serve as imaging markers for evaluating osteoarthritis progression and its response to different therapeutic intervention strategies.
NASA Astrophysics Data System (ADS)
Hussnain, Zille; Oude Elberink, Sander; Vosselman, George
2016-06-01
In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences reaches pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
Hsieh, Fushing; Hsueh, Chih-Hsin; Heitkamp, Constantin; Matthews, Mark
2016-01-01
Multiple datasets of two consecutive vintages of replicated grape and wines from six different deficit irrigation regimes are characterized and compared. The process consists of four temporal-ordered signature phases: harvest field data, juice composition, wine composition before bottling and bottled wine. A new computing paradigm and an integrative inferential platform are developed for discovering phase-to-phase pattern geometries for such characterization and comparison purposes. Each phase is manifested by a distinct set of features, which are measurable upon phase-specific entities subject to the common set of irrigation regimes. Throughout the four phases, this compilation of data from irrigation regimes with subsamples is termed a space of media-nodes, on which measurements of phase-specific features were recoded. All of these collectively constitute a bipartite network of data, which is then normalized and binary coded. For these serial bipartite networks, we first quantify patterns that characterize individual phases by means of a new computing paradigm called “Data Mechanics”. This computational technique extracts a coupling geometry which captures and reveals interacting dependence among and between media-nodes and feature-nodes in forms of hierarchical block sub-matrices. As one of the principal discoveries, the holistic year-factor persistently surfaces as the most inferential factor in classifying all media-nodes throughout all phases. This could be deemed either surprising in its over-arching dominance or obvious based on popular belief. We formulate and test pattern-based hypotheses that confirm such fundamental patterns. We also attempt to elucidate the driving force underlying the phase-evolution in winemaking via a newly developed partial coupling geometry, which is designed to integrate two coupling geometries. Such partial coupling geometries are confirmed to bear causal and predictive implications. All pattern inferences are performed with respect to a profile of energy distributions sampled from network bootstrapping ensembles conforming to block-structures specified by corresponding hypotheses. PMID:27508416
Dong, Sheying; Huang, Guiqi; Su, Meiling; Huang, Tinglin
2015-10-14
We developed two simple, fast, and environmentally friendly methods using carbon aerogel (CA) and magnetic CA (mCA) materials as sorbents for micro-solid-phase extraction (μ-SPE) and magnetic solid-phase extraction (MSPE) techniques. The material performance, including the adsorption isotherm, adsorption kinetics, and specific surface area, was characterized by N2 adsorption-desorption isotherm measurements, ultraviolet and visible (UV-vis) spectrophotometry, scanning electron microscopy (SEM), and high resolution transmission electron microscopy (HR-TEM). The experimental results proved that the heterogeneities of CA and mCA were well modeled with the Freundlich isotherm model, and the sorption process followed the pseudo-second-order rate equation. Moreover, plant growth regulators (PGRs) such as kinetin (6-KT), 6-benzylaminopurine (6-BA), 2,4-dichlorophenoxyacetic acid (2,4-D), and uniconazole (UN) in a reservoir raw water sample were selected to evaluate the applicability of the proposed μ-SPE and MSPE techniques, with analysis by high performance liquid chromatography (HPLC). The experimental conditions of the two methods, such as the amount of sorbent, extraction time, pH, salt concentration, and desorption conditions, were studied. Under the optimized conditions, the two extraction methods provided high recoveries (89-103%), low limits of detection (LODs) (0.01-0.2 μg L(-1)), and satisfactory precision (relative standard deviation, RSD, 1.7-5.1%, n=3). This work demonstrates the feasibility and the potential of CA and mCA materials as sorbents for μ-SPE and MSPE techniques. It could also serve as a basis for the future development of other functional CAs in pretreatment technology and make them valuable for the analysis of pollutants in environmental applications.
Phases of New Physics in the BAO Spectrum
NASA Astrophysics Data System (ADS)
Baumann, Daniel; Green, Daniel; Zaldarriaga, Matias
2017-11-01
We show that the phase of the spectrum of baryon acoustic oscillations (BAO) is immune to the effects of nonlinear evolution. This suggests that any new physics that contributes to the initial phase of the BAO spectrum, such as extra light species in the early universe, can be extracted reliably at late times. We provide three arguments in support of our claim: first, we point out that a phase shift of the BAO spectrum maps to a characteristic sign change in the real space correlation function and that this feature cannot be generated or modified by nonlinear dynamics. Second, we confirm this intuition through an explicit computation, valid to all orders in cosmological perturbation theory. Finally, we provide a nonperturbative argument using general analytic properties of the linear response to the initial oscillations. Our result motivates measuring the phase of the BAO spectrum as a robust probe of new physics.
ECG Identification System Using Neural Network with Global and Local Features
ERIC Educational Resources Information Center
Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles
2016-01-01
This paper proposes a human identification system via extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on a global shape feature and a local statistical feature are used to extract ECG signals. The global shape feature represents the outline information of the ECG signal, and the local statistical feature extracts the…
Zhang, Yi; Li, Peiyang; Zhu, Xuyang; Su, Steven W; Guo, Qing; Xu, Peng; Yao, Dezhong
2017-01-01
The EMG signal indicates the electrophysiological response during activities of daily living, particularly during lower-limb knee exercises. Literature reports have shown numerous benefits of wavelet analysis in EMG feature extraction for pattern recognition. However, its application to typical knee exercises when using only a single EMG channel is limited. In this study, three types of knee exercises, namely flexion of the leg up (standing), hip extension from a sitting position (sitting), and gait (walking), are investigated in 14 healthy untrained subjects, while EMG signals from the muscle group of the vastus medialis and a goniometer on the knee joint of the monitored leg are synchronously recorded. Four types of lower-limb motions, including standing, sitting, stance phase of walking, and swing phase of walking, are segmented. A Wavelet Transform (WT) based Singular Value Decomposition (SVD) approach is proposed for the classification of the four lower-limb motions using a single-channel EMG signal from the muscle group of the vastus medialis. Based on lower-limb motions from all subjects, the combination of five-level wavelet decomposition and SVD is used to form the feature vector. A Support Vector Machine (SVM) is then configured to build a multiple-subject classifier, whose subject-independent accuracy is reported across all subjects for the classification of the four types of lower-limb motions. In order to effectively indicate the classification performance, EMG features from the time domain (e.g., Mean Absolute Value (MAV), Root-Mean-Square (RMS), integrated EMG (iEMG), Zero Crossing (ZC)) and the frequency domain (e.g., Mean Frequency (MNF) and Median Frequency (MDF)) are also used to classify lower-limb motions. Five-fold cross-validation is performed and repeated fifty times in order to acquire a robust subject-independent accuracy. Results show that the proposed WT-based SVD approach achieves a classification accuracy of 91.85% ± 0.88%, which outperforms the other feature models.
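One plausible reading of the WT+SVD feature construction, sketched with PyWavelets and scikit-learn; the sub-band stacking, zero-padding, and the RBF-SVM settings are assumptions rather than the study's exact configuration:

```python
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wt_svd_features(emg, wavelet="db4", level=5):
    """Stack the six sub-bands of a five-level decomposition into a
    zero-padded matrix and return its singular values as the feature vector."""
    coeffs = pywt.wavedec(emg, wavelet, level=level)
    width = max(len(c) for c in coeffs)
    mat = np.zeros((len(coeffs), width))
    for i, c in enumerate(coeffs):
        mat[i, :len(c)] = c
    return np.linalg.svd(mat, compute_uv=False)   # length = level + 1

# Multiple-subject classifier over the segmented motions (labels 0..3):
# X = np.vstack([wt_svd_features(seg) for seg in segments]); y = motion_labels
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X, y)  # evaluate with repeated five-fold cross-validation
```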
Deep Learning in Label-free Cell Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. In conclusion, this system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
Deep Learning in Label-free Cell Classification
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; ...
2016-03-15
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. In conclusion, this system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P
2018-01-01
The advancement in medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study aims to estimate the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive effects of anaesthetic drugs is the electroencephalogram signal. The information available in electroencephalogram signals during anaesthesia is obtained by extracting relative wave energy features from the anaesthetic electroencephalogram signals. The discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and the relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find out the degree of importance of the different electroencephalogram frequency bands associated with the different anaesthetic phases: awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate between awake, light anaesthesia, moderate anaesthesia and deep anaesthesia. A novel depth of anaesthesia index is generated by implementing an adaptive neuro-fuzzy inference system based fuzzy c-means clustering algorithm which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
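The relative wave energy computation itself is straightforward; a sketch with PyWavelets follows (the 'db4' wavelet is an assumption, and the resulting per-band ratios would then feed the Kruskal-Wallis test and the neuro-fuzzy index described above):

```python
import numpy as np
import pywt

def relative_wave_energy(eeg_epoch, wavelet="db4", level=4):
    """Energy of each sub-band (approximation + 4 details) divided by the
    total energy; the result sums to 1, one value per frequency band."""
    coeffs = pywt.wavedec(eeg_epoch, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()
```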
Face antispoofing based on frame difference and multilevel representation
NASA Astrophysics Data System (ADS)
Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad
2017-07-01
Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a print photo or replay attack) in front of the camera instead of the intruder's genuine face. As a result, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of individual frames. We also used a multilevel representation that divides the frame difference into multiple blocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) have then been applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms the other state-of-the-art methods in different media and quality metrics.
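A simplified sketch of the frame-difference, multilevel-block texture idea using only the LBP descriptor from scikit-image; the 1x1/2x2/4x4 block grid and the uniform-LBP parameters are assumptions, and LPQ and BSIF would be handled analogously before Fisher-score ranking and an SVM:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multiblock_lbp(prev_gray, curr_gray, levels=(1, 2, 4), P=8, R=1):
    """Uniform-LBP histograms of the frame difference, computed on a 1x1,
    2x2 and 4x4 block grid and concatenated into one descriptor."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int)).astype(np.uint8)
    lbp = local_binary_pattern(diff, P, R, method="uniform")
    n_bins = P + 2                      # number of uniform-LBP codes
    h, w = lbp.shape
    feats = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                block = lbp[i * h // n:(i + 1) * h // n,
                            j * w // n:(j + 1) * w // n]
                hist, _ = np.histogram(block, bins=n_bins,
                                       range=(0, n_bins), density=True)
                feats.append(hist)
    return np.concatenate(feats)
```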
Universal Scaling and Critical Exponents of the Anisotropic Quantum Rabi Model
NASA Astrophysics Data System (ADS)
Liu, Maoxin; Chesi, Stefano; Ying, Zu-Jian; Chen, Xiaosong; Luo, Hong-Gang; Lin, Hai-Qing
2017-12-01
We investigate the quantum phase transition of the anisotropic quantum Rabi model, in which the rotating and counterrotating terms are allowed to have different coupling strengths. The model interpolates between two known limits with distinct universal properties. Through a combination of analytic and numerical approaches, we extract the phase diagram, scaling functions, and critical exponents, which determine the universality class at finite anisotropy (identical to the isotropic limit). We also reveal other interesting features, including a superradiance-induced freezing of the effective mass and discontinuous scaling functions in the Jaynes-Cummings limit. Our findings are extended to the few-body quantum phase transitions with N >1 spins, where we expose the same effective parameters, scaling properties, and phase diagram. Thus, a stronger form of universality is established, valid from N =1 up to the thermodynamic limit.
Probing the A1 to L10 transformation in FeCuPt using the first order reversal curve method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Dustin A.; Liu, Kai; Liao, Jung-Wei
2014-08-01
The A1-L10 phase transformation has been investigated in (001) FeCuPt thin films prepared by atomic-scale multilayer sputtering and rapid thermal annealing (RTA). Traditional x-ray diffraction is not always applicable in generating a true order parameter, due to non-ideal crystallinity of the A1 phase. Using the first-order reversal curve (FORC) method, the A1 and L10 phases are deconvoluted into two distinct features in the FORC distribution, whose relative intensities change with the RTA temperature. The L10 ordering takes place via a nucleation-and-growth mode. A magnetization-based phase fraction is extracted, providing a quantitative measure of the L10 phase homogeneity.
Rehman, Zia Ur; Idris, Adnan; Khan, Asifullah
2018-06-01
Protein-Protein Interactions (PPI) play a vital role in cellular processes and are formed by thousands of interactions among proteins. Advancements in proteomics technologies have resulted in huge PPI datasets that need to be systematically analyzed. Protein complexes are the locally dense regions in PPI networks, which play an important role in metabolic pathways and gene regulation. In this work, a novel two-phase protein complex detection and grouping mechanism is proposed. In the first phase, topological and biological features are extracted for each complex, and prediction performance is investigated using a bagging-based ensemble classifier (PCD-BEns). Performance evaluation through cross-validation shows improvement in comparison to the CDIP, MCode, CFinder and PLSMC methods. The second phase employs Multi-Dimensional Scaling (MDS) to group known complexes by exploring inter-complex relations. It is experimentally observed that the combination of topological and biological features in the proposed approach greatly enhances prediction performance for protein complex detection, which may help to understand various biological processes, while the application of MDS-based exploration may assist in grouping potentially similar complexes. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-04-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract the intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve the running state identification. The effectiveness of the proposed method is verified by a running state identification case in a gearbox, and the results confirm the improved accuracy of the running state identification.
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Xiaojia; Mao Qirong; Zhan Yongzhao
There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist. Furthermore, the recognition results are unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on a contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features using the contribution analysis algorithm of the NN. Cluster analysis is applied to assess the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method can adaptively extract feature regions from the blocks segmented by SLIC, extracting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis
NASA Astrophysics Data System (ADS)
Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui
2015-07-01
Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which correlate poorly with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain rich pathological information and are regarded as the first indications of pathological changes of the heart valves. Applying the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals using the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
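A sketch of a DWT-plus-Shannon-envelope computation; which detail sub-bands to keep, the 'db6' wavelet, and the smoothing window are assumptions rather than the authors' settings:

```python
import numpy as np
import pywt

def shannon_envelope(hs, wavelet="db6", level=4, keep=(2, 3), win=40):
    """Reconstruct an assumed murmur band from selected detail sub-bands,
    then compute and smooth the Shannon energy -x^2 log(x^2)."""
    coeffs = pywt.wavedec(hs, wavelet, level=level)
    kept = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    band = pywt.waverec(kept, wavelet)[:len(hs)]
    x = band / (np.max(np.abs(band)) + 1e-12)
    energy = -x ** 2 * np.log(x ** 2 + 1e-12)
    return np.convolve(energy, np.ones(win) / win, mode="same")
```

Morphological descriptors of this envelope (for example peak count, width, and area over murmur intervals) would then serve as candidate features, in the spirit of the three envelope-morphological features mentioned above.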
Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.
2017-01-01
The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill-posed inverse problem, and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which can lead to erroneous estimates in the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM including both magnitude and susceptibility based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1 norm constraint and a voxel fidelity based l2 norm constraint, which allows both the structural edges and tiny features to be recovered while reducing noise and artifacts. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating the magnitude prior. In the S-step, the susceptibility map is fitted in the spatial domain using weighted constraints derived from the initial susceptibility map from the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed in multiple head positions, with the supine position closest to the gold-standard COSMOS result. PMID:27019480
NASA Astrophysics Data System (ADS)
Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab
2017-11-01
Palmprint recognition systems are dependent on feature extraction. A feature extraction method using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradients and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier before feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
Uniform competency-based local feature extraction for remote sensing images
NASA Astrophysics Data System (ADS)
Sedaghat, Amin; Mohammadi, Nazila
2018-01-01
Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
Choleva, Tatiana G; Kappi, Foteini A; Tsogas, George Z; Vlessidis, Athanasios G; Giokas, Dimosthenis L
2016-05-01
This work describes a new method for the extraction and determination of gold nanoparticles in environmental samples by means of in-situ suspended aggregate microextraction and electrothermal atomic absorption spectrometry. The method relies on the in-situ formation of a supramolecular aggregate phase through ion-association between a cationic surfactant and a benzene sulfonic acid derivative. Gold nanoparticles are physically entrapped into the aggregate phase, which is separated from the bulk aqueous solution by vacuum filtration on the surface of a cellulose filter in the form of a thin film. The film is removed from the filter surface and is dissociated into an acidified methanolic solution which is used for analysis. Under the optimized experimental conditions, gold nanoparticles can be efficiently extracted from water samples with recovery rates between 81.0-93.3%, precision of 5.4-12.0% and detection limits as low as 75 femtomol L(-1) using only 20 mL of sample volume. The satisfactory analytical features of the method, along with its simplicity, indicate the efficiency of this new approach to adequately collect and extract gold nanoparticle species from water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Li, Jing; Hong, Wenxue
2014-12-01
Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method.
NASA Astrophysics Data System (ADS)
Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry
2017-08-01
This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages, namely feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction and ANMBP is used for classification. The result of feature extraction is a set of feature vectors. Feature reduction was tested with 100 energy values per feature and with 10 energy values per feature. The classes of brain images are normal, Alzheimer, glioma, and carcinoma. Based on simulation results, 10 energy values per feature can be used to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for classification of brain cancer.
Fundamental Chemical Kinetic And Thermodynamic Data For Purex Process Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, R.J.; Fox, O.D.; Sarsfield, M.J.
2007-07-01
To support either the continued operations of current reprocessing plants or the development of future fuel processing using hydrometallurgical processes, such as Advanced Purex or UREX type flowsheets, the accurate simulation of Purex solvent extraction is required. In recent years we have developed advanced process modeling capabilities that utilize modern software platforms such as Aspen Custom Modeler and can be run in steady state and dynamic simulations. However, such advanced models of the Purex process require a wide range of fundamental data, including all relevant basic chemical kinetic and thermodynamic data for the major species present in the process. This paper summarizes some of these recent process chemistry studies that underpin our simulation, design and testing of Purex solvent extraction flowsheets. Whilst much kinetic data for actinide redox reactions in nitric acid exists in the literature, the data on reactions in the diluted TBP solvent phase is much rarer. This inhibits the accurate modeling of the Purex process, particularly when species show a significant extractability into the solvent phase or when cycling between solvent and aqueous phases occurs, for example in the reductive stripping of Pu(IV) by ferrous sulfamate in the Magnox reprocessing plant. To support current oxide reprocessing, we have investigated a range of solvent phase reactions: U(IV)+HNO3; U(IV)+HNO2; U(IV)+HNO3 (Pu catalysis); U(IV)+HNO3 (Tc catalysis); U(IV)+Np(VI); U(IV)+Np(V); Np(IV)+HNO3; and Np(V)+Np(V). Rate equations have been determined for all these reactions, and kinetic rate constants and activation energies are now available. Specific features of these reactions in the TBP phase include the roles of water and hydrolyzed intermediates in the reaction mechanisms. In reactions involving Np(V), cation-cation complex formation, which is much more favourable in TBP than in HNO3, also occurs and complicates the redox chemistry. Whilst some features of the redox chemistry in TBP appear similar to the corresponding reactions in aqueous HNO3, there are notable differences in rates, the forms of the rate equations and mechanisms. Secondly, to underpin the development of advanced single cycle flowsheets using the complexant acetohydroxamic acid (AHA), we have also characterised in some detail its redox chemistry and solvent extraction behaviour with both Np and Pu ions. We find that simple hydroxamic acids are remarkably rapid reducing agents for Np(VI). They also reduce Pu(VI) and cause a much slower reduction of Pu(IV) through a complex mechanism involving acid hydrolysis of the ligand. AHA is a strong hydrophilic and selective complexant for the tetravalent actinide ions, as evidenced by stability constant and solvent extraction data for An(IV), M(III) and U(VI) ions. This has allowed the successful design of U/Pu+Np separation flowsheets suitable for advanced fuel cycles. (authors)
Sample-space-based feature extraction and class preserving projection for gene expression data.
Wang, Wenjun
2013-01-01
In order to overcome the problems of high computational complexity and serious matrix singularity for feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) in high-dimensional data, sample-space-based feature extraction is presented, which transforms the computation procedure of feature extraction from gene space to sample space by representing the optimal transformation vector with a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a new method for discriminant feature extraction proposed here, and the experimental results on gene expression data demonstrate the effectiveness of the method.
Azzouz, Abdelmonaim; Jurado-Sánchez, Beatriz; Souhail, Badredine; Ballesteros, Evaristo
2011-05-11
This paper reports a systematic approach to the development of a method that combines continuous solid-phase extraction and gas chromatography-mass spectrometry for the simultaneous determination of 20 pharmacologically active substances including antibacterials (chloramphenicol, florfenicol, pyrimethamine, thiamphenicol), nonsteroidal anti-inflammatories (diclofenac, flunixin, ibuprofen, ketoprofen, naproxen, mefenamic acid, niflumic acid, phenylbutazone), an antiseptic (triclosan), an antiepileptic (carbamazepine), a lipid regulator (clofibric acid), β-blockers (metoprolol, propranolol), and hormones (17α-ethinylestradiol, estrone, 17β-estradiol) in milk samples. The sample preparation procedure involves deproteination of the milk, followed by sample enrichment and cleanup by continuous solid-phase extraction. The proposed method provides a linear response over the range of 0.6-5000 ng/kg and features limits of detection from 0.2 to 1.2 ng/kg depending on the particular analyte. The method was successfully applied to the determination of pharmacologically active substance residues in food samples including whole, raw, half-skim, skim, and powdered milk from different sources (cow, goat, and human breast).
Feng, Zhichao; Rong, Pengfei; Cao, Peng; Zhou, Qingyu; Zhu, Wenwei; Yan, Zhimin; Liu, Qianyun; Wang, Wei
2018-04-01
The aim was to evaluate the diagnostic performance of machine-learning based quantitative texture analysis of CT images to differentiate small (≤ 4 cm) angiomyolipoma without visible fat (AMLwvf) from renal cell carcinoma (RCC). This single-institutional retrospective study included 58 patients with pathologically proven small renal masses (17 in the AMLwvf group and 41 in the RCC group). Texture features were extracted from the largest possible tumorous regions of interest (ROIs) by manual segmentation in preoperative three-phase CT images. Interobserver reliability and the Mann-Whitney U test were applied to select features preliminarily. Then support vector machine with recursive feature elimination (SVM-RFE) and the synthetic minority oversampling technique (SMOTE) were adopted to establish discriminative classifiers, and the performance of the classifiers was assessed. Of the 42 extracted features, 16 candidate features showed significant intergroup differences (P < 0.05) and had good interobserver agreement. An optimal feature subset including 11 features was further selected by the SVM-RFE method. The SVM-RFE+SMOTE classifier achieved the best performance in discriminating between small AMLwvf and RCC, with the highest accuracy, sensitivity, specificity and AUC of 93.9%, 87.8%, 100% and 0.955, respectively. Machine learning analysis of CT texture features can facilitate the accurate differentiation of small AMLwvf from RCC. • Although conventional CT is useful for diagnosis of SRMs, it has limitations. • Machine-learning based CT texture analysis facilitates differentiation of small AMLwvf from RCC. • The highest accuracy of the SVM-RFE+SMOTE classifier reached 93.9%. • Texture analysis combined with machine-learning methods might spare unnecessary surgery for AMLwvf.
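A compact sketch of an SVM-RFE plus SMOTE pipeline with scikit-learn and imbalanced-learn; the variable names, the 5-fold evaluation, and the linear kernel inside RFE are assumptions (the study reports selecting 11 features):

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: texture features per lesion ROI, y: 0 = AMLwvf, 1 = RCC (illustrative names)
clf = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),                        # rebalance 17 vs 41
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=11)),
    ("svm", SVC(kernel="linear", probability=True)),
])
# auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

Placing SMOTE inside the imbalanced-learn pipeline ensures oversampling is applied only to the training folds during cross-validation.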
Low complexity feature extraction for classification of harmonic signals
NASA Astrophysics Data System (ADS)
William, Peter E.
In this dissertation, feature extraction algorithms have been developed for extraction of characteristic features from harmonic signals. The common theme for all developed algorithms is the simplicity in generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the duration between successive zero-crossing intervals. The second algorithm estimates the harmonics' amplitudes of the harmonic structure employing a simplified least squares method without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that utilizes a multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines) with comparison to spectral features shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both the detection probabilities and false alarm rate.
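The zero-crossing-interval idea from the first algorithm is simple to sketch; the fixed feature-vector length is an assumption:

```python
import numpy as np

def zero_crossing_intervals(x, n_intervals=16):
    """Durations (in samples) between successive zero crossings, padded or
    truncated to a fixed-length feature vector."""
    signs = np.signbit(x)
    crossings = np.flatnonzero(signs[1:] != signs[:-1])
    intervals = np.diff(crossings)
    feat = np.zeros(n_intervals)
    feat[:min(n_intervals, len(intervals))] = intervals[:n_intervals]
    return feat
```

The resulting interval vector would then be fed to a feedforward neural network classifier, as described above.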
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Lynn L.; Trease, Harold E.; Fowler, John
2007-03-15
One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional, therefore the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features in an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.
Research on oral test modeling based on multi-feature fusion
NASA Astrophysics Data System (ADS)
Shi, Yuliang; Tao, Yiyue; Lei, Jun
2018-04-01
In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The strength of the PCNN in image segmentation and related processing is exploited to process the speech spectrogram and extract features, and a new method combining speech signal processing and image processing is explored. In addition to the spectrogram features, MFCCs are extracted to establish spectral features, which are fused with the spectrogram features to further improve the accuracy of spoken-language recognition. Since the resulting input features are relatively complex and discriminative, a Support Vector Machine (SVM) is used to construct the classifier, and the extracted test voice features are then compared with standard voice features to assess how standard the spoken language is. Experiments show that the method of extracting features from spectrograms using the PCNN is feasible, and that the fusion of image features and spectral features can improve the detection accuracy.
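A sketch of the MFCC side of the fusion and the SVM classifier, using librosa and scikit-learn; the PCNN spectrogram features are represented only by a placeholder argument, and the sampling rate and per-utterance statistics are assumptions:

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_stats(path, n_mfcc=13, sr=16000):
    """Utterance-level MFCC statistics (mean and std of each coefficient)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def fused_features(path, pcnn_features):
    """Concatenate spectrogram-derived (PCNN) features with MFCC statistics;
    pcnn_features is a placeholder for the output of the PCNN stage."""
    return np.concatenate([pcnn_features, mfcc_stats(path)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(np.vstack(train_vectors), train_labels)
```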
Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification.
Wen, Zaidao; Hou, Biao; Jiao, Licheng
2017-05-03
The linear synthesis model based dictionary learning framework has achieved remarkable performance in image classification over the last decade. Behaving as a generative feature model, however, it suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM) with which a unique feature vector can be extracted much more efficiently. Additionally, we show that NACM is capable of simultaneously learning the task-adapted feature transformation and regularization, encoding our preferences, domain prior knowledge and task-oriented supervised information into the features. The proposed NACM is applied to the classification task as a discriminative feature model and yields a novel discriminative nonlinear analysis operator learning framework (DNAOL). Theoretical analysis and experimental results demonstrate that DNAOL not only achieves better or at least competitive classification accuracies compared with state-of-the-art algorithms but also dramatically reduces the time complexity of both the training and testing phases.
Audio feature extraction using probability distribution function
NASA Astrophysics Data System (ADS)
Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.
2015-05-01
Voice recognition has been one of the popular applications in the robotics field, and it has recently been used for biometric and multimedia information retrieval systems. This technology builds on successive research in audio feature extraction analysis. The probability distribution function (PDF) is a statistical tool usually employed as one step within more complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone serves as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. The experimental results show visually that each individual's voice has comparable PDF values and shapes.
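The per-frame PDF feature itself can be approximated with a normalized histogram; the framing parameters below are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def frame_pdf_features(signal, frame_len=512, hop=256, n_bins=32):
    """Return one normalized amplitude histogram (empirical PDF) per frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    pdfs = []
    for frame in frames:
        hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0), density=True)
        pdfs.append(hist)
    return np.asarray(pdfs)

# toy "voice" signal: a modulated tone in [-1, 1]
t = np.linspace(0, 1, 8000, endpoint=False)
x = 0.8 * np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
print(frame_pdf_features(x).shape)   # (n_frames, n_bins)
```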
Morphological Feature Extraction for Automatic Registration of Multispectral Images
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
NASA Astrophysics Data System (ADS)
Johnsen, Elin; Leknes, Siri; Wilson, Steven Ray; Lundanes, Elsa
2015-03-01
Neurons communicate via chemical signals called neurotransmitters (NTs). The numerous identified NTs can have very different physiochemical properties (solubility, charge, size etc.), so quantification of the various NT classes traditionally requires several analytical platforms/methodologies. We here report that a diverse range of NTs, e.g. peptides oxytocin and vasopressin, monoamines adrenaline and serotonin, and amino acid GABA, can be simultaneously identified/measured in small samples, using an analytical platform based on liquid chromatography and high-resolution mass spectrometry (LC-MS). The automated platform is cost-efficient as manual sample preparation steps and one-time-use equipment are kept to a minimum. Zwitter-ionic HILIC stationary phases were used for both on-line solid phase extraction (SPE) and liquid chromatography (capillary format, cLC). This approach enabled compounds from all NT classes to elute in small volumes producing sharp and symmetric signals, and allowing precise quantifications of small samples, demonstrated with whole blood (100 microliters per sample). An additional robustness-enhancing feature is automatic filtration/filter back-flushing (AFFL), allowing hundreds of samples to be analyzed without any parts needing replacement. The platform can be installed by simple modification of a conventional LC-MS system.
Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo
2017-01-01
This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified and then, a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases; first a series of extremely powerful reduction techniques, which do not lose the optimal solution, are employed; and second, a metaheuristic search to identify the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient for conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is then built on cloud generators. With the forward cloud generator, facial expression images can be re-generated in arbitrary numbers to visually represent the three extracted features, and each feature plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper concludes with remarks.
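For readers unfamiliar with cloud generators, the sketch below shows the standard forward normal cloud generator, which re-generates cloud drops from the three digital characteristics (Ex, En, He); the numerical values used here are made up for the example and are not taken from the paper.

```python
import numpy as np

def forward_normal_cloud(ex, en, he, n_drops=1000, rng=None):
    """Generate cloud drops (x, membership) from expectation Ex, entropy En, hyper-entropy He."""
    rng = np.random.default_rng(rng)
    en_i = rng.normal(en, he, n_drops)            # per-drop entropy
    x = rng.normal(ex, np.abs(en_i))              # drop positions
    mu = np.exp(-(x - ex) ** 2 / (2 * en_i ** 2)) # membership (certainty) degrees
    return x, mu

# hypothetical digital characteristics of one extracted expression feature
drops, memberships = forward_normal_cloud(ex=0.35, en=0.08, he=0.02, n_drops=500, rng=0)
print(drops[:5], memberships[:5])
```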
PyEEG: an open source Python module for EEG/MEG feature extraction.
Bao, Forrest Sheng; Liu, Xin; Zhang, Christina
2011-01-01
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
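A minimal usage sketch is given below. The function names (Petrosian/Higuchi fractal dimension, Hurst exponent, spectral entropy) follow the module as described in the paper, but the exact names and signatures may differ between PyEEG versions, so treat this as an assumption to be checked against the installed module.

```python
import numpy as np
import pyeeg  # the open source module introduced in the paper

fs = 256                          # sampling rate in Hz
t = np.arange(0, 4, 1.0 / fs)
# toy single-channel "EEG": 10 Hz alpha-like rhythm plus noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

bands = [0.5, 4, 8, 13, 30]       # delta/theta/alpha/beta boundaries
features = {
    "petrosian_fd": pyeeg.pfd(x),
    "higuchi_fd": pyeeg.hfd(x, 8),
    "hurst": pyeeg.hurst(x),
    "spectral_entropy": pyeeg.spectral_entropy(x, bands, fs),
}
print(features)
```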
PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction
Bao, Forrest Sheng; Liu, Xin; Zhang, Christina
2011-01-01
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582
Deep feature extraction and combination for synthetic aperture radar target classification
NASA Astrophysics Data System (ADS)
Amrani, Moussa; Jiang, Feng
2017-10-01
Feature extraction has always been a difficult problem for the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR); selecting discriminative features to train a classifier is an essential prerequisite. Inspired by the great success of convolutional neural networks (CNNs), we address the problem of SAR target classification by proposing a feature extraction method that exploits deep features extracted by CNNs from SAR images to obtain more powerful discriminative features and a more robust representation. First, the pretrained VGG-S net is fine-tuned on the moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after a simple preprocessing is performed, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused using traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, a K-nearest neighbors algorithm based on LogDet divergence-based metric learning with triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.
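A condensed sketch of the "fixed feature extractor plus classifier" part of the pipeline is shown below; it substitutes an off-the-shelf torchvision VGG-16 for the fine-tuned VGG-S, plain Euclidean k-NN for the LogDet metric-learning variant, and random arrays for MSTAR chips, so it only illustrates the data flow rather than the authors' method.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier

# Off-the-shelf VGG-16 as a stand-in for the fine-tuned VGG-S feature extractor.
# (Newer torchvision versions use the `weights=` argument instead of `pretrained=`.)
vgg = models.vgg16(pretrained=True).eval()
feature_head = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])  # drop final FC layer

def deep_features(batch):
    """Extract 4096-d deep features from a batch of 3x224x224 images."""
    with torch.no_grad():
        x = vgg.features(batch)
        x = vgg.avgpool(x).flatten(1)
        return feature_head(x).numpy()

# random stand-ins for preprocessed SAR chips and their target labels
train_imgs, train_labels = torch.rand(20, 3, 224, 224), np.repeat([0, 1], 10)
test_imgs = torch.rand(4, 3, 224, 224)

# concatenation fusion would combine features from several layers/views; a single view is used here
knn = KNeighborsClassifier(n_neighbors=3).fit(deep_features(train_imgs), train_labels)
print(knn.predict(deep_features(test_imgs)))
```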
NASA Astrophysics Data System (ADS)
Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael
2017-05-01
Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.
Grating interferometry-based phase microtomography of atherosclerotic human arteries
NASA Astrophysics Data System (ADS)
Buscema, Marzia; Holme, Margaret N.; Deyhle, Hans; Schulz, Georg; Schmitz, Rüdiger; Thalmann, Peter; Hieber, Simone E.; Chicherova, Natalia; Cattin, Philippe C.; Beckmann, Felix; Herzen, Julia; Weitkamp, Timm; Saxer, Till; Müller, Bert
2014-09-01
Cardiovascular diseases are the number one cause of death and morbidity in the world. Understanding disease development in terms of lumen morphology and tissue composition of constricted arteries is essential to improve treatment and patient outcome. X-ray tomography provides non-destructive three-dimensional data with micrometer-resolution. However, a common problem is simultaneous visualization of soft and hard tissue-containing specimens, such as atherosclerotic human coronary arteries. Unlike absorption based techniques, where X-ray absorption strongly depends on atomic number and tissue density, phase contrast methods such as grating interferometry have significant advantages as the phase shift is only a linear function of the atomic number. We demonstrate that grating interferometry-based phase tomography is a powerful method to three-dimensionally visualize a variety of anatomical features in atherosclerotic human coronary arteries, including plaque, muscle, fat, and connective tissue. Three formalin-fixed, human coronary arteries were measured using advanced laboratory μCT. While this technique gives information about plaque morphology, it is impossible to extract the lumen morphology. Therefore, selected regions were measured using grating based phase tomography, sinograms were treated with a wavelet-Fourier filter to remove ring artifacts, and reconstructed data were processed to allow extraction of vessel lumen morphology. Phase tomography data in combination with conventional laboratory μCT data of the same specimen shows potential, through use of a joint histogram, to identify more tissue types than either technique alone. Such phase tomography data was also rigidly registered to subsequently decalcified arteries that were histologically sectioned, although the quality of registration was insufficient for joint histogram analysis.
Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA
NASA Astrophysics Data System (ADS)
He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong
2018-04-01
This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
New feature extraction method for classification of agricultural products from x-ray images
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.
1999-01-01
Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-03-20
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krafft, S; The University of Texas Graduate School of Biomedical Sciences, Houston, TX; Briere, T
2015-06-15
Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade ≥ 3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP modeling is warranted. This work was supported by the Rosalie B. Hite Fellowship in Cancer research awarded to SPK.
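A compact sketch of the LASSO step with scikit-learn is given below; the synthetic feature matrix, toy labels, and regularization strength are assumptions, not the study's data or settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(198, 300))            # stand-in for clinical + dosimetric + image features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=198) > 0).astype(int)  # toy RP labels

X = StandardScaler().fit_transform(X)
# L1-penalized (LASSO) logistic regression performs selection by driving coefficients to zero
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features kept, training AUC = "
      f"{roc_auc_score(y, lasso.decision_function(X)):.3f}")
```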
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, W; Wang, J; Lu, W
Purpose: To identify the effective quantitative image features (radiomics features) for prediction of response, survival, recurrence and metastasis of hepatocellular carcinoma (HCC) in radiotherapy. Methods: Multiphase contrast enhanced liver CT images were acquired in 16 patients with HCC on pre and post radiation therapy (RT). In this study, arterial phase CT images were selected to analyze the effectiveness of image features for the prediction of treatment outcome of HCC to RT. Response evaluated by RECIST criteria, survival, local recurrence (LR), distant metastasis (DM) and liver metastasis (LM) were examined. A radiation oncologist manually delineated the tumor and normal liver on pre and post CT scans, respectively. Quantitative image features were extracted to characterize the intensity distribution (n=8), spatial patterns (texture, n=36), and shape (n=16) of the tumor and liver, respectively. Moreover, differences between pre and post image features were calculated (n=120). A total of 360 features were extracted and then analyzed by unpaired Student's t-test to rank the effectiveness of features for the prediction of response. Results: The five most effective features were selected for prediction of each outcome. Significant predictors for tumor response and survival are changes in tumor shape (Second Major Axes Length, p=0.002; Eccentricity, p=0.0002), for LR, liver texture (Standard Deviation (SD) of High Grey Level Run Emphasis and SD of Entropy, both p=0.005) on pre and post CT images, for DM, tumor texture (SD of Entropy, p=0.01) on pre CT image and for LM, liver (Mean of Cluster Shade, p=0.004) and tumor texture (SD of Entropy, p=0.006) on pre CT image. Intensity distribution features were not significant (p>0.09). Conclusion: Quantitative CT image features were found to be potential predictors of the five endpoints of HCC in RT. This work was supported in part by the National Cancer Institute Grant R01CA172638.
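The unpaired t-test ranking described above can be reproduced in a few lines; the random radiomics matrix and outcome labels below are placeholders standing in for the study's 360 features and 16 patients.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
features = rng.normal(size=(16, 360))        # 16 patients x 360 radiomic features (toy data)
responder = np.tile([0, 1], 8)               # toy binary response labels

# unpaired two-sample t-test per feature, then rank by p-value
_, pvals = stats.ttest_ind(features[responder == 1], features[responder == 0], axis=0)
top5 = np.argsort(pvals)[:5]
print("most discriminative feature indices:", top5, "p-values:", pvals[top5])
```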
NASA Astrophysics Data System (ADS)
Ahmed, Shamim; Miorelli, Roberto; Calmon, Pierre; Anselmi, Nicola; Salucci, Marco
2018-04-01
This paper describes a Learning-By-Examples (LBE) technique for performing quasi real-time flaw localization and characterization within a conductive tube based on Eddy Current Testing (ECT) signals. Within the LBE framework, the combination of full-factorial (i.e., GRID) sampling and Partial Least Squares (PLS) feature extraction (i.e., GRID-PLS) is applied to generate a suitable training set in the offline phase. Support Vector Regression (SVR) is utilized for model development and inversion during the offline and online phases, respectively. The performance and robustness of the proposed GRID-PLS/SVR strategy on a noisy test set are evaluated and compared with the standard GRID/SVR approach.
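The offline GRID-PLS/SVR chain can be mimicked with scikit-learn as below; the flaw parameterization, the simulated ECT responses, and the component count are all invented for the illustration.

```python
import numpy as np
from itertools import product
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

# full-factorial (GRID) sampling of a toy flaw space: (depth in mm, axial position in mm)
depths, positions = np.linspace(0.2, 2.0, 10), np.linspace(0.0, 50.0, 10)
grid = np.array(list(product(depths, positions)))

# simulated 64-sample ECT signals for each flaw (placeholder forward model plus noise)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
signals = np.array([d * np.exp(-((t - p / 50.0) ** 2) / 0.01) for d, p in grid])
signals += 0.01 * rng.normal(size=signals.shape)

# PLS feature extraction: project 64-d signals onto a few latent components
pls = PLSRegression(n_components=5).fit(signals, grid)
features = pls.transform(signals)

# one SVR per flaw parameter, trained on the PLS features (offline phase)
svr_depth = SVR(C=10.0).fit(features, grid[:, 0])
svr_pos = SVR(C=10.0).fit(features, grid[:, 1])

# online phase: invert a new noisy signal
probe = signals[37] + 0.01 * rng.normal(size=64)
f = pls.transform(probe.reshape(1, -1))
print("estimated depth/position:", svr_depth.predict(f), svr_pos.predict(f))
```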
Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations
2008-07-01
...classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for... are generalizations of well-known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA)... augment (without degrading performance) a large class of generic fusion processes. Keywords: Ontologies, Classifications, Feature extraction, Feature analysis
NASA Astrophysics Data System (ADS)
Shi, Wenzhong; Deng, Susu; Xu, Wenbing
2018-02-01
For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
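A simplified numpy version of the per-cell Gi* statistic with a binary square moving window is sketched below; the synthetic "curvature" raster and the 5 x 5 neighbourhood are assumptions, and the paper's implementation, neighbourhood definition and significance testing may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_gi_star(raster, size=5):
    """Getis-Ord Gi* z-scores for each cell using a binary square window of size x size."""
    n = raster.size
    xbar, s = raster.mean(), raster.std(ddof=1)
    k = size * size                                   # number of binary weights per cell
    window_sum = uniform_filter(raster, size=size, mode="nearest") * k
    denom = s * np.sqrt((n * k - k ** 2) / (n - 1))
    return (window_sum - xbar * k) / denom

# synthetic curvature raster: smooth background plus one concave "scarp-like" anomaly
rng = np.random.default_rng(0)
curv = 0.05 * rng.normal(size=(200, 200))
curv[80:100, 60:120] -= 0.5
z = local_gi_star(curv)
print("cells with |z| > 2.58 (99% level):", int((np.abs(z) > 2.58).sum()))
```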
Orbital disproportionation of electronic density is a universal feature of alkali-doped fullerides
Iwahara, Naoya; Chibotaru, Liviu F.
2016-01-01
Alkali-doped fullerides show a wide range of electronic phases in function of alkali atoms and the degree of doping. Although the presence of strong electron correlations is well established, recent investigations also give evidence for dynamical Jahn–Teller instability in the insulating and the metallic trivalent fullerides. In this work, to reveal the interplay of these interactions in fullerides with even electrons, we address the electronic phase of tetravalent fulleride with accurate many-body calculations within a realistic electronic model including all basic interactions extracted from first principles. We find that the Jahn–Teller instability is always realized in these materials too. In sharp contrast to the correlated metals, tetravalent system displays uncorrelated band-insulating state despite similar interactions present in both fullerides. Our results show that the Jahn–Teller instability and the accompanying orbital disproportionation of electronic density in the degenerate lowest unoccupied molecular orbital band is a universal feature of fullerides. PMID:27713426
Phase Transitions in Geomorphology
NASA Astrophysics Data System (ADS)
Ortiz, C. P.; Jerolmack, D. J.
2015-12-01
Landscapes are patterns in a dynamic steady-state, due to competing processes that smooth or sharpen features over large distances and times. Geomorphic transport laws have been developed to model the mass-flux due to different processes, but are unreasonably effective at recovering the scaling relations of landscape features. Using a continuum approximation to compare experimental landscapes and the observed landscapes of the earth, one finds they share similar morphodynamics despite a breakdown of classical dynamical similarity between the two. We propose the origin of this effectiveness is a different kind of dynamic similarity in the statistics of initiation and cessation of motion of groups of grains, which is common to disordered systems of grains under external driving. We will show how the existing data of sediment transport points to common signatures with dynamical phase transitions between "mobile" and "immobile" phases in other disordered systems, particularly granular materials, colloids, and foams. Viewing landscape evolution from the lens of non-equilibrium statistical physics of disordered systems leads to predictions that the transition of bulk measurements such as particle flux is continuous from one phase to another, that the collective nature of the particle dynamics leads to very slow aging of bulk properties, and that the dynamics are history-dependent. Recent results from sediment transport experiments support these predictions, suggesting that existing geomorphic transport laws may need to be replaced by a new generation of stochastic models with ingredients based on the physics of disordered phase transitions. We discuss possible strategies for extracting the necessary information to develop these models from measurements of geomorphic transport noise by connecting particle-scale collective dynamics and space-time fluctuations over landscape features.
NASA Astrophysics Data System (ADS)
Sebatubun, M. M.; Haryawan, C.; Windarta, B.
2018-03-01
Lung cancer causes a higher mortality rate worldwide than any other cancer. This mortality can be reduced if symptoms and cancer cells are detected early. One of the techniques used to detect lung cancer is computed tomography (CT) scanning. In this study, CT scan images were used to identify one of the lesion characteristics, ground glass opacity (GGO), which is used to determine the level of malignancy of a lesion. There were three phases in identifying GGO: image cropping, feature extraction using grey level co-occurrence matrices (GLCM), and classification using a Naïve Bayes classifier. In order to improve the classification results, the most significant features were sought through feature selection using gain ratio evaluation. Based on the results obtained, the most significant features could be identified with the feature selection method used in this research. The accuracy increased from 83.33% to 91.67%, the sensitivity from 82.35% to 94.11%, and the specificity from 84.21% to 89.47%.
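The GLCM and Naïve Bayes phases can be sketched with scikit-image and scikit-learn as below; the random patches, the offsets, and the use of mutual information in place of Weka-style gain ratio are assumptions made only for the illustration. Older scikit-image releases spell the GLCM functions greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def glcm_features(patch):
    """Contrast, homogeneity, energy and correlation from a 1-pixel-offset GLCM."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)  # stand-ins for cropped lesion ROIs
labels = np.tile([0, 1], 20)                                       # toy GGO / non-GGO labels

X = np.vstack([glcm_features(p) for p in patches])
X_sel = SelectKBest(mutual_info_classif, k=4).fit_transform(X, labels)  # stand-in for gain ratio
clf = GaussianNB().fit(X_sel, labels)
print("training accuracy:", clf.score(X_sel, labels))
```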
Kmeans-ICA based automatic method for ocular artifacts removal in a motorimagery classification.
Bou Assi, Elie; Rihana, Sandy; Sawan, Mohamad
2014-01-01
Electroencephalogram (EEG) recordings are used as inputs of a motor imagery based BCI system. Eye blinks contaminate the spectral content of the EEG signals. Independent Component Analysis (ICA) has already been proven effective for removing these artifacts, whose frequency band overlaps with the EEG of interest. However, previously developed ICA methods use a reference lead such as the electrooculogram (EOG) to identify the ocular artifact components. In this study, artifactual components were identified using adaptive thresholding by means of K-means clustering. The denoised EEG signals were fed into a feature extraction algorithm extracting the band power, the coherence, and the phase locking value, and then passed to a linear discriminant analysis classifier for motor imagery classification.
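A reference-free sketch of the idea, clustering simple per-component statistics into an "artifact" and a "neural" group with K-means, is given below; the synthetic signals and the choice of kurtosis/peak-amplitude statistics are assumptions rather than the authors' exact criteria.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_ch, n_samp, fs = 8, 2000, 250
t = np.arange(n_samp) / fs

# synthetic EEG: sinusoidal rhythms mixed across channels plus a spiky blink source
sources = np.vstack([np.sin(2 * np.pi * f * t) for f in (6, 10, 21)])
blink = np.zeros(n_samp); blink[300:340] = 5.0; blink[1200:1240] = 6.0
mixing = rng.normal(size=(n_ch, 4))
eeg = mixing @ np.vstack([sources, blink]) + 0.1 * rng.normal(size=(n_ch, n_samp))

ica = FastICA(n_components=4, random_state=0)
components = ica.fit_transform(eeg.T)                     # shape: (samples, components)

# cluster per-component statistics into two groups; the spikier group is taken as the artifact
feats = np.column_stack([kurtosis(components, axis=0), np.abs(components).max(axis=0)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
artifact_cluster = np.argmax([feats[labels == c, 0].mean() for c in (0, 1)])

clean_components = components.copy()
clean_components[:, labels == artifact_cluster] = 0.0      # zero out ocular components
clean_eeg = ica.inverse_transform(clean_components).T
print("components flagged as ocular artifact:", np.flatnonzero(labels == artifact_cluster))
```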
NASA Astrophysics Data System (ADS)
Sharifi, Fereydoun; Arab-Amiri, Ali Reza; Kamkar-Rouhani, Abolghasem; Yousefi, Mahyar; Davoodabadi-Farahani, Meysam
2017-09-01
The purpose of this study is water prospectivity modeling (WPM) for recognizing karstic water-bearing zones using analyses of geo-exploration data in the Kal-Qorno valley, located in the Tepal area, north of Iran. For this, a sequential exploration method was applied to geo-evidential data to delineate target areas for further exploration. Two major exploration phases, at regional and local scales, were performed. In the first phase, indicator geological features, namely structures and lithological units, were used to model groundwater prospectivity at a regional scale. In this phase, for karstic WPM, fuzzy lithological and structural evidence layers were generated and combined using fuzzy operators. After generating target areas using WPM, in the second phase geophysical surveys including gravimetry and geoelectrical resistivity were carried out on the recognized high-potential zones as a local-scale exploration. Finally, the results of the geophysical analyses in the second phase were used to select suitable drilling locations to access and extract karstic groundwater in the study area.
Li, Ke; Wang, Shudong
2005-05-01
A simple and reliable high performance liquid chromatographic (HPLC) method has been developed and validated for the study of fingerprint chromatograms of extracts from the leaves of Tripterygium wilfordii Hook. F. (TWHF) and for controlling the quality of the herb. HPLC separation of the extracts was performed on a Lichrospher RP-18 column and detected by ultraviolet absorbance at 210 nm. The column temperature was maintained at 35 degrees C. A mobile phase composed of acetonitrile:H2O in the ratio of 39:61 (v/v) was found to be most suitable for this separation at a flow rate of 0.8 mL/min with isocratic elution. Under the chromatographic conditions described, the peak profile of the 10 components collected within 35 min made up the fingerprint of the extracts from leaves of TWHF with universal features. The fingerprint chromatograms had a good stability, precision, and reproducibility. The similarity of the extracts from leaves of TWHF collected in summer and winter was studied with triptolide as a reference peak. The method is suitable for differentiation of extracts from the leaves of TWHF, and can be used as a quality control method for this herb.
Hashim, Shima N N S; Schwarz, Lachlan J; Danylec, Basil; Potdar, Mahesh K; Boysen, Reinhard I; Hearn, Milton T W
2016-12-01
This investigation describes a general procedure for the selectivity mapping of molecularly imprinted polymers, using (E)-resveratrol-imprinted polymers as the exemplar, and polyphenolic compounds present in Pinot noir grape skin extracts as the test compounds. The procedure is based on the analysis of samples generated before and after solid-phase extraction of (E)-resveratrol and other polyphenols contained within the Pinot noir grape skins using (E)-resveratrol-imprinted polymers. Capillary reversed-phase high-performance liquid chromatography (RP-HPLC) and electrospray ionisation tandem mass spectrometry (ESI MS/MS) was then employed for compound analysis and identification. Under optimised solid-phase extraction conditions, the (E)-resveratrol-imprinted polymer showed high binding affinity and selectivity towards (E)-resveratrol, whilst no resveratrol was bound by the corresponding non-imprinted polymer. In addition, quercetin-3-O-glucuronide and a dimer of catechin-methyl-5-furfuraldehyde, which share some structural features with (E)-resveratrol, were also bound by the (E)-resveratrol-imprinted polymer. Polyphenols that were non-specifically retained by both the imprinted and non-imprinted polymer were (+)-catechin, a B-type procyanidin and (-)-epicatechin. The compounds that did not bind to the (E)-resveratrol molecularly imprinted polymer had at least one of the following molecular characteristics in comparison to the (E)-resveratrol template: (i) different spatial arrangements of their phenolic hydroxyl groups, (ii) less than three or more than four phenolic hydroxyl groups, or (iii) contained a bulky substituent moiety. The results show that capillary RP-HPLC in conjunction with ESI MS/MS represent very useful techniques for mapping the selectivity of the binding sites of imprinted polymer. Moreover, this procedure permits performance monitoring of the characteristics of molecularly imprinted polymers intended for solid-phase extraction of bioactive and nutraceutical molecules from diverse agricultural waste sources. Copyright © 2016 Elsevier B.V. All rights reserved.
Wang, ShuLing; Xu, Hui
2016-12-01
An inorganic-organic hybrid nanocomposite (zinc oxide/polypyrrole) that represents a novel kind of coating for in-tube solid-phase microextraction is reported. The composite coating was prepared by a facile electrochemical polymerization strategy on the inner surface of a stainless-steel tube. Based on the coated tube, a novel online in-tube solid-phase microextraction with liquid chromatography and mass spectrometry method was developed and applied for the extraction of three monohydroxy polycyclic aromatic hydrocarbons in human urine. The coating displayed good extraction ability toward monohydroxy polycyclic aromatic hydrocarbons. In addition, long lifespan, excellent stability, and good compression resistance were also obtained for the coating. The experimental conditions affecting the extraction were optimized systematically. Under the optimal conditions, the limits of detection and quantification were in the range of 0.039-0.050 and 0.130-0.167 ng/mL, respectively. Good linearity (0.2-100 ng/mL) was obtained with correlation coefficients larger than 0.9967. The repeatability, expressed as relative standard deviation, ranged between 2.5% and 9.4%. The method offered the advantage of process simplicity, rapidity, automation, and sensitivity in the analysis of human urinary monohydroxy polycyclic aromatic hydrocarbons in two different cities of Hubei province. An acceptable recovery of monohydroxy polycyclic aromatic hydrocarbons (64-122%) represented the additional attractive features of the method in real urine analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Solvent extraction system for plutonium colloids and other oxide nano-particles
Soderholm, Lynda; Wilson, Richard E; Chiarizia, Renato; Skanthakumar, Suntharalingam
2014-06-03
The invention provides a method for extracting plutonium from spent nuclear fuel, the method comprising supplying plutonium in a first aqueous phase; contacting the plutonium aqueous phase with a mixture of a dielectric and a moiety having a first acidity so as to allow the plutonium to substantially extract into the mixture; and contacting the extracted plutonium with a second aqueous phase, wherein the second aqueous phase has a second acidity higher than the first acidity, so as to allow the extracted plutonium to extract into the second aqueous phase. The invented method facilitates isolation of plutonium polymer without the formation of crud or unwanted emulsions.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. The proposed approach is evaluated on the Pavia University hyperspectral data. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.
Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L
2016-07-01
Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features.
Single-trial laser-evoked potentials feature extraction for prediction of pain perception.
Huang, Gan; Xiao, Ping; Hu, Li; Hung, Yeung Sam; Zhang, Zhiguo
2013-01-01
Pain is a highly subjective experience, and the availability of an objective assessment of pain perception would be of great importance for both basic and clinical applications. The objective of the present study is to develop a novel approach to extract pain-related features from single-trial laser-evoked potentials (LEPs) for classification of pain perception. The single-trial LEP feature extraction approach combines a spatial filtering using common spatial pattern (CSP) and a multiple linear regression (MLR). The CSP method is effective in separating laser-evoked EEG response from ongoing EEG activity, while MLR is capable of automatically estimating the amplitudes and latencies of N2 and P2 from single-trial LEP waveforms. The extracted single-trial LEP features are used in a Naïve Bayes classifier to classify different levels of pain perceived by the subjects. The experimental results show that the proposed single-trial LEP feature extraction approach can effectively extract pain-related LEP features for achieving high classification accuracy.
Finding Major Patterns of Aging Process by Data Synchronization
NASA Astrophysics Data System (ADS)
Miyano, Takaya; Tsutsui, Takako
We developed a method for extracting feature patterns from multivariate data using a network of coupled phase oscillators governed by an analogue of the Kuramoto model for collective synchronization. Our method may be called data synchronization. We applied data synchronization to the care-needs-certification data of the Japanese public long-term care insurance program, provided by Otsu City, a historic city near Kyoto, to find the major patterns of the aging process for elderly people needing nursing care.
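For reference, a minimal Kuramoto-type integration (the basis of data synchronization) is sketched below; the mapping of multivariate care-needs records onto natural frequencies and couplings is the part specific to the paper and is not reproduced here.

```python
import numpy as np

def kuramoto(omega, K=1.5, dt=0.01, steps=2000, rng=None):
    """Euler integration of d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    rng = np.random.default_rng(rng)
    n = omega.size
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)  # row i sums sin(theta_j - theta_i)
        theta = theta + dt * (omega + (K / n) * coupling)
    return theta

omega = np.random.default_rng(0).normal(0.0, 0.5, 50)      # natural frequencies of 50 oscillators
theta = kuramoto(omega)
r = np.abs(np.exp(1j * theta).mean())                      # Kuramoto order parameter
print(f"order parameter r = {r:.2f}")                      # r -> 1 indicates synchronization
```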
An Overview of the Production Quality Compiler-Compiler Project
1979-02-01
...process. A parse tree is assumed, and there is a set of primitives for extracting information from it and for "walking" it: using its structure to... not adequate for, and even preclude, techniques that involve multiple phases, or non-trivial auxiliary data structures. In recent years there have... VALUE field of node 23 would indicate that the type of the value field was integer. As with "union mode" or "variant record" features in many...
Alexnet Feature Extraction and Multi-Kernel Learning for Objectoriented Classification
NASA Astrophysics Data System (ADS)
Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.
2018-04-01
Because deep convolutional neural networks have a stronger ability for feature learning and feature expression, exploratory research was conducted on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution over the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and the pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features, and GLCM texture features were then combined through multi-kernel learning with an SVM classifier, and the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features, significantly improve the overall accuracy of classification, and provide a reference for earthquake disaster investigation and remote sensing disaster evaluation.
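The multi-kernel combination can be emulated with a precomputed-kernel SVM; in the sketch below the per-feature-group kernel weights are fixed by hand instead of being learned, and the three feature blocks are random stand-ins for the spectral, AlexNet, and GLCM features of segmented objects.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_objects = 60
spectral = rng.normal(size=(n_objects, 4))      # mean band values per segmentation object
alexnet = rng.normal(size=(n_objects, 4096))    # deep features per object (stand-in)
glcm = rng.normal(size=(n_objects, 8))          # texture features per object
y = np.tile([0, 1, 2], 20)                      # toy land-cover classes

# one RBF kernel per feature group, combined with fixed weights (true MKL would learn the weights)
weights = {"spectral": 0.3, "alexnet": 0.5, "glcm": 0.2}
K = (weights["spectral"] * rbf_kernel(spectral, gamma=0.5)
     + weights["alexnet"] * rbf_kernel(alexnet, gamma=1e-4)
     + weights["glcm"] * rbf_kernel(glcm, gamma=0.2))

svm = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", svm.score(K, y))
```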
Neural network-based multiple robot simultaneous localization and mapping.
Saeedi, Sajad; Paull, Liam; Trentini, Michael; Li, Howard
2011-12-01
In this paper, a decentralized platform for simultaneous localization and mapping (SLAM) with multiple robots is developed. Each robot performs single robot view-based SLAM using an extended Kalman filter to fuse data from two encoders and a laser ranger. To extend this approach to multiple robot SLAM, a novel occupancy grid map fusion algorithm is proposed. Map fusion is achieved through a multistep process that includes image preprocessing, map learning (clustering) using neural networks, relative orientation extraction using norm histogram cross correlation and a Radon transform, relative translation extraction using matching norm vectors, and then verification of the results. The proposed map learning method is a process based on the self-organizing map. In the learning phase, the obstacles of the map are learned by clustering the occupied cells of the map into clusters. The learning is an unsupervised process which can be done on the fly without any need to have output training patterns. The clusters represent the spatial form of the map and make further analyses of the map easier and faster. Also, clusters can be interpreted as features extracted from the occupancy grid map so the map fusion problem becomes a task of matching features. Results of the experiments from tests performed on a real environment with multiple robots prove the effectiveness of the proposed solution.
Retina vascular network recognition
NASA Astrophysics Data System (ADS)
Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo
1993-09-01
The analysis of morphological and structural modifications of the retina vascular network is an interesting investigation method in the study of diabetes and hypertension. Normally this analysis is carried out by qualitative evaluations, according to standardized criteria, though medical research attaches great importance to quantitative analysis of vessel color, shape and dimensions. The paper describes a system which automatically segments and recognizes the ocular fundus circulation and microcirculation network, and extracts a set of features related to morphometric aspects of vessels. For this class of images the classical segmentation methods seem weak. We propose a computer vision system in which the segmentation and recognition phases are strictly connected. The system is hierarchically organized in four modules. First, the Image Enhancement Module (IEM) applies a set of custom image enhancements to remove blur and to prepare data for subsequent segmentation and recognition processes. Second, the Papilla Border Analysis Module (PBAM) automatically recognizes the number, position and local diameter of blood vessels departing from the optical papilla. Then the Vessel Tracking Module (VTM) analyses vessels by comparing the results of body and edge tracking and detects branches and crossings. Finally, the Feature Extraction Module evaluates PBAM and VTM output data and extracts a set of numerical indexes. The algorithms used appear to be robust and have been successfully tested on various ocular fundus images.
Broadband Spectroscopy Using Two Suzaku Observations of the HMXB GX 301-2
NASA Astrophysics Data System (ADS)
Suchy, Slawomir; Fürst, Felix; Pottschmidt, Katja; Caballero, Isabel; Kreykenbohm, Ingo; Wilms, Jörn; Markowitz, Alex; Rothschild, Richard E.
2012-02-01
We present the analysis of two Suzaku observations of GX 301-2 at two orbital phases after the periastron passage. Variations in the column density of the line-of-sight absorber are observed, consistent with accretion from a clumpy wind. In addition to a cyclotron resonance scattering feature (CRSF), multiple fluorescence emission lines were detected in both observations. The variations in the pulse profiles and the CRSF throughout the pulse phase have a signature of a magnetic dipole field. Using a simple dipole model we calculated the expected magnetic field values for different pulse phases and were able to extract a set of geometrical angles, loosely constraining the dipole geometry in the neutron star. From the variation of the CRSF width and energy, we found a geometrical solution for the dipole, making the inclination consistent with previously published values.
Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Wismüller, Axel
2015-11-01
Phase-contrast X-ray computed tomography (PCI-CT) has attracted significant interest in recent years for its ability to provide significantly improved image contrast in low absorbing materials such as soft biological tissue. In the research context of cartilage imaging, previous studies have demonstrated the ability of PCI-CT to visualize structural details of human patellar cartilage matrix and capture changes to chondrocyte organization induced by osteoarthritis. This study evaluates the use of geometrical and topological features for volumetric characterization of such chondrocyte patterns in the presence (or absence) of osteoarthritic damage. Geometrical features derived from the scaling index method (SIM) and topological features derived from Minkowski Functionals were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. These features were subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver operating characteristic curve (AUC). Our results show that the classification performance of SIM-derived geometrical features (AUC: 0.90 ± 0.09) is significantly better than Minkowski Functionals volume (AUC: 0.54 ± 0.02), surface (AUC: 0.72 ± 0.06), mean breadth (AUC: 0.74 ± 0.06) and Euler characteristic (AUC: 0.78 ± 0.04) (p < 10(-4)). These results suggest that such geometrical features can provide a detailed characterization of the chondrocyte organization in the cartilage matrix in an automated manner, while also enabling classification of cartilage as healthy or osteoarthritic with high accuracy. Such features could potentially serve as diagnostic imaging markers for evaluating osteoarthritis progression and its response to different therapeutic intervention strategies.
Identifying quantum phase transitions with adversarial neural networks
NASA Astrophysics Data System (ADS)
Huembeli, Patrick; Dauphin, Alexandre; Wittek, Peter
2018-04-01
The identification of phases of matter is a challenging task, especially in quantum mechanics, where the complexity of the ground state appears to grow exponentially with the size of the system. Traditionally, physicists have to identify the relevant order parameters for the classification of the different phases. We here follow a radically different approach: we address this problem with a state-of-the-art deep learning technique, adversarial domain adaptation. We derive the phase diagram of the whole parameter space starting from a fixed and known subspace using unsupervised learning. This method has the advantage that the input of the algorithm can be directly the ground state without any ad hoc feature engineering. Furthermore, the dimension of the parameter space is unrestricted. More specifically, the input data set contains both labeled and unlabeled data instances. The first kind is a system that admits an accurate analytical or numerical solution, and one can recover its phase diagram. The second type is the physical system with an unknown phase diagram. Adversarial domain adaptation uses both types of data to create invariant feature extracting layers in a deep learning architecture. Once these layers are trained, we can attach an unsupervised learner to the network to find phase transitions. We show the success of this technique by applying it on several paradigmatic models: the Ising model with different temperatures, the Bose-Hubbard model, and the Su-Schrieffer-Heeger model with disorder. The method finds unknown transitions successfully and predicts transition points in close agreement with standard methods. This study opens the door to the classification of physical systems where the phase boundaries are complex such as the many-body localization problem or the Bose glass phase.
Classification and pose estimation of objects using nonlinear features
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.
Oliveira, Hugo M; Segundo, Marcela A; Lima, José L F C; Miró, Manuel; Cerdà, Victor
2010-05-01
In the present work, it is proposed, for the first time, an on-line automatic renewable molecularly imprinted solid-phase extraction (MISPE) protocol for sample preparation prior to liquid chromatographic analysis. The automatic microscale procedure was based on the bead injection (BI) concept under the lab-on-valve (LOV) format, using a multisyringe burette as propulsion unit for handling solutions and suspensions. A high precision on handling the suspensions containing irregularly shaped molecularly imprinted polymer (MIP) particles was attained, enabling the use of commercial MIP as renewable sorbent. The features of the proposed BI-LOV manifold also allowed a strict control of the different steps within the extraction protocol, which are essential for promoting selective interactions in the cavities of the MIP. By using this on-line method, it was possible to extract and quantify riboflavin from different foodstuff samples in the range between 0.450 and 5.00 mg L(-1) after processing 1,000 microL of sample (infant milk, pig liver extract, and energy drink) without any prior treatment. For milk samples, LOD and LOQ values were 0.05 and 0.17 mg L(-1), respectively. The method was successfully applied to the analysis of two certified reference materials (NIST 1846 and BCR 487) with high precision (RSD < 5.5%). Considering the downscale and simplification of the sample preparation protocol and the simultaneous performance of extraction and chromatographic assays, a cost-effective and enhanced throughput (six determinations per hour) methodology for determination of riboflavin in foodstuff samples is deployed here.
Finger vein recognition based on the hyperinformation feature
NASA Astrophysics Data System (ADS)
Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Yang, Lu
2014-01-01
The finger vein is a promising biometric pattern for personal identification due to its advantages over other existing biometrics. In finger vein recognition, feature extraction is a critical step, and many feature extraction methods have been proposed to extract the gray, texture, or shape of the finger vein. We treat them as low-level features and present a high-level feature extraction framework. Under this framework, base attribute is first defined to represent the characteristics of a certain subcategory of a subject. Then, for an image, the correlation coefficient is used for constructing the high-level feature, which reflects the correlation between this image and all base attributes. Since the high-level feature can reveal characteristics of more subcategories and contain more discriminative information, we call it hyperinformation feature (HIF). Compared with low-level features, which only represent the characteristics of one subcategory, HIF is more powerful and robust. In order to demonstrate the potential of the proposed framework, we provide a case study to extract HIF. We conduct comprehensive experiments to show the generality of the proposed framework and the efficiency of HIF on our databases, respectively. Experimental results show that HIF significantly outperforms the low-level features.
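As a rough illustration of the high-level feature construction described above, the sketch below builds one base attribute per subcategory (here simply the mean low-level feature vector, an assumption for illustration) and forms the hyperinformation feature of an image as its vector of correlation coefficients with those attributes.

```python
# A minimal numpy sketch of the hyperinformation-feature idea; the stand-in data
# and the choice of subcategory means as base attributes are assumptions.
import numpy as np

def base_attributes(low_level_feats, subcategory_labels):
    """One base attribute per subcategory: the mean low-level feature vector."""
    labels = np.unique(subcategory_labels)
    return np.stack([low_level_feats[subcategory_labels == c].mean(axis=0) for c in labels])

def hif(low_level_feat, attributes):
    """Correlation coefficient between an image's low-level feature and every base attribute."""
    return np.array([np.corrcoef(low_level_feat, a)[0, 1] for a in attributes])

X = np.random.rand(100, 256)          # low-level features (e.g., gray/texture/shape)
sub = np.random.randint(0, 10, 100)   # subcategory assignments
A = base_attributes(X, sub)
h = hif(X[0], A)                      # 10-dimensional hyperinformation feature
```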
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for the feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particularly, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantical components, i.e, the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Ferdowsi, Ali; Yoozbashizadeh, Hossein
2017-12-01
Solvent extraction of rare earths from nitrate leach liquor of apatite using mixtures of tributyl phosphate (TBP) and di-(2-ethylhexyl) phosphoric acid (D2EHPA) was studied. The effects of nitrate and hydrogen ion concentration of the aqueous phase as well as the composition and concentration of extractants in the organic phase on the extraction behavior of lanthanum, cerium, neodymium, and yttrium were investigated. The distribution ratio of REEs increases by increasing the nitrate concentration in aqueous phase and concentration of extractants in organic phase, but the hydrogen ion concentration in aqueous phase has a decreasing effect. Yttrium as a heavy rare earth is more sensitive to these parameters than light rare earth elements. Although the composition of organic phase has a minor effect on the extraction of light rare earths, the percent of extraction of yttrium decreases dramatically by increasing the TBP content of organic phase. Mixtures of TBP and D2EHPA can show either synergism or antagonism extraction depending on the concentration and composition of extractants in organic phase. The best condition for separating rare earth elements in groups of heavy and light REEs can be achieved at high nitrate concentration, low H+ concentration, and high concentration of D2EHPA in organic phase. Separation of Ce and La by TBP and D2EHPA is practically impossible in the studied conditions; however, low nitrate concentration and high hydrogen ion concentration in aqueous phase and low concentration of extractants in organic phase favor the separation of Nd from other light rare earth elements.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.
2016-03-01
We are developing an automated method to identify the best quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study is conducted with three readers who visually rate the quality of the vessels on a ranking scale from 1 to 6. Six and 10 cCTA cases are used as the training and test sets in this preliminary study. For the 10 test cases, the agreement between automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings is 79.7%, and the agreements between AI-BQ and the other two readers are 74.8% and 83.7%, respectively. The results demonstrated that the performance of our automated method was comparable to that of experienced readers for identification of the best-quality coronary segments.
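The weighted voting step can be pictured with a small numerical sketch: each quality indicator casts a weighted vote for the phase whose segment it scores best. The weights, the "higher is better" convention, and the toy numbers are assumptions, not the trained WVE classifier.

```python
# A minimal sketch of weighted voting over per-phase quality indicators.
import numpy as np

def best_phase(quality, weights):
    """quality: (n_phases, n_features) indicator values for corresponding segments."""
    votes = np.zeros(quality.shape[0])
    for j, w in enumerate(weights):
        votes[np.argmax(quality[:, j])] += w   # feature j votes for its best phase
    return int(np.argmax(votes))

q = np.array([[0.8, 0.6, 0.7, 0.5],    # phase 1
              [0.9, 0.4, 0.8, 0.7],    # phase 2
              [0.5, 0.7, 0.6, 0.6]])   # phase 3
print(best_phase(q, weights=[0.4, 0.2, 0.2, 0.2]))   # index of the best-quality segment
```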
Quintana, José Benito; Miró, Manuel; Estela, José Manuel; Cerdà, Víctor
2006-04-15
In this paper, the third generation of flow injection analysis, also named the lab-on-valve (LOV) approach, is proposed for the first time as a front end to high-performance liquid chromatography (HPLC) for on-line solid-phase extraction (SPE) sample processing by exploiting the bead injection (BI) concept. The proposed microanalytical system based on discontinuous programmable flow features automated packing (and withdrawal after single use) of a small amount of sorbent (<5 mg) into the microconduits of the flow network and quantitative elution of sorbed species into a narrow band (150 microL of 95% MeOH). The hyphenation of multisyringe flow injection analysis (MSFIA) with BI-LOV prior to HPLC analysis is utilized for on-line postextraction treatment to ensure chemical compatibility between the eluate medium and the initial HPLC gradient conditions. This circumvents the band-broadening effect commonly observed in conventional on-line SPE-based sample processors due to the low eluting strength of the mobile phase. The potential of the novel MSFI-BI-LOV hyphenation for on-line handling of complex environmental and biological samples prior to reversed-phase chromatographic separations was assessed for the expeditious determination of five acidic pharmaceutical residues (viz., ketoprofen, naproxen, bezafibrate, diclofenac, and ibuprofen) and one metabolite (viz., salicylic acid) in surface water, urban wastewater, and urine. To this end, the copolymeric divinylbenzene-co-n-vinylpyrrolidone beads (Oasis HLB) were utilized as renewable sorptive entities in the micromachined unit. The automated analytical method features relative recovery percentages of >88%, limits of detection within the range 0.02-0.67 ng mL(-1), and coefficients of variation <11% for the column renewable mode and gives rise to a drastic reduction in operation costs ( approximately 25-fold) as compared to on-line column switching systems.
Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans
2017-04-01
Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated by the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.
Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.
1981-03-01
This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army...topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially
NASA Astrophysics Data System (ADS)
Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.
2017-03-01
Predicting the risk of occult invasive disease in ductal carcinoma in situ (DCIS) is an important task to help address the overdiagnosis and overtreatment problems associated with breast cancer. In this work, we investigated the feasibility of using computer-extracted mammographic features to predict occult invasive disease in patients with biopsy proven DCIS. We proposed a computer-vision algorithm based approach to extract mammographic features from magnification views of full field digital mammography (FFDM) for patients with DCIS. After an expert breast radiologist provided a region of interest (ROI) mask for the DCIS lesion, the proposed approach is able to segment individual microcalcifications (MCs), detect the boundary of the MC cluster (MCC), and extract 113 mammographic features from MCs and MCC within the ROI. In this study, we extracted mammographic features from 99 patients with DCIS (74 pure DCIS; 25 DCIS plus invasive disease). The predictive power of the mammographic features was demonstrated through binary classifications between pure DCIS and DCIS with invasive disease using linear discriminant analysis (LDA). Before classification, the minimum redundancy Maximum Relevance (mRMR) feature selection method was first applied to choose subsets of useful features. The generalization performance was assessed using Leave-One-Out Cross-Validation and Receiver Operating Characteristic (ROC) curve analysis. Using the computer-extracted mammographic features, the proposed model was able to distinguish DCIS with invasive disease from pure DCIS, with an average classification performance of AUC = 0.61 +/- 0.05. Overall, the proposed computer-extracted mammographic features are promising for predicting occult invasive disease in DCIS.
Jonke, A.A.
1957-10-01
An improved solvent extraction process is described for the extraction of metal values from highly dilute aqueous solutions. The process comprises contacting an aqueous solution with an organic substantially water-immiscible solvent, whereby metal values are taken up by a solvent extract phase; scrubbing the solvent extract phase with an aqueous scrubbing solution; separating an aqueous solution from the scrubbed solvent extract phase; and contacting the scrubbed solvent phase with an aqueous medium whereby the extracted metal values are removed from the solvent phase and taken up by said medium to form a strip solution containing said metal values, the aqueous scrubbing solution being a mixture of strip solution and an aqueous solution which contains mineral acid anions and is free of the metal values. The process is particularly effective for purifying uranium, where one starts with impure aqueous uranyl nitrate, extracts with tributyl phosphate dissolved in carbon tetrachloride, scrubs with aqueous nitric acid and employs water to strip the uranium from the scrubbed organic phase.
Region of interest extraction based on multiscale visual saliency analysis for remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan
2015-01-01
Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference-of-Gaussian template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map by calculating the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
Object-based vegetation classification with high resolution remote sensing imagery
NASA Astrophysics Data System (ADS)
Yu, Qian
Vegetation species are valuable indicators to understand the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling including climate, ecosystem and hydrological models. The rapidly growing remote sensing technology has increased its potential in vegetation species mapping. However, extracting information at a species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated some ways for accuracy improvement. The study consists of three phases. Firstly, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains the information on spatial vegetation variation and these species classes can be potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality from the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features. This evaluation analysis answered several interesting scientific questions such as (1) whether the sample characteristics affect the classification accuracy and, if so, how significant the effect is; (2) how much variance of classification uncertainty can be explained by the above factors. This research is carried out on a hilly peninsular area in a Mediterranean climate, Point Reyes National Seashore (PRNS) in Northern California. The area mainly consists of a heterogeneous, semi-natural broadleaf and conifer woodland, shrub land, and annual grassland. A detailed list of vegetation alliances is used in this study. Research results from the first phase indicate that vegetation spatial variation as reflected by the average local variance (ALV) keeps a high level of magnitude between 1 m and 4 m resolution. (Abstract shortened by UMI.)
Sanz-Estébanez, Santiago; Cordero-Grande, Lucilio; Sevilla, Teresa; Revilla-Orodea, Ana; de Luis-García, Rodrigo; Martín-Fernández, Marcos; Alberola-López, Carlos
2018-07-01
Left ventricular rotational motion is a feature of normal and diseased cardiac function. However, classical torsion and twist measures rely on the definition of a rotational axis which may not exist. This paper reviews global and local rotation descriptors of myocardial motion and introduces new curl-based (vortical) features built from tensorial magnitudes, intended to provide better comprehension of the mechanical properties of fibrotic tissue. Fifty-six cardiomyopathy patients and twenty-two healthy volunteers have been studied using tagged magnetic resonance by means of harmonic phase analysis. Rotation descriptors are built, with no assumption about a regular geometrical model, from different approaches. The extracted vortical features have been tested by means of a sequential cardiomyopathy classification procedure; they have proven useful for the regional characterization of the left ventricular function by showing great separability not only between pathologic and healthy patients but also, and specifically, between heterogeneous phenotypes within cardiomyopathies. Copyright © 2018 Elsevier B.V. All rights reserved.
Secure and Privacy Enhanced Gait Authentication on Smart Phone
Choi, Deokjai
2014-01-01
Smart environments established by the development of mobile technology have brought vast benefits to human beings. However, authentication mechanisms on portable smart devices, particularly conventional biometric-based approaches, still raise security and privacy concerns. These traditional systems are mostly based on pattern recognition and machine learning algorithms, wherein original biometric templates or extracted features are stored in unconcealed form for performing matching with a new biometric sample in the authentication phase. In this paper, we propose a novel gait-based authentication using a biometric cryptosystem to enhance system security and user privacy on the smart phone. Extracted gait features are merely used to biometrically encrypt a cryptographic key which acts as the authentication factor. Gait signals are acquired by using an inertial sensor, namely an accelerometer, in the mobile device, and error correcting codes are adopted to deal with the natural variation of gait measurements. We evaluate our proposed system on a dataset consisting of gait samples of 34 volunteers. We achieved the lowest false acceptance rate (FAR) and false rejection rate (FRR) of 3.92% and 11.76%, respectively, for a key length of 50 bits. PMID:24955403
HEp-2 cell image classification method based on very deep convolutional networks with small datasets
NASA Astrophysics Data System (ADS)
Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping
2017-07-01
Classification of Human Epithelial-2 (HEp-2) cell image staining patterns has been widely used to identify autoimmune diseases through the anti-nuclear antibodies (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because manual testing is time-consuming, subjective and labor intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Besides, the scale of available benchmark datasets is small, which is not well suited to deep learning methods. This issue directly influences the accuracy of cell classification, even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, utilizing very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases, namely image preprocessing, feature extraction and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results over two benchmark datasets demonstrate that the proposed method achieves superior performance in terms of accuracy compared with existing methods.
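A minimal VGG-style network of the kind referenced above can be sketched as follows, assuming PyTorch; the depth, channel counts, 64x64 grayscale input, and six output classes are illustrative choices rather than the paper's improved VGGNet.

```python
# A hedged sketch of a small VGG-style classifier for staining-pattern images.
import torch.nn as nn

def vgg_block(c_in, c_out, n_conv):
    layers = []
    for i in range(n_conv):
        layers += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1),
                   nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class SmallVGG(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(vgg_block(1, 32, 2),
                                      vgg_block(32, 64, 2),
                                      vgg_block(64, 128, 3))   # 64x64 input -> 8x8 maps
        self.classifier = nn.Sequential(nn.Flatten(),
                                        nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
                                        nn.Dropout(0.5), nn.Linear(256, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))
```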
Morphological learning in a novel language: A cross-language comparison.
Havas, Viktória; Waris, Otto; Vaquero, Lucía; Rodríguez-Fornells, Antoni; Laine, Matti
2015-01-01
Being able to extract and interpret the internal structure of complex word forms such as the English word dance+r+s is crucial for successful language learning. We examined whether the ability to extract morphological information during word learning is affected by the morphological features of one's native tongue. Spanish and Finnish adult participants performed a word-picture associative learning task in an artificial language where the target words included a suffix marking the gender of the corresponding animate object. The short exposure phase was followed by a word recognition task and a generalization task for the suffix. The participants' native tongues vary greatly in terms of morphological structure, leading to two opposing hypotheses. On the one hand, Spanish speakers may be more effective in identifying gender in a novel language because this feature is present in Spanish but not in Finnish. On the other hand, Finnish speakers may have an advantage as the abundance of bound morphemes in their language calls for continuous morphological decomposition. The results support the latter alternative, suggesting that lifelong experience on morphological decomposition provides an advantage in novel morphological learning.
Sulfur Speciation and Extraction in Jet A (Briefing Charts)
2015-08-16
Briefing charts on sulfur speciation and extraction in Jet A fuel (approximately 500-800 ppm sulfur by weight). Extraction fluid: denatured ethanol from Fisher Scientific and deionized water. Outline: background; experimental setup covering extraction of sulfur compounds from the fuel into the alcohol/water extraction fluid over successive rinses; hydrophobic/oleophilic and oleophobic/hydrophilic membranes separating the emulsion phase, fuel phase, and water (extraction fluid) phase. DISTRIBUTION A
Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain.
Zhuang, Ning; Zeng, Ying; Tong, Li; Zhang, Chi; Zhang, Hanming; Yan, Bin
2017-01-01
This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). By using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information from the IMFs is utilized as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is investigated, and we find that the high-frequency component IMF1 has a significant effect on the detection of different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and discrete wavelet transform (DWT). Experiment results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
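The three per-IMF features named above can be computed with a few lines of Python. The sketch below assumes the PyEMD package for the decomposition and the Hilbert transform for the instantaneous phase; any routine that returns an array of IMFs could be substituted.

```python
# A hedged sketch of the three per-IMF features: first difference of the time
# series, first difference of the instantaneous phase, and normalized energy.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumption: the EMD-signal (PyEMD) package is installed

def imf_features(signal):
    imfs = EMD().emd(signal)                        # rows are IMFs
    total_energy = np.sum(signal ** 2)
    feats = []
    for imf in imfs:
        phase = np.unwrap(np.angle(hilbert(imf)))
        feats += [np.mean(np.abs(np.diff(imf))),    # first difference of time series
                  np.mean(np.abs(np.diff(phase))),  # first difference of phase
                  np.sum(imf ** 2) / total_energy]  # normalized energy
    return np.array(feats)

eeg = np.random.randn(8064)          # one EEG channel (e.g., one DEAP-length trial)
print(imf_features(eeg)[:6])
```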
Engagement Assessment Using EEG Signals
NASA Technical Reports Server (NTRS)
Li, Feng; Li, Jiang; McKenzie, Frederic; Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean
2012-01-01
In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach, consisting of data preprocessing, feature extraction and engagement state classification. During data preprocessing, spikes, baseline drift and saturation caused by recording devices in EEG signals are identified and eliminated, and a wavelet based method is utilized to remove ocular and muscular artifacts in the EEG recordings. In feature extraction, power spectral densities in 1 Hz bins are calculated as features, and these features are analyzed using the Fisher score and the one-way ANOVA method. In the classification step, a committee classifier is trained based on the extracted features to assess engagement status. Finally, experiment results showed that there exist significant differences in the extracted features among different subjects, and we have implemented a feature normalization procedure to mitigate the differences and significantly improved the engagement assessment performance.
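A compact sketch of the feature-extraction and feature-analysis steps is shown below: Welch power spectral densities on approximately 1 Hz bins followed by a per-feature Fisher score. The sampling rate, band limit, and synthetic data are assumptions.

```python
# A hedged sketch of PSD features in ~1 Hz bins plus a Fisher score per feature.
import numpy as np
from scipy.signal import welch

def psd_features(epoch, fs=256, fmax=40):
    """epoch: (n_channels, n_samples). nperseg=fs gives ~1 Hz frequency bins."""
    freqs, pxx = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    keep = freqs <= fmax
    return pxx[:, keep].ravel()

def fisher_score(X, y):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mean) ** 2 for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

X = np.stack([psd_features(np.random.randn(8, 256 * 4)) for _ in range(50)])
y = np.random.randint(0, 2, 50)          # engaged vs. not engaged (toy labels)
print(fisher_score(X, y).max())
```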
The optional selection of micro-motion feature based on Support Vector Machine
NASA Astrophysics Data System (ADS)
Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing
2017-11-01
Targets exhibit multiple forms of micro-motion, and different micro-motion forms are easily confused after modulation, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal micro-motion feature selection method based on the support vector machine (SVM). After the time-frequency distribution of the radar echoes is computed, the time-frequency spectra of objects with different micro-motion forms are compared, and features are extracted based on the differences between the instantaneous frequency variations of the different micro-motions. The features are then evaluated with the SVM-based method and the best features are selected. Finally, the results show that the method proposed in this paper is feasible under test conditions with a given signal-to-noise ratio (SNR).
A Review of Feature Extraction Software for Microarray Gene Expression Data
Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini
2014-01-01
When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
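As a brief illustration of the first of the reviewed techniques, the snippet below applies PCA to a gene-expression-like matrix with scikit-learn; the matrix shape and component count are hypothetical.

```python
# A minimal PCA feature-extraction example on synthetic expression data.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(60, 5000)        # 60 samples x 5000 genes (stand-in data)
pca = PCA(n_components=20)
Z = pca.fit_transform(X)            # reduced representation: 60 x 20
print(Z.shape, pca.explained_variance_ratio_.sum())
```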
Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals
Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu
2012-01-01
Bearings are not only the most important element but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction technology, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has the advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signals data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017
Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui
2017-08-17
Diagnosing colorectal cancer at an early stage is urgent. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we developed a feature extraction method named Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using the integrated colorectal cancer data. Firstly, based on the optimal mean and the L2,1-norm, a novel feature extraction method called Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block idea into the OMRFE method, is put forward to process the integrated colorectal cancer data, which includes multiple genomic data types: copy number alterations, somatic mutations, methylation expression alteration, as well as gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.
A judicious multiple hypothesis tracker with interacting feature extraction
NASA Astrophysics Data System (ADS)
McAnanama, James G.; Kirubarajan, T.
2009-05-01
The multiple hypotheses tracker (mht) is recognized as an optimal tracking method due to the enumeration of all possible measurement-to-track associations, which does not involve any approximation in its original formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the computational complexity. It is possible to improve the performance of a tracker, mht or not, using feature information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems, the extraction of features from the raw sensor data is typically independent of the subsequent association and filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (jmht), whereby there is an interaction between feature extraction and the mht, is presented. The measure of the quality of feature extraction is input into measurement-to-track association while the prediction step feeds back the parameters to be used in the next round of feature extraction. The motivation for this forward and backward interaction between feature extraction and tracking is to improve the performance in both steps. This approach allows for a more rational partitioning of the feature space and removes unlikely features from the assignment problem. Simulation results demonstrate the benefits of the proposed approach.
A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.
Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun
2017-07-01
Feature extraction of EEG signals plays a significant role in Brain-computer interface (BCI) as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performances and to reduce the time complexity. This study develops a robust feature extraction method combining the principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from the mental states based on EEG signals in BCI applications. We apply the correlation based variable selection method with the best first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques: multilayer perceptron neural networks (MLP), least square support vector machine (LS-SVM), and logistic regression (LR) are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing it with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained feature set. The average sensitivity, specificity and classification accuracy for these two classifiers are the same, namely 99.32%, 100%, and 99.66%, respectively, for the BCI competition dataset IVa, and 100%, 100%, and 100% for the BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement in dataset IVa. The execution time results show that the proposed method has less time complexity after feature selection. The proposed feature extraction method is very effective for obtaining representative information from mental-state EEG signals in BCI applications and for reducing the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. All rights reserved.
Investigation of HV/HR-CMOS technology for the ATLAS Phase-II Strip Tracker Upgrade
NASA Astrophysics Data System (ADS)
Fadeyev, V.; Galloway, Z.; Grabas, H.; Grillo, A. A.; Liang, Z.; Martinez-Mckinney, F.; Seiden, A.; Volk, J.; Affolder, A.; Buckland, M.; Meng, L.; Arndt, K.; Bortoletto, D.; Huffman, T.; John, J.; McMahon, S.; Nickerson, R.; Phillips, P.; Plackett, R.; Shipsey, I.; Vigani, L.; Bates, R.; Blue, A.; Buttar, C.; Kanisauskas, K.; Maneuski, D.; Benoit, M.; Di Bello, F.; Caragiulo, P.; Dragone, A.; Grenier, P.; Kenney, C.; Rubbo, F.; Segal, J.; Su, D.; Tamma, C.; Das, D.; Dopke, J.; Turchetta, R.; Wilson, F.; Worm, S.; Ehrler, F.; Peric, I.; Gregor, I. M.; Stanitzki, M.; Hoeferkamp, M.; Seidel, S.; Hommels, L. B. A.; Kramberger, G.; Mandić, I.; Mikuž, M.; Muenstermann, D.; Wang, R.; Zhang, J.; Warren, M.; Song, W.; Xiu, Q.; Zhu, H.
2016-09-01
ATLAS has formed a strip CMOS project to study the use of CMOS MAPS devices as silicon strip sensors for the Phase-II Strip Tracker Upgrade. This choice of sensors promises several advantages over the conventional baseline design, such as better resolution, less material in the tracking volume, and faster construction speed. At the same time, many design features of the sensors are driven by the requirement of minimizing the impact on the rest of the detector. Hence the target devices feature long pixels which are grouped to form a virtual strip with binary-encoded z position. The key performance aspects are radiation hardness compatible with the HL-LHC environment, as well as extraction of the full hit position with a full-reticle readout architecture. To date, several test chips have been submitted using two different CMOS technologies. The AMS 350 nm process is a high-voltage CMOS (HV-CMOS) process that features a sensor bias of up to 120 V. The TowerJazz 180 nm high-resistivity CMOS (HR-CMOS) process uses a high-resistivity epitaxial layer to provide the depletion region on top of the substrate. We have evaluated passive pixel performance and charge collection projections. The results strongly support the radiation tolerance of these devices to the radiation dose of the HL-LHC in the strip tracker region. We also describe design features for the next chip submission that are motivated by our technology evaluation.
An Investigation of Aggregation in Synergistic Solvent Extraction Systems
NASA Astrophysics Data System (ADS)
Jackson, Andy Steven
With an increasing focus on anthropogenic climate change, nuclear reactors present an attractive option for base load power generation with regard to air pollution and carbon emissions, especially when compared with traditional fossil fuel based options. However, used nuclear fuel (UNF) is highly radiotoxic and contains minor actinides (americium and curium) which remain more radiotoxic than natural uranium ore for hundreds of thousands of years, presenting a challenge for long-term storage. Advanced nuclear fuel recycling can reduce this required storage time to thousands of years by removing the highly radiotoxic minor actinides. Many advanced separation schemes have been proposed to achieve this separation but none have been implemented to date. A key feature among many proposed schemes is the use of more than one extraction reagent in a single extraction phase, which can lead to the phenomenon known as "synergism" in which the extraction efficiency for a combination of the reagents is greater than that of the individual extractants alone. This feature is not well understood for many systems and a comprehensive picture of the mechanism behind synergism does not exist. There are several proposed mechanisms for synergism though none have been used to model multiple extraction systems. This work examines several proposed advanced extractant combinations which exhibit synergism: 2-bromodecanoic acid (BDA) with 2,2':6',2"-terpyridine (TERPY), tri-n-butylphosphine oxide (TPBO) with 2-thenoyltrifluoro acetone (HTTA), and dinonylnaphthalene sulfonic acid (HDNNS) with 5,8-diethyl-7-hydroxy-dodecan-6-oxime (LIX). We examine two proposed synergistic mechanisms and attempt to verify the ability of these mechanisms to predict the extraction behavior of the chosen systems. These are a reverse micellar catalyzed extraction model and a mixed complex formation model. Neither was able to effectively predict the synergistic behavior of the systems. We further examine these systems for the presence of large reverse micellar aggregates and thermodynamic signatures of aggregation. Behaviors differed widely from system to system, suggesting the possibility of more than one mechanism being responsible for similar observed extraction trends.
User-oriented summary extraction for soccer video based on multimodal analysis
NASA Astrophysics Data System (ADS)
Liu, Huayong; Jiang, Shanshan; He, Tingting
2011-11-01
An advanced user-oriented summary extraction method for soccer video is proposed in this work. Firstly, an algorithm for user-oriented summary extraction of soccer video is introduced. A novel approach is presented that integrates multimodal analysis, such as extraction and analysis of stadium features, moving object features, audio features and text features. From these features, the semantics of the soccer video and the highlight mode are obtained. The highlight positions can then be found and combined according to their highlight degrees to obtain the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.
NASA Astrophysics Data System (ADS)
Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.
2017-01-01
Multispectral data and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing recent related research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques to be analyzed further in recent developments of feature extraction and classification.
Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne
2012-01-01
We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time and cost prohibitive for mapping large spatial extents. The SPOT imagery produced map products that were far more accurate than Landsat and did so at a far lower cost than QuickBird imagery. Consideration of degree of map accuracy required, costs associated with image acquisition, software, operator and computation time, and tradeoffs in the form of spatial extent versus resolution should all be considered when evaluating which combination of imagery and information-extraction method might best serve any given land use mapping project. When resources permit, attaining imagery that supports the highest classification and measurement accuracy possible is recommended.
Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav
2014-03-01
Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the different available methods, feature-based methods are very dominant. While many feature extraction techniques have been employed, it is still not quite clear which feature extraction methods should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of using different wavelet transform feature extraction methods in brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three different classifiers, namely Support Vector Machine, K-Nearest Neighbor, and Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM result in the highest classification accuracy, demonstrating the capability of wavelet transform features to be informative in this application.
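Only the plain DWT branch of the feature pool is sketched below, using PyWavelets to compute per-subband statistics that are then fed to an SVM; the wavelet, decomposition level, and synthetic slices are assumptions, not the study's exact feature set.

```python
# A hedged sketch of a 2D DWT feature pool plus an SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0])), np.std(coeffs[0])]   # approximation subband
    for cH, cV, cD in coeffs[1:]:                              # detail subbands per level
        for band in (cH, cV, cD):
            feats += [np.mean(np.abs(band)), np.std(band), np.sum(band ** 2)]
    return np.array(feats)

X = np.stack([dwt_features(np.random.rand(128, 128)) for _ in range(40)])
y = np.random.randint(0, 2, 40)                                # normal vs. abnormal slice
clf = SVC(kernel="rbf").fit(X, y)
```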
Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images
NASA Astrophysics Data System (ADS)
Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.
2018-04-01
A novel method for detecting ships which aims to make full use of both the spatial and spectral information from hyperspectral images is proposed. Firstly, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Secondly, multiple features that include spectral and texture features are extracted from the hyperspectral images: principal components analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on the EO-1 data, comparing a single feature with different combinations of multiple features. Compared with the traditional single-feature method and the Support Vector Machine (SVM) model, the proposed method can stably achieve the detection of ships against complex backgrounds and can effectively improve the detection accuracy of ships.
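A condensed sketch of this pipeline for a single band and a set of image chips is given below: Otsu sea-land masking, GLCM texture descriptors, and a Random Forest. Function names follow recent scikit-image releases (graycomatrix/graycoprops; older releases use the "grey" spelling), and the data are synthetic.

```python
# A hedged sketch of Otsu masking + GLCM texture features + Random Forest.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(chip):
    """chip: 2D uint8 image patch; returns a few GLCM texture statistics."""
    glcm = graycomatrix(chip, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

band = (np.random.rand(64, 64) * 255).astype(np.uint8)     # stand-in high-SNR SWIR band
sea_mask = band < threshold_otsu(band)                     # Otsu land/sea split

X = np.stack([glcm_features((np.random.rand(32, 32) * 255).astype(np.uint8))
              for _ in range(30)])
y = np.random.randint(0, 2, 30)                             # ship vs. background chip
rf = RandomForestClassifier(n_estimators=100).fit(X, y)
```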
NASA Astrophysics Data System (ADS)
Yuliusman; Huda, M.; Ramadhan, I. T.; Farry, A. R.; Wulandari, P. T.; Alfia, R.
2018-03-01
This study was conducted to recover nickel metal from a spent nickel catalyst resulting from the hydrotreating process in the petroleum industry. Nickel extraction was studied with an emulsion liquid membrane using Cyanex 272 as an extractant to extract and separate nickel from the feed phase solution. The feed phase solution was prepared from the spent catalyst using sulphuric acid. The liquid membrane consists of kerosene as diluent, Span 80 as surfactant, and Cyanex 272 as carrier, and sulphuric acid solutions were used as the stripping solution. The important parameters governing the permeation of nickel and their effect on the separation process have been studied. These parameters are surfactant concentration, extractant concentration, and feed phase pH. The optimum conditions for preparing the emulsion membrane are 0.06 M Cyanex 272, 8% w/v Span 80, 0.05 M H2SO4, an internal phase/extractant phase volume ratio of 1/1, and a stirring speed of 1150 rpm for 60 minutes, which produce an emulsion membrane with a stability above 90% after 4 hours. In the extraction process under the optimum conditions of feed phase pH 6, an emulsion phase/feed phase ratio of 1/2, and a stirring speed of 175 rpm for 15 minutes, 81.51% of the nickel was extracted.
Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning
2018-03-08
Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
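For orientation, the snippet below computes one of the simplest descriptors of this kind, amino acid composition (AAC), in plain Python. It is written for illustration only and is not taken from the iFeature codebase; the toolkit itself should be obtained from the URLs above.

```python
# A minimal, independent illustration of the AAC descriptor (not iFeature's code).
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(sequence):
    """Fraction of each of the 20 standard amino acids in a protein sequence."""
    seq = sequence.upper()
    counts = Counter(seq)
    n = len(seq)
    return [counts.get(a, 0) / n for a in AMINO_ACIDS]

print(aac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))   # 20-dimensional feature vector
```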
Zhang, Rong; Watson, David G; Wang, Lijie; Westrop, Gareth D; Coombs, Graham H; Zhang, Tong
2014-10-03
It has been reported that HILIC column chemistry has a great effect on the number of detected metabolites in LC-HRMS-based untargeted metabolite profiling studies. However, no systematic investigation has been carried out with regard to the optimisation of mobile phase characteristics. In this study using 223 metabolite standards, we explored the retention mechanisms on three zwitterionic columns with varied mobile phase composition, demonstrated the interference from poor chromatographic peak shapes on the output of data extraction, and assessed the quality of chromatographic signals and the separation of isomers under each LC condition. As expected, on the ZIC-cHILIC column the acidic metabolites showed improved chromatographic performance at low pH which can be attributed to the opposite arrangement of the permanently charged groups on this column in comparison with the ZIC-HILIC column. Using extracts from the protozoan parasite Leishmania, we compared the numbers of repeatedly detected LC-HRMS features under different LC conditions with putative identification of metabolites not amongst the standards being based on accurate mass (±3ppm). Besides column chemistry, the pH of the mobile phase plays a key role in not only determining the retention mechanisms of solutes but also the output of the LC-HRMS data processing. Fast evaporation of ammonium carbonate produced less ion suppression in ESI source and consequently improved the detectability of the metabolites in low abundance in comparison with other ammonium salts. Our results show that the combination of a ZIC-pHILIC column with an ammonium carbonate mobile phase, pH 9.2, at 20mM in the aqueous phase or 10mM in both aqueous and organic mobile phase components, provided the most suitable LC conditions for LC-HRMS-based untargeted metabolite profiling of Leishmania parasite extracts. The signal reliability of the mass spectrometer used in this study (Exactive Orbitrap) was also investigated. Copyright © 2014 Elsevier B.V. All rights reserved.
A graph-Laplacian-based feature extraction algorithm for neural spike sorting.
Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos
2009-01-01
Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
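A rough sketch in the spirit of this approach (not the authors' exact GLF algorithm) is to build a k-nearest-neighbour graph over spike waveforms, form its Laplacian, and keep the smallest non-trivial eigenvectors as low-dimensional features for the clustering stage; the neighbourhood size and the random snippets below are assumptions.

```python
# A hedged Laplacian-eigenvector feature sketch for spike waveforms.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_features(waveforms, n_features=3, k=10):
    W = kneighbors_graph(waveforms, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()                 # symmetrize the adjacency matrix
    L = np.diag(W.sum(axis=1)) - W                # unnormalized graph Laplacian
    vals, vecs = eigh(L)
    return vecs[:, 1:n_features + 1]              # skip the constant eigenvector

spikes = np.random.randn(500, 48)                 # 500 aligned spike snippets
features = laplacian_features(spikes)             # input to a subsequent clustering stage
```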
NASA Astrophysics Data System (ADS)
Haftbaradaran, H.; Maddahian, A.; Mossaiby, F.
2017-05-01
It is well known that phase separation could severely intensify mechanical degradation and expedite capacity fading in lithium-ion battery electrodes during electrochemical cycling. Experiments have frequently revealed that such degradation effects could be substantially mitigated via reducing the electrode feature size to the nanoscale. The purpose of this work is to present a fracture mechanics study of the phase separating planar electrodes. To this end, a phase field model is utilized to predict how phase separation affects evolution of the solute distribution and stress profile in a planar electrode. Behavior of the preexisting flaws in the electrode in response to the diffusion induced stresses is then examined via computing the time dependent stress intensity factor arising at the tip of flaws during both the insertion and extraction half-cycles. Further, adopting a sharp-interphase approximation of the system, a critical electrode thickness is derived below which the phase separating electrode becomes flaw tolerant. Numerical results of the phase field model are also compared against analytical predictions of the sharp-interphase model. The results are further discussed with reference to the available experiments in the literature. Finally, some of the limitations of the model are cautioned.
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract the robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method by employing the DAISY descriptor and applying entropy-calculation-based filtering. The experimental results show that the proposed method achieves satisfactory robustness under the premise of ensuring watermark imperceptibility quality compared to other existing methods.
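Distortion-compensated dither modulation itself is easy to illustrate on a one-dimensional coefficient sequence. The sketch below embeds and decodes one bit per coefficient; the step size, compensation factor, and dither generation are illustrative assumptions, and the paper's DAISY-based feature selection is not reproduced.

```python
# A hedged numerical sketch of DC-DM embedding and minimum-distance decoding.
import numpy as np

def dcdm_embed(x, bits, delta=4.0, alpha=0.7, key=0):
    rng = np.random.default_rng(key)
    dither = rng.uniform(-delta / 2, delta / 2, size=len(x))
    d = dither + bits * (delta / 2)                       # bit-dependent dither
    q = np.round((x - d) / delta) * delta + d             # fully quantized (embedded) value
    return x + alpha * (q - x)                            # distortion compensation

def dcdm_decode(y, delta=4.0, key=0):
    rng = np.random.default_rng(key)
    dither = rng.uniform(-delta / 2, delta / 2, size=len(y))
    err = [np.abs(y - (np.round((y - (dither + b * delta / 2)) / delta) * delta
                       + dither + b * delta / 2)) for b in (0, 1)]
    return (err[1] < err[0]).astype(int)                  # pick the closer lattice

x = np.random.randn(16) * 10
bits = np.random.randint(0, 2, 16)
y = dcdm_embed(x, bits)
print(np.mean(dcdm_decode(y) == bits))                    # ~1.0 in the attack-free case
```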
Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images
NASA Astrophysics Data System (ADS)
Eken, S.; Aydın, E.; Sayar, A.
2017-11-01
In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.
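One of the descriptors the tool covers (ORB) can be extracted on a single node with OpenCV as shown below; the distributed Hadoop/HIPI wrapping described above is not reproduced, and the tile filename is hypothetical.

```python
# A minimal single-node ORB extraction example with OpenCV (not the distributed tool).
import cv2

tile = cv2.imread("landsat8_tile.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical tile path
orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(tile, None)
print(len(keypoints), descriptors.shape)      # e.g. up to 2000 keypoints x 32-byte descriptors
```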
NASA Technical Reports Server (NTRS)
Botha, Pieter; Butcher, Alan R.; Horsch, Hana; Rickman, Doug; Wentworth, Susan J.; Schrader, Christian M.; Stoeser, Doug; Benedictus, Aukje; Gottlieb, Paul; McKay, David
2008-01-01
Polished thin-sections of samples extracted from Apollo drive tubes provide unique insights into the structure of the Moon's regolith at various landing sites. In particular, they allow the mineralogy and texture of the regolith to be studied as a function of depth. Much has been written about such thin-sections based on optical, SEM and EPMA studies, in terms of their essential petrographic features, but there has been little attempt to quantify these aspects from a spatial perspective. In this study, we report the findings of experimental analysis of two thin-sections (64002, 6019, depth range 5.0 - 8.0 cm & 64001, 6031, depth range 50.0 - 53.1 cm), from a single Apollo 16 drive tube using QEMSCAN . A key feature of the method is phase identification by ultrafast energy dispersive x-ray mapping on a pixel-by-pixel basis. By selecting pixel resolutions ranging from 1 - 5 microns, typically 8,500,000 individual measurement points can be collected on a thin-section. The results we present include false colour digital images of both thin-sections. From these images, information such as phase proportions (major, minor and trace phases), particle textures, packing densities, and particle geometries, has been quantified. Parameters such as porosity and average phase density, which are of geomechanical interest, can also be calculated automatically. This study is part of an on-going investigation into spatial variation of lunar regolith and NASA's ISRU Lunar Simulant Development Project.
Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures
NASA Astrophysics Data System (ADS)
Li, Quanbao; Wei, Fajie; Zhou, Shenghan
2017-05-01
Linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used feature extraction approaches usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA has the advantages of both parametric and nonparametric algorithms and higher classification accuracy. A quartic unilateral kernel function may provide better robustness of prediction than other functions. LKNDA gives an alternative solution for discriminant cases of complex nonlinear feature extraction or unknown feature extraction. Finally, an application of LKNDA to the complex feature extraction of financial market activities is proposed.
AI-augmented time stretch microscopy
NASA Astrophysics Data System (ADS)
Mahjoubfar, Ata; Chen, Claire L.; Lin, Jiahao; Jalali, Bahram
2017-02-01
Cell reagents used in biomedical analysis often change the behavior of the cells to which they are attached, inhibiting their native signaling. On the other hand, label-free cell analysis techniques have long been viewed as challenging, either due to insufficient accuracy from limited features or because throughput is sacrificed for improved precision. We present a recently developed artificial-intelligence augmented microscope, which builds upon high-throughput time stretch quantitative phase imaging (TS-QPI) and deep learning to perform label-free cell classification with record high accuracy. Our system captures quantitative optical phase and intensity images simultaneously by frequency multiplexing, extracts multiple biophysical features of the individual cells from these fused images, and feeds these features into a supervised machine learning model for classification. The enhanced performance of our system compared to other label-free assays is demonstrated by the classification of white blood T-cells versus colon cancer cells and of lipid-accumulating algal strains for biofuel production, achieving as much as a five-fold reduction in inaccuracy. This system obtains the accuracy required in practical applications such as personalized drug development, while the cells remain intact and the throughput is not sacrificed. Here, we introduce a data acquisition scheme based on quadrature phase demodulation that enables uninterrupted storage of TS-QPI cell images. Our proof-of-principle demonstration is capable of saving 40 TB of cell images in about four hours, i.e. pictures of every single cell in 10 mL of a sample.
Reliable structural information from multiscale decomposition with the Mellor-Brady filter
NASA Astrophysics Data System (ADS)
Szilágyi, Tünde; Brady, Michael
2009-08-01
Image-based medical diagnosis typically relies on the (poorly reproducible) subjective classification of textures in order to differentiate between diseased and healthy pathology. Clinicians claim that significant benefits would arise from quantitative measures to inform clinical decision making. The first step in generating such measures is to extract local image descriptors - from noise-corrupted and often spatially and temporally coarse-resolution medical signals - that are invariant to illumination, translation, scale and rotation of the features. The Dual-Tree Complex Wavelet Transform (DT-CWT) provides a wavelet multiresolution analysis (WMRA) tool with good properties in 2D, but it has limited rotational selectivity. It also requires computationally intensive steering due to the inherently 1D operations performed. The monogenic signal, which is defined in n >= 2 dimensions via the Riesz transform, gives excellent orientation information without the need for steering. Recent work has suggested the Monogenic Riesz-Laplace wavelet transform as a possible tool for integrating these two concepts into a coherent mathematical framework. We have found that the proposed construction suffers from a lack of rotational invariance and is not optimal for retrieving local image descriptors. In this paper we show: 1. Local frequency and local phase from the monogenic signal are not equivalent, especially in the phase congruency model of a "feature", and so they are not interchangeable for medical image applications. 2. The accuracy of local phase computation may be improved by estimating the denoising parameters while maximizing a new measure of "featureness".
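Since the Riesz transform and local phase are central to the discussion above, the following NumPy sketch computes a basic monogenic signal (local amplitude, phase and orientation) of a 2-D image via FFT-domain Riesz kernels. It omits the band-pass (e.g., log-Gabor) prefiltering and the wavelet machinery of the paper and is only meant to make the quantities concrete.

```python
import numpy as np

def monogenic(img):
    """Riesz-transform-based monogenic signal of a 2-D image (minimal sketch).

    Returns local amplitude, local phase and local orientation; in practice the
    image would first be band-passed (e.g., with a log-Gabor filter).
    """
    img = img.astype(float)
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                      # avoid division by zero at DC
    H1 = 1j * u / radius                    # Riesz kernels in the frequency domain
    H2 = 1j * v / radius

    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * H1))      # first Riesz component
    r2 = np.real(np.fft.ifft2(F * H2))      # second Riesz component

    amplitude = np.sqrt(img ** 2 + r1 ** 2 + r2 ** 2)
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), img)    # local phase
    orientation = np.arctan2(r2, r1)                        # local orientation
    return amplitude, phase, orientation
```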
Asymmetric Operation of the Locomotor Central Pattern Generator in the Neonatal Mouse Spinal Cord
Endo, Toshiaki; Kiehn, Ole
2008-01-01
The rhythmic voltage oscillations in motor neurons (MNs) during locomotor movements reflect the operation of the pre-MN central pattern generator (CPG) network. Recordings from MNs can thus be used as a method to deduce the organization of CPGs. Here, we use continuous conductance measurements and decomposition methods to quantitatively assess the weighting and phase tuning of synaptic inputs to different flexor and extensor MNs during locomotor-like activity in the isolated neonatal mouse lumbar spinal cord preparation. Whole cell recordings were obtained from 22 flexor and 18 extensor MNs in rostral and caudal lumbar segments. In all flexor and the large majority of extensor MNs the extracted excitatory and inhibitory synaptic conductances alternate but with a predominance of inhibitory conductances, most pronounced in extensors. These conductance changes are consistent with a “push–pull” operation of the locomotor CPG. The extracted excitatory and inhibitory synaptic conductances varied between 2 and 56% of the mean total conductance. Analysis of the phase tuning of the extracted synaptic conductances in flexor and extensor MNs in the rostral lumbar cord showed that the flexor-phase–related synaptic conductance changes have sharper locomotor-phase tuning than the extensor-phase–related conductances, suggesting a modular organization of premotor CPG networks consisting of reciprocally coupled, but differently composed, flexor and extensor CPG networks. There was a clear difference between phase tuning in rostral and caudal MNs, suggesting a distinct operation of CPG networks in different lumbar segments. The highly asymmetric features were preserved throughout all ranges of locomotor frequencies investigated and with different combinations of locomotor-inducing drugs. The asymmetric nature of CPG operation and phase tuning of the conductance profiles provide important clues to the organization of the rodent locomotor CPG and are compatible with a multilayered and distributed structure of the network. PMID:18829847
Wen, Tingxi; Zhang, Zhongnan
2017-01-01
Abstract In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
Wen, Tingxi; Zhang, Zhongnan
2017-05-01
In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
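As a rough illustration of the kind of frequency-domain and instantaneous-frequency candidate features that a search such as GAFDS could operate on, the sketch below computes band powers and Hilbert-transform instantaneous-frequency statistics for one EEG epoch with SciPy. The sampling rate, band edges and the random epoch are placeholder assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import hilbert, welch

def eeg_features(x, fs=256.0, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Toy frequency-domain + instantaneous-frequency features for one EEG epoch.

    This is not the GAFDS search itself; it only illustrates the candidate
    features (band powers, Hilbert instantaneous-frequency statistics) that a
    genetic algorithm could select among.
    """
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 512))
    band_power = [np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
                  for lo, hi in bands]

    analytic = hilbert(x)
    inst_phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)     # instantaneous frequency in Hz
    return np.array(band_power + [inst_freq.mean(), inst_freq.std()])

epoch = np.random.randn(1024)          # placeholder for a real EEG segment
print(eeg_features(epoch))
```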
Skeletonization with hollow detection on gray image by gray weighted distance transform
NASA Astrophysics Data System (ADS)
Bhattacharya, Prabir; Qian, Kai; Cao, Siqi; Qian, Yi
1998-10-01
A skeletonization algorithm that can be used to process non-uniformly distributed gray-scale images with hollows is presented. The algorithm is based on the Gray Weighted Distance Transformation. The process includes a preliminary phase that investigates the hollows in the gray-scale image; these hollows are treated as topological constraints for the skeleton structure depending on whether their depth is statistically significant. We then extract the resulting skeleton, which carries meaningful information for understanding the object in the image. This improved algorithm can overcome the possible misinterpretation of some complicated images in the extracted skeleton, especially in images with asymmetric hollows and asymmetric features. The algorithm can be executed on a parallel machine since all the operations are local. Some examples are discussed to illustrate the algorithm.
Shen, Aijin; Wei, Jie; Yan, Jingyu; Jin, Gaowa; Ding, Junjie; Yang, Bingcheng; Guo, Zhimou; Zhang, Feifang; Liang, Xinmiao
2017-03-01
An orthogonal two-dimensional solid-phase extraction strategy was established for the selective enrichment of three aminoglycosides including spectinomycin, streptomycin, and dihydrostreptomycin in milk. A reversed-phase liquid chromatography material (C 18 ) and a weak cation-exchange material (TGA) were integrated in a single solid-phase extraction cartridge. The feasibility of two-dimensional clean-up procedure that experienced two-step adsorption, two-step rinsing, and two-step elution was systematically investigated. Based on the orthogonality of reversed-phase and weak cation-exchange procedures, the two-dimensional solid-phase extraction strategy could minimize the interference from the hydrophobic matrix existing in traditional reversed-phase solid-phase extraction. In addition, high ionic strength in the extracts could be effectively removed before the second dimension of weak cation-exchange solid-phase extraction. Combined with liquid chromatography and tandem mass spectrometry, the optimized procedure was validated according to the European Union Commission directive 2002/657/EC. A good performance was achieved in terms of linearity, recovery, precision, decision limit, and detection capability in milk. Finally, the optimized two-dimensional clean-up procedure incorporated with liquid chromatography and tandem mass spectrometry was successfully applied to the rapid monitoring of aminoglycoside residues in milk. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A portable foot-parameter-extracting system
NASA Astrophysics Data System (ADS)
Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan
2016-03-01
In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple frequency phase shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model has been put forward to obtain the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of two clouds is achieved by performing a first alignment using the Sampled Consensus - Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot parameter extraction experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system can be more portable, accurate and robust.
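The coarse-to-fine alignment described above can be prototyped with an off-the-shelf registration library; the sketch below refines an initial guess with point-to-point ICP using Open3D. Here the SAC-IA coarse stage is replaced by an identity initialization, and the synthetic point clouds stand in for real foot scans.

```python
import numpy as np
import open3d as o3d

rng = np.random.default_rng(0)
pts = rng.random((500, 3))                     # stand-in for one scanned foot view

source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(pts)

target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(pts + np.array([0.02, 0.0, 0.01]))  # slightly shifted second view

init = np.eye(4)   # in the paper, a SAC-IA coarse alignment would supply this guess
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness, result.inlier_rmse)
print(result.transformation)                   # 4x4 rigid transform refining the alignment
```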
Slip avoidance strategies in children with bilateral spastic cerebral palsy and crouch gait.
Kleiner, Ana Francisca Rozin; Pacifici, Ilaria; Condoluci, Claudia; Sforza, Chiarella; Galli, Manuela
2018-06-01
A slip occurs when the required coefficient of friction (RCOF) to prevent slipping at the foot/floor interface exceeds the available friction. The RCOF depends on the biomechanical features of individuals and their gait; the available friction, on the other hand, depends on environmental features. Given that individuals with crouch gait have completely altered gait biomechanics, how do they interact with the supporting surface? The aim was to quantify the RCOF in children with bilateral spastic cerebral palsy (BSCP) and crouch gait. Eleven children with crouch gait and 11 healthy age-matched children were instructed to walk barefoot at self-selected speed over a force platform. The RCOF curve was obtained as the ratio between the tangential forces (FT) and the vertical ground reaction force (FZ). Three points were extracted from the RCOF, FT and FZ curves at the loading response, midstance and push-off phases. Children with BSCP presented higher values of RCOF throughout the support phase and lower gait velocity relative to the healthy controls. For the BSCP group no correlation between FT and FZ was found, indicating that this group is not able to negotiate the forces during the support phase. Children with BSCP and crouch gait are not able to negotiate the forces applied on the ground during the support phase, so, to avoid falling, their strategy is to reduce gait velocity. Copyright © 2018 Elsevier Ltd. All rights reserved.
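The ratio described above is straightforward to compute from force-plate data; a minimal sketch, assuming synthetic stance-phase traces in newtons, is given below.

```python
import numpy as np

def required_cof(fx, fy, fz, load_threshold=0.05):
    """Required coefficient of friction (RCOF) during the support phase.

    fx, fy : shear (tangential) ground reaction force components, N
    fz     : vertical ground reaction force, N
    Samples where fz is below load_threshold * max(fz) are ignored to avoid
    dividing by near-zero forces at foot contact and toe-off.
    """
    fz = np.asarray(fz, dtype=float)
    tangential = np.hypot(fx, fy)
    valid = fz > load_threshold * fz.max()
    rcof = np.full_like(fz, np.nan)
    rcof[valid] = tangential[valid] / fz[valid]
    return rcof

# Toy stance-phase traces (placeholders for real force-plate data).
t = np.linspace(0, 1, 100)
fz = 700 * np.sin(np.pi * t) + 1
fx = 80 * np.sin(2 * np.pi * t)
fy = 20 * np.cos(2 * np.pi * t)
print(np.nanmax(required_cof(fx, fy, fz)))     # peak RCOF over the support phase
```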
NASA Astrophysics Data System (ADS)
Chen, Jingbo; Yue, Anzhi; Wang, Chengyi; Huang, Qingqing; Chen, Jiansheng; Meng, Yu; He, Dongxu
2018-01-01
The wind turbine is a device that converts the wind's kinetic energy into electrical power. Accurate and automatic extraction of wind turbines is instructive for government departments planning wind power plant projects. A hybrid and practical framework based on saliency detection for wind turbine extraction, using Google Earth imagery at a spatial resolution of 1 m, is proposed. It can be viewed as a two-phase procedure: coarse detection and fine extraction. In the first stage, we introduce a frequency-tuned saliency detection approach for initially detecting the areas of interest around the wind turbines. This method exploits color and luminance features, is simple to implement, and is computationally efficient. Taking into account the complexity of remote sensing images, in the second stage we propose a fast method for fine-tuning the results in the frequency domain and then extract wind turbines from these salient objects by removing the irrelevant salient areas according to the special properties of the wind turbines. Experiments demonstrate that our approach consistently obtains higher precision and better recall rates. Our method was also compared with other techniques from the literature and proved to be more applicable and robust.
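A frequency-tuned saliency map of the kind used in the coarse detection stage can be sketched in a few lines of OpenCV: the saliency of a pixel is taken as the distance between the image's mean Lab colour and a slightly blurred Lab image (after Achanta et al.). The synthetic image and the thresholding rule are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def frequency_tuned_saliency(bgr):
    """Frequency-tuned saliency in the spirit of Achanta et al. (2009)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    mean = lab.reshape(-1, 3).mean(axis=0)            # mean Lab colour of the whole image
    sal = np.linalg.norm(blurred - mean, axis=2)      # per-pixel colour distance
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)

# Synthetic scene: a bright target on a darker background, standing in for a
# 1 m resolution Google Earth patch containing a turbine.
img = np.full((128, 128, 3), (60, 120, 60), dtype=np.uint8)
cv2.circle(img, (64, 64), 8, (255, 255, 255), -1)

saliency = frequency_tuned_saliency(img)
mask = (saliency > 2 * saliency.mean()).astype(np.uint8)   # coarse regions of interest
print(mask.sum())
```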
Carbon Nanotubes Application in the Extraction Techniques of Pesticides: A Review.
Jakubus, Aleksandra; Paszkiewicz, Monika; Stepnowski, Piotr
2017-01-02
Carbon nanotubes (CNTs) are currently one of the most promising groups of materials, with interesting properties such as lightness, rigidity, high surface area, high mechanical strength in tension, good thermal conductivity and resistance to mechanical damage. These unique properties make CNTs a competitive alternative to conventional sorbents used in analytical chemistry, especially in extraction techniques. The amount of work discussing the usefulness of CNTs as a sorbent in a variety of extraction techniques has increased significantly in recent years. In this review article, the most important features and different applications of solid-phase extraction (SPE), including classical SPE and dispersive SPE using CNTs for pesticide isolation from different matrices, are summarized. Because of the high number of articles concerning the applicability of carbon materials to the extraction of pesticides, the main aim of this review is to provide an updated account of the latest uses of CNTs, covering the period 2006-2015. Recent papers, as well as those covered in previous reviews, are addressed, and particular attention has been paid to the division of publications in terms of classes of pesticides, in order to systematize the available literature reports.
Process for radioisotope recovery and system for implementing same
Meikrantz, David H [Idaho Falls, ID; Todd, Terry A [Aberdeen, ID; Tranter, Troy J [Idaho Falls, ID; Horwitz, E Philip [Naperville, IL
2009-10-06
A method of recovering daughter isotopes from a radioisotope mixture. The method comprises providing a radioisotope mixture solution comprising at least one parent isotope. The at least one parent isotope is extracted into an organic phase, which comprises an extractant and a solvent. The organic phase is substantially continuously contacted with an aqueous phase to extract at least one daughter isotope into the aqueous phase. The aqueous phase is separated from the organic phase, such as by using an annular centrifugal contactor. The at least one daughter isotope is purified from the aqueous phase, such as by ion exchange chromatography or extraction chromatography. The at least one daughter isotope may include actinium-225, radium-225, bismuth-213, or mixtures thereof. A liquid-liquid extraction system for recovering at least one daughter isotope from a source material is also disclosed.
Process for radioisotope recovery and system for implementing same
Meikrantz, David H.; Todd, Terry A.; Tranter, Troy J.; Horwitz, E. Philip
2007-01-02
A method of recovering daughter isotopes from a radioisotope mixture. The method comprises providing a radioisotope mixture solution comprising at least one parent isotope. The at least one parent isotope is extracted into an organic phase, which comprises an extractant and a solvent. The organic phase is substantially continuously contacted with an aqueous phase to extract at least one daughter isotope into the aqueous phase. The aqueous phase is separated from the organic phase, such as by using an annular centrifugal contactor. The at least one daughter isotope is purified from the aqueous phase, such as by ion exchange chromatography or extraction chromatography. The at least one daughter isotope may include actinium-225, radium-225, bismuth-213, or mixtures thereof. A liquid-liquid extraction system for recovering at least one daughter isotope from a source material is also disclosed.
Chen, Zhi; Zhang, Wei; Tang, Xunyou; Fan, Huajun; Xie, Xiujuan; Wan, Qiang; Wu, Xuehao; Tang, James Z
2016-06-25
A novel and rapid method for the simultaneous extraction and separation of different polysaccharides from Semen Cassiae (SC) was developed by microwave-assisted aqueous two-phase extraction (MAATPE) in a one-step procedure. Using an ethanol/ammonium sulfate aqueous two-phase system (ATPS) as a multiphase solvent, the effects of the ATPS composition, extraction time, temperature and solvent-to-material ratio on the extraction of polysaccharides from SC were investigated by UV-vis analysis. Under the optimum conditions, the yields of polysaccharides were 4.49% for the top phase, 8.80% for the bottom phase and 13.29% for total polysaccharides, respectively. Compared with heated solvent extraction and ultrasound-assisted extraction, MAATPE exhibited higher extraction yields in a shorter time. Fourier-transform infrared spectra showed that the two polysaccharides extracted from SC into the top and bottom phases by MAATPE differed from each other in their chemical structures. Following acid hydrolysis and PMP derivatization prior to HPLC, the analytical results indicated that the polysaccharide of the top phase was a relatively homogeneous homopolysaccharide dominated by glucose, while that of the bottom phase was a water-soluble heteropolysaccharide with multiple components of glucose, xylose, arabinose, galactose, mannose and glucuronic acid. The molar ratios of monosaccharides were 95.13:4.27:0.60 of glucose:arabinose:galactose for the polysaccharide from the top phase and 62.96:14.07:6.67:6.67:5.19:4.44 of glucose:xylose:arabinose:galactose:mannose:glucuronic acid for that from the bottom phase, respectively. The mechanism of the MAATPE process is also discussed in detail. MAATPE, with the aid of microwaves and the selectivity of the ATPS, not only improved extraction yields but also yielded a variety of polysaccharides. Hence, it proved to be a green, efficient and promising alternative for the simultaneous extraction of polysaccharides from SC. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zhang, Heng; Pan, Zhongming; Zhang, Wenna
2018-06-07
An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signal or seismic signal alone.
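A minimal version of the wavelet coefficient energy ratio feature can be written with PyWavelets, whose stationary wavelet transform implements the à trous scheme mentioned above. The wavelet choice, decomposition level and random frames are assumptions for illustration.

```python
import numpy as np
import pywt

def wcer_features(signal, wavelet="db4", level=4):
    """Wavelet coefficient energy ratio (WCER) feature vector of a 1-D signal.

    Uses the stationary (undecimated) wavelet transform; the signal length
    must be divisible by 2**level.
    """
    coeffs = pywt.swt(signal, wavelet, level=level)      # [(cA_n, cD_n), ..., (cA_1, cD_1)]
    detail_energy = np.array([np.sum(cD ** 2) for _, cD in coeffs])
    approx_energy = np.sum(coeffs[0][0] ** 2)            # coarsest approximation band
    total = detail_energy.sum() + approx_energy
    return np.concatenate([detail_energy, [approx_energy]]) / total

# Placeholder acoustic and seismic frames from one detection window.
acoustic_feat = wcer_features(np.random.randn(1024))
seismic_feat = wcer_features(np.random.randn(1024))
mixed_feature = np.concatenate([acoustic_feat, seismic_feat])   # acoustic-seismic mixed feature
print(mixed_feature)
```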
NASA Astrophysics Data System (ADS)
Gascooke, Jason R.; Lawrance, Warren D.
2017-11-01
Two dimensional laser induced fluorescence (2D-LIF) extends the usual laser induced fluorescence technique by adding a second dimension, the wavelength at which excited states emit, thereby significantly enhancing the information that can be extracted. It allows overlapping absorption features, whether they arise from within the same molecule or from different molecules in a mixture, to be associated with their appropriate "parent" state and/or molecule. While the first gas phase version of the technique was published a decade ago, the technique is in its infancy, having been exploited by only a few groups to date. However, its potential in gas phase spectroscopy and dynamics is significant. In this article we provide an overview of the technique and illustrate its potential with examples, with a focus on those utilising high resolution in the dispersed fluorescence dimension.
Recognition of Handwritten Arabic words using a neuro-fuzzy network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boukharouba, Abdelhak; Bennia, Abdelhak
We present a new method for the recognition of handwritten Arabic words based on a neuro-fuzzy hybrid network. As a first step, connected components (CCs) of black pixels are detected. Then the system determines which CCs are sub-words and which are stress marks. The stress marks are then isolated and identified separately, and the sub-words are segmented into graphemes. Each grapheme is described by topological and statistical features. Fuzzy rules are extracted from training examples by a hybrid learning scheme comprising two phases: a rule generation phase from data using fuzzy c-means, and a rule parameter tuning phase using gradient descent learning. After learning, the network encodes in its topology the essential design parameters of a fuzzy inference system. The contribution of this technique is shown through significant tests performed on a handwritten Arabic word database.
Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang
2013-09-13
Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in the power plants, and knowing the orientation and depth of the initial cracks is essential for the evaluation of the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on phased array ultrasonic transducer and artificial neural network (ANN), is proposed to estimate both the depth and orientation of initial cracks in the turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks.
Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang
2013-01-01
Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in the power plants, and knowing the orientation and depth of the initial cracks is essential for the evaluation of the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on phased array ultrasonic transducer and artificial neural network (ANN), is proposed to estimate both the depth and orientation of initial cracks in the turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks. PMID:24064602
BROADBAND SPECTROSCOPY USING TWO SUZAKU OBSERVATIONS OF THE HMXB GX 301-2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suchy, Slawomir; Markowitz, Alex; Rothschild, Richard E.
2012-02-01
We present the analysis of two Suzaku observations of GX 301-2 at two orbital phases after the periastron passage. Variations in the column density of the line-of-sight absorber are observed, consistent with accretion from a clumpy wind. In addition to a cyclotron resonance scattering feature (CRSF), multiple fluorescence emission lines were detected in both observations. The variations in the pulse profiles and the CRSF throughout the pulse phase have a signature of a magnetic dipole field. Using a simple dipole model we calculated the expected magnetic field values for different pulse phases and were able to extract a set of geometrical angles, loosely constraining the dipole geometry in the neutron star. From the variation of the CRSF width and energy, we found a geometrical solution for the dipole, making the inclination consistent with previously published values.
Recursive Feature Extraction in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-14
ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhoods of each nodes, as well as recursive summaries of neighbors' features.
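The recursive aggregation idea can be sketched with NetworkX: start from local features and repeatedly append the mean and sum of each node's neighbours' current features. This is only a simplified stand-in for ReFeX, which additionally bins values and prunes correlated columns.

```python
import networkx as nx
import numpy as np

def recursive_features(G, iterations=2):
    """ReFeX-style recursive features (minimal sketch)."""
    nodes = list(G)
    # Base local features: degree and clustering coefficient.
    feats = {n: [G.degree(n), nx.clustering(G, n)] for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            nbrs = list(G.neighbors(n))
            if nbrs:
                stacked = np.array([feats[m] for m in nbrs], dtype=float)
                agg = np.concatenate([stacked.mean(axis=0), stacked.sum(axis=0)])
            else:
                agg = np.zeros(2 * len(feats[n]))
            new[n] = feats[n] + agg.tolist()   # keep old features, append aggregates
        feats = new
    return feats

G = nx.karate_club_graph()
table = recursive_features(G, iterations=2)
print(len(table[0]))   # 2 base features grow to 2 * 3**iterations columns per node
```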
Robust image features: concentric contrasting circles and their image extraction
NASA Astrophysics Data System (ADS)
Gatrell, Lance B.; Hoff, William A.; Sklair, Cheryl W.
1992-03-01
Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given and then a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle image feature has the advantages of being easily manufactured, easily extracted from the image, robust extraction (true targets are found, while few false targets are found), it is a passive feature, and its centroid is completely invariant to the three translational and one rotational degrees of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated on a visually challenging background of a specular but wrinkled surface similar to a multilayered insulation spacecraft thermal blanket.
Deep Learning Methods for Underwater Target Feature Extraction and Recognition
Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang
2018-01-01
The classification and recognition of underwater acoustic signals has always been an important research topic in the field of underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used as methods for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed. An automatic feature extraction method for underwater acoustic signals using a deep convolutional network is proposed, and an underwater target recognition classifier is based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent; its generalization ability is limited and suboptimal, so an extreme learning machine (ELM) was used in the classification stage. Firstly, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier to perform the classification. Experiments on an actual data set of civil ships obtained a 93.04% recognition rate; compared with traditional Mel frequency cepstral coefficients and Hilbert-Huang features, the recognition rate was greatly improved. PMID:29780407
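The extreme learning machine used for the classification stage is simple enough to sketch directly in NumPy: a random, untrained hidden layer followed by a closed-form least-squares output layer. The feature dimension and the random placeholder data stand in for the CNN features of ship-noise segments.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: random hidden layer + least-squares output."""

    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                         # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random, untrained hidden layer
        self.beta = np.linalg.pinv(H) @ T                # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Placeholder: rows would be CNN feature vectors of ship-noise segments.
X = np.random.randn(200, 128)
y = np.random.randint(0, 3, 200)
clf = ELMClassifier(n_hidden=256).fit(X[:150], y[:150])
print((clf.predict(X[150:]) == y[150:]).mean())
```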
SVM-based multisensor data fusion for phase concentration measurement in biomass-coal co-combustion
NASA Astrophysics Data System (ADS)
Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao
2018-05-01
In this paper, an electrical method combining an electrostatic sensor and a capacitance sensor is used to measure the phase concentration of pulverized coal/biomass/air three-phase flow through data fusion technology. In order to eliminate the effects of flow regimes and improve the accuracy of the phase concentration measurement, the Mel frequency cepstral coefficient (MFCC) features extracted from the electrostatic signals are used to train a Continuous Gaussian Mixture Hidden Markov Model (CGHMM) for flow regime identification. A Support Vector Machine (SVM) is introduced to establish the concentration information fusion model under the identified flow regimes. The CGHMM and SVM models are transplanted onto a digital signal processor (DSP) to realize accurate on-line measurement. The DSP flow regime identification time is 1.4 ms, and the concentration prediction time is 164 μs, which fully meets the real-time requirement. The average absolute value of the relative error is about 1.5% for the pulverized coal and about 2.2% for the biomass.
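A minimal sketch of the feature and fusion steps is given below: mean MFCC vectors of signal frames (via librosa) feed a support vector regressor for concentration prediction. It omits the CGHMM flow-regime identification, and the sampling rate, frames and target concentrations are placeholder assumptions.

```python
import numpy as np
import librosa
from sklearn.svm import SVR

def mfcc_feature(frame, fs=4000, n_mfcc=13):
    """Mean MFCC vector of one electrostatic-signal frame (illustrative only)."""
    mfcc = librosa.feature.mfcc(y=frame.astype(float), sr=fs, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Placeholder training data: frames of electrostatic signals with known
# reference phase concentrations from calibration runs.
frames = [np.random.randn(8000) for _ in range(40)]
targets = np.random.uniform(0.0, 0.5, 40)       # placeholder phase concentrations

X = np.vstack([mfcc_feature(f) for f in frames])
model = SVR(kernel="rbf").fit(X, targets)       # concentration fusion model for one flow regime
print(model.predict(X[:5]))
```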
Real-time phase correlation based integrated system for seizure detection
NASA Astrophysics Data System (ADS)
Romaine, James B.; Delgado-Restituto, Manuel; Leñero-Bardallo, Juan A.; Rodríguez-Vázquez, Ángel
2017-05-01
This paper reports a low area, low power, integer-based digital processor for the calculation of phase synchronization between two neural signals. The processor calculates the phase-frequency content of a signal by identifying the specific time periods associated with two consecutive minima. The simplicity of this phase-frequency content identifier allows for the digital processor to utilize only basic digital blocks, such as registers, counters, adders and subtractors, without incorporating any complex multiplication and or division algorithms. In fact, the processor, fabricated in a 0.18μm CMOS process, only occupies an area of 0.0625μm2 and consumes 12.5nW from a 1.2V supply voltage when operated at 128kHz. These low-area, low-power features make the proposed processor a valuable computing element in closed loop neural prosthesis for the treatment of neural diseases, such as epilepsy, or for extracting functional connectivity maps between different recording sites in the brain.
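A software analogue of the minima-based phase estimate described above can be written in a few lines: each interval between consecutive minima is treated as one 2π cycle, and the phase-locking value between two signals is computed from the resulting phase difference. This is an illustrative reconstruction, not the integer arithmetic of the fabricated processor.

```python
import numpy as np
from scipy.signal import find_peaks

def minima_phase(x):
    """Piecewise-linear phase: each inter-minimum interval counts as one 2*pi cycle."""
    minima, _ = find_peaks(-x)                  # local minima of x
    phase = np.full(len(x), np.nan)
    for k in range(len(minima) - 1):
        a, b = minima[k], minima[k + 1]
        phase[a:b] = 2 * np.pi * (k + np.arange(b - a) / (b - a))
    return phase

def phase_locking_value(x1, x2):
    p1, p2 = minima_phase(x1), minima_phase(x2)
    ok = ~np.isnan(p1) & ~np.isnan(p2)
    return np.abs(np.mean(np.exp(1j * (p1[ok] - p2[ok]))))

t = np.arange(0, 2, 1 / 1000.0)
sig_a = np.sin(2 * np.pi * 8 * t) + 0.1 * np.random.randn(len(t))
sig_b = np.sin(2 * np.pi * 8 * t + 0.5) + 0.1 * np.random.randn(len(t))
print(phase_locking_value(sig_a, sig_b))        # close to 1 for synchronized signals
```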
Rahman, Md Musfiqur; Abd El-Aty, A M; Kim, Sung-Woo; Shin, Sung Chul; Shin, Ho-Chul; Shim, Jae-Han
2017-01-01
In pesticide residue analysis, relatively low-sensitivity traditional detectors, such as UV, diode array, electron-capture, flame photometric, and nitrogen-phosphorus detectors, have been used following classical sample preparation (liquid-liquid extraction and open glass column cleanup); however, the extraction method is laborious, time-consuming, and requires large volumes of toxic organic solvents. A quick, easy, cheap, effective, rugged, and safe method was introduced in 2003 and coupled with selective and sensitive mass detectors to overcome the aforementioned drawbacks. Compared to traditional detectors, mass spectrometers are still far more expensive and not available in most modestly equipped laboratories, owing to maintenance and cost-related issues. Even available, traditional detectors are still being used for analysis of residues in agricultural commodities. It is widely known that the quick, easy, cheap, effective, rugged, and safe method is incompatible with conventional detectors owing to matrix complexity and low sensitivity. Therefore, modifications using column/cartridge-based solid-phase extraction instead of dispersive solid-phase extraction for cleanup have been applied in most cases to compensate and enable the adaptation of the extraction method to conventional detectors. In gas chromatography, the matrix enhancement effect of some analytes has been observed, which lowers the limit of detection and, therefore, enables gas chromatography to be compatible with the quick, easy, cheap, effective, rugged, and safe extraction method. For liquid chromatography with a UV detector, a combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction was found to reduce the matrix interference and increase the sensitivity. A suitable double-layer column/cartridge-based solid-phase extraction might be the perfect solution, instead of a time-consuming combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction. Therefore, replacing dispersive solid-phase extraction with column/cartridge-based solid-phase extraction in the cleanup step can make the quick, easy, cheap, effective, rugged, and safe extraction method compatible with traditional detectors for more sensitive, effective, and green analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Using X-Ray In-Line Phase-Contrast Imaging for the Investigation of Nude Mouse Hepatic Tumors
Zhang, Lu; Luo, Shuqian
2012-01-01
The purpose of this paper is to report the noninvasive imaging of hepatic tumors without contrast agents. Both normal tissues and tumor tissues can be detected, and tumor tissues in different stages can be classified quantitatively. We implanted BEL-7402 human hepatocellular carcinoma cells into the livers of nude mice and then imaged the livers using X-ray in-line phase-contrast imaging (ILPCI). Texture features of the projection images, based on the gray level co-occurrence matrix (GLCM) and the dual-tree complex wavelet transform (DTCWT), were extracted to discriminate normal tissues from tumor tissues. Different stages of hepatic tumors were classified using support vector machines (SVM). Images of livers from nude mice sacrificed 6 days after inoculation with cancer cells show diffuse distribution of the tumor tissue, but images of livers from nude mice sacrificed 9, 12, or 15 days after inoculation with cancer cells show necrotic lumps in the tumor tissue. The results of the principal component analysis (PCA) of the texture features based on GLCM of normal regions were positive, but those of tumor regions were negative. The results of PCA of the texture features based on DTCWT of normal regions were greater than those of tumor regions. The values of the texture features in low-frequency coefficient images increased monotonically with the growth of the tumors. Different stages of liver tumors can be classified using SVM, and the accuracy is 83.33%. Noninvasive and micron-scale imaging can be achieved by X-ray ILPCI. We can observe hepatic tumors and small vessels from the phase-contrast images. This new imaging approach for hepatic cancer is effective and has potential use in the early detection and classification of hepatic tumors. PMID:22761929
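GLCM texture descriptors such as those used in this study can be computed with scikit-image; the sketch below returns four common properties for an 8-bit region of interest. The offsets and the random ROI are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # named 'greycomatrix' in skimage < 0.19

def glcm_features(region, distances=(1,), angles=(0, np.pi / 2)):
    """Contrast, correlation, energy and homogeneity of an 8-bit image region,
    averaged over the requested offsets."""
    glcm = graycomatrix(region, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder tissue ROI
print(glcm_features(roi))
```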
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
The histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be implemented in one pixel period by these computing units.
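For reference, the software computation that such an FPGA design approximates (gradients, arctangent-based orientation binning, block-normalized histograms) is available in scikit-image; the parameter values below are common defaults, not necessarily those of the hardware design.

```python
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import hog

image = rgb2gray(data.astronaut())          # bundled sample image as a placeholder input
features, hog_image = hog(image,
                          orientations=9,
                          pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2),
                          block_norm="L2-Hys",
                          visualize=True)
print(features.shape)                       # flattened HOG descriptor for the whole image
```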
Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.
Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi
2016-09-13
Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.
Using input feature information to improve ultraviolet retrieval in neural networks
NASA Astrophysics Data System (ADS)
Sun, Zhibin; Chang, Ni-Bin; Gao, Wei; Chen, Maosi; Zempila, Melina
2017-09-01
In neural networks, the training/predicting accuracy and algorithm efficiency can be improved significantly via accurate input feature extraction. In this study, some spatial features of several important factors in retrieving surface ultraviolet (UV) are extracted. An extreme learning machine (ELM) is used to retrieve the surface UV of 2014 in the continental United States, using the extracted features. The results conclude that more input weights can improve the learning capacities of neural networks.
A Hybrid Neural Network and Feature Extraction Technique for Target Recognition.
target features are extracted, the extracted data being evaluated in an artificial neural network to identify a target at a location within the image scene from which the different viewing angles extend.
NASA Astrophysics Data System (ADS)
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
2017-09-01
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
Classification of product inspection items using nonlinear features
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.; Lee, H.-W.
1998-03-01
Automated processing and classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. This approach involves two main steps: preprocessing and classification. Preprocessing locates individual items and segments ones that touch using a modified watershed algorithm. The second stage involves extraction of features that allow discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper. We use a new nonlinear feature extraction scheme called the maximum representation and discriminating feature (MRDF) extraction method to compute nonlinear features that are used as inputs to a classifier. The MRDF is shown to provide better classification and a better ROC (receiver operating characteristic) curve than other methods.
A harmonic linear dynamical system for prominent ECG feature extraction.
Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc
2014-01-01
Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, the empirical evaluation results of the proposed method demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.
NASA Astrophysics Data System (ADS)
Li, Yane; Fan, Ming; Cheng, Hu; Zhang, Peng; Zheng, Bin; Li, Lihua
2018-01-01
This study aims to develop and test a new imaging marker-based short-term breast cancer risk prediction model. An age-matched dataset of 566 screening mammography cases was used. All ‘prior’ images acquired in the two screening series were negative, while in the ‘current’ screening images, 283 cases were positive for cancer and 283 cases remained negative. For each case, two bilateral cranio-caudal view mammograms acquired from the ‘prior’ negative screenings were selected and processed by a computer-aided image processing scheme, which segmented the entire breast area into nine strip-based local regions, extracted the element regions using difference of Gaussian filters, and computed both global- and local-based bilateral asymmetrical image features. An initial feature pool included 190 features related to the spatial distribution and structural similarity of grayscale values, as well as of the magnitude and phase responses of multidirectional Gabor filters. Next, a short-term breast cancer risk prediction model based on a generalized linear model was built using an embedded stepwise regression analysis method to select features and a leave-one-case-out cross-validation method to predict the likelihood of each woman having image-detectable cancer in the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) significantly increased from 0.5863 ± 0.0237, when the model was trained with the image features extracted from the global regions alone, to 0.6870 ± 0.0220, when it was trained with the features extracted from both the global and the matched local regions (p = 0.0001). The odds ratio values monotonically increased from 1.00 to 8.11 with a significantly increasing trend in slope (p = 0.0028) as the model-generated risk score increased. In addition, the AUC values were 0.6555 ± 0.0437, 0.6958 ± 0.0290, and 0.7054 ± 0.0529 for the three age groups of 37-49, 50-65, and 66-87 years old, respectively. AUC values of 0.6529 ± 0.1100, 0.6820 ± 0.0353, 0.6836 ± 0.0302 and 0.8043 ± 0.1067 were yielded for the four mammography density sub-groups (BIRADS 1-4), respectively. This study demonstrated that bilateral asymmetry features extracted from local regions combined with the global region in bilateral negative mammograms could be used as a new imaging marker to assist in the prediction of short-term breast cancer risk.
Measurement of dielectric constant of organic solvents by indigenously developed dielectric probe
NASA Astrophysics Data System (ADS)
Keshari, Ajay Kumar; Rao, J. Prabhakar; Rao, C. V. S. Brahmmananda; Ramakrishnan, R.; Ramanarayanan, R. R.
2018-04-01
The extraction, separation and purification of actinides (uranium and plutonium) from various matrices are important steps in the nuclear fuel cycle. One of the separation processes adopted on an industrial scale is liquid-liquid extraction, or solvent extraction. Liquid-liquid extraction uses a specific ligand/extractant in conjunction with a suitable diluent. Solvent extraction, or liquid-liquid extraction, involves the partitioning of the solute between two immiscible phases. In most cases, one of the phases is aqueous and the other is an organic solvent. The solvent used in solvent extraction should be selective for the metal of interest, it should have an optimum distribution ratio, and the loaded metal should be easily stripped from the organic phase under suitable experimental conditions. Important physical properties of the solvent include density, viscosity, phase separation time, interfacial surface tension and the polarity of the extractant.
Anderson, M A; Wachs, T; Henion, J D
1997-02-01
A method based on ionspray liquid chromatography/tandem mass spectrometry (LC/MS/MS) was developed for the determination of reserpine in equine plasma. A comparison was made of the isolation of reserpine from plasma by liquid-liquid extraction and by solid-phase extraction. A structural analog, rescinnamine, was used as the internal standard. The reconstituted extracts were analyzed by ionspray LC/MS/MS in the selected reaction monitoring (SRM) mode. The calibration graph for reserpine extracted from equine plasma obtained using liquid-liquid extraction was linear from 10 to 5000 pg ml-1 and that using solid-phase extraction from 100 to 5000 pg ml-1. The lower level of quantitation (LLQ) using liquid-liquid and solid-phase extraction was 50 and 200 pg ml-1, respectively. The lower level of detection for reserpine by LC/MS/MS was 10 pg ml-1. The intra-assay accuracy did not exceed 13% for liquid-liquid and 12% for solid-phase extraction. The recoveries for the LLQ were 68% for liquid-liquid and 58% for solid-phase extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Hyun-Ah; Engle, Nancy L.; Bonnesen Peter V.
2004-03-29
In the present work, the aim has been to examine the extraction efficiencies of nine proton-ionizable alcohols (HAs) in 1-octanol and to identify both the controlling equilibria and the predominant species involved in the extraction process within a thermochemical model. Distribution ratios for sodium (DNa) extraction were measured as a function of organic-phase HA and aqueous-phase NaOH molarity at 25 °C. Extraction efficiency follows the expected order of acidity of the HAs, 4-(tert-octyl)phenol (HA 1a) and 4-n-octyl-α,α-bis(trifluoromethyl)benzyl alcohol (HA 2a) being the most efficient extractants among the compounds tested. By use of the equilibrium-modeling program SXLSQI, a model for the extraction of NaOH has been advanced based on ion-pair extraction by the diluent to give organic-phase Na+OH- and the corresponding free ions, and cation exchange by the weak acids to form monomeric organic-phase Na+A- and the corresponding free organic-phase ions.
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
Maximum scatter difference (MSD) discriminant criterion was a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart--multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD out-performs state-of-the-art facial feature-extraction methods such as null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
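The scatter-difference criterion is easy to state in NumPy terms: the discriminant directions are the leading eigenvectors of Sb - c*Sw. The sketch below implements this basic MSD-style projection; it does not reproduce the full MMSD construction, which additionally draws vectors from the null space of the within-class scatter.

```python
import numpy as np

def msd_directions(X, y, c=1.0, n_components=2):
    """Discriminant directions maximizing the scatter difference Sb - c*Sw (sketch)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for k in classes:
        Xk = X[y == k]
        mk = Xk.mean(axis=0)
        Sb += len(Xk) * np.outer(mk - mean_all, mk - mean_all)   # between-class scatter
        Sw += (Xk - mk).T @ (Xk - mk)                            # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - c * Sw)        # symmetric matrix, so eigh is safe
    order = np.argsort(vals)[::-1]                  # largest scatter difference first
    return vecs[:, order[:n_components]]

X = np.random.randn(300, 20)                        # placeholder feature vectors
y = np.random.randint(0, 3, 300)
W = msd_directions(X, y, c=1.0, n_components=2)
projected = X @ W                                   # extracted low-dimensional features
print(projected.shape)
```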
Fabrication of Intermetallic Titanium Alloy Based on Ti2AlNb by Rapid Quenching of Melt
NASA Astrophysics Data System (ADS)
Senkevich, K. S.; Serov, M. M.; Umarova, O. Z.
2017-11-01
The possibility of fabrication of rapidly quenched fibers from alloy Ti - 22Al - 27Nb by extracting a hanging melt drop is studied. The special features of the production of electrodes for spraying the fibers by sintering mechanically alloyed powdered components of the alloy, i.e., titanium hydride, niobium, and aluminum dust, are studied. The rapidly quenched fibers with homogeneous phase composition and fine-grained structure produced from alloy Ti - 22Al - 27Nb are suitable for manufacturing compact semiproducts by hot pressing.
High-Resolution Remote Sensing Image Building Extraction Based on Markov Model
NASA Astrophysics Data System (ADS)
Zhao, W.; Yan, L.; Chang, Y.; Gong, L.
2018-04-01
With the increase in resolution, remote sensing images are characterized by an increased information load, increased noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can suppress the noise in high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in terms of building extraction precision, accuracy and completeness.
Qi, Tingting; Huang, Chenchen; Yan, Shan; Li, Xiu-Juan; Pan, Si-Yi
2015-11-01
Three kinds of magnetite/reduced graphene oxide (MRGO) nanocomposites were prepared by solvothermal, hydrothermal and co-precipitation methods. The as-prepared nanocomposites were characterized and compared by Fourier transform infrared spectroscopy, scanning electron microscopy, transmission electron microscopy, X-ray diffraction and zeta potential. The results showed that MRGO made by different methods differed in surface functional groups, crystal structure, particle size, surface morphology and surface charge. Owing to these differing features, the nanocomposites displayed dissimilar performances when used to adsorb drugs, dyes and metal ions. The MRGO prepared by the co-precipitation method showed a particular adsorption ability toward negative ions, whereas the one synthesized by the solvothermal method showed the best extraction ability and reusability of the three and good prospects for magnetic solid-phase extraction. Therefore, it is highly recommended to choose the right preparation method before application in order to attain the best extraction performance. Copyright © 2015 Elsevier B.V. All rights reserved.
Aprea, Eugenio; Gika, Helen; Carlin, Silvia; Theodoridis, Georgios; Vrhovsek, Urska; Mattivi, Fulvio
2011-07-15
A headspace SPME GC-TOF-MS method was developed for the acquisition of metabolite profiles of apple volatiles. As a first step, an experimental design was applied to find out the most appropriate conditions for the extraction of apple volatile compounds by SPME. The selected SPME method was applied in profiling of four different apple varieties by GC-EI-TOF-MS. Full scan GC-MS data were processed by MarkerLynx software for peak picking, normalisation, alignment and feature extraction. Advanced chemometric/statistical techniques (PCA and PLS-DA) were used to explore data and extract useful information. Characteristic markers of each variety were successively identified using the NIST library thus providing useful information for variety classification. The developed HS-SPME sampling method is fully automated and proved useful in obtaining the fingerprint of the volatile content of the fruit. The described analytical protocol can aid in further studies of the apple metabolome. Copyright © 2011 Elsevier B.V. All rights reserved.
Tissue Multiplatform-Based Metabolomics/Metabonomics for Enhanced Metabolome Coverage.
Vorkas, Panagiotis A; Abellona U, M R; Li, Jia V
2018-01-01
The use of tissue as a matrix to elucidate disease pathology or explore intervention comes with several advantages. It allows investigation of the target alteration directly at the focal location and facilitates the detection of molecules that could become elusive after secretion into biofluids. However, tissue metabolomics/metabonomics comes with challenges not encountered in biofluid analyses. Furthermore, tissue heterogeneity does not allow for tissue aliquoting. Here we describe a multiplatform, multi-method workflow which enables metabolic profiling analysis of tissue samples, while it can deliver enhanced metabolome coverage. After applying a dual consecutive extraction (organic followed by aqueous), tissue extracts are analyzed by reversed-phase (RP-) and hydrophilic interaction liquid chromatography (HILIC-) ultra-performance liquid chromatography coupled to mass spectrometry (UPLC-MS) and nuclear magnetic resonance (NMR) spectroscopy. This pipeline incorporates the required quality control features, enhances versatility, allows provisional aliquoting of tissue extracts for future guided analyses, expands the range of metabolites robustly detected, and supports data integration. It has been successfully employed for the analysis of a wide range of tissue types.
Non-negative matrix factorization in texture feature for classification of dementia with MRI data
NASA Astrophysics Data System (ADS)
Sarwinda, D.; Bustamam, A.; Ardaneswari, G.
2017-07-01
This paper investigates the application of non-negative matrix factorization as a feature selection method to select features from the gray level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray level co-occurrence matrix is performed for feature extraction. In the feature extraction process on the MRI data, seven features are obtained from the gray level co-occurrence matrix. Non-negative matrix factorization then selects the three most influential of the features produced by feature extraction. A Naïve Bayes classifier is adopted to classify dementia, i.e. Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for the classification of Alzheimer's disease versus normal control. The proposed method is also compared with another feature selection method, i.e. Principal Component Analysis (PCA).
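The workflow sketched in this abstract (GLCM texture features, NMF-based feature selection, Naïve Bayes classification) can be approximated as follows. This is a minimal illustration, not the authors' code: it assumes scikit-image and scikit-learn, uses six standard GLCM properties plus entropy as the seven texture features, and stands in for the paper's NMF-based selection by ranking feature columns by their summed NMF loadings.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import NMF
from sklearn.naive_bayes import GaussianNB

def glcm_features(img, levels=8):
    # img: 2-D uint8 gray-level slice; quantize to a few levels for a compact GLCM
    q = (img.astype(float) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    props = ['contrast', 'dissimilarity', 'homogeneity', 'energy',
             'correlation', 'ASM']
    feats = [graycoprops(glcm, p)[0, 0] for p in props]
    p = glcm[:, :, 0, 0]
    feats.append(-np.sum(p[p > 0] * np.log2(p[p > 0])))   # entropy as a 7th feature
    return np.array(feats)

def select_by_nmf(X, n_keep=3):
    # crude surrogate for NMF-based selection: keep the columns with the
    # largest summed loadings across the NMF basis vectors
    Xp = X - X.min(axis=0)                                 # NMF needs non-negative input
    model = NMF(n_components=n_keep, init='nndsvda', max_iter=500).fit(Xp)
    scores = model.components_.sum(axis=0)
    return np.argsort(scores)[-n_keep:]

# X: (n_subjects, 7) GLCM feature matrix, y: labels (AD / MCI / normal control)
# idx = select_by_nmf(X); clf = GaussianNB().fit(X[:, idx], y)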
Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.
2017-01-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
Detection and classification of retinal lesions for grading of diabetic retinopathy.
Usman Akram, M; Khalid, Shehzad; Tariq, Anam; Khan, Shoab A; Azam, Farooque
2014-02-01
Diabetic Retinopathy (DR) is an eye abnormality in which the human retina is affected due to an increasing amount of insulin in blood. The early detection and diagnosis of DR is vital to save the vision of diabetes patients. The early signs of DR which appear on the surface of the retina are microaneurysms, haemorrhages, and exudates. In this paper, we propose a system consisting of a novel hybrid classifier for the detection of retinal lesions. The proposed system consists of preprocessing, extraction of candidate lesions, feature set formulation, and classification. In preprocessing, the system eliminates background pixels and extracts the blood vessels and optic disc from the digital retinal image. The candidate lesion detection phase extracts, using filter banks, all regions which may possibly have any type of lesion. A feature set based on different descriptors, such as shape, intensity, and statistics, is formulated for each possible candidate region; this further helps in classifying that region. This paper presents an extension of the m-Mediods based modeling approach, and combines it with a Gaussian Mixture Model in an ensemble to form a hybrid classifier to improve the accuracy of the classification. The proposed system is assessed using standard fundus image databases with the help of performance parameters such as sensitivity, specificity, accuracy, and Receiver Operating Characteristic curves for statistical analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Adventitious sounds identification and extraction using temporal-spectral dominance-based features.
Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook
2011-11-01
Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system by the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on the analysis of the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant, high-definition TF representation of RS signals as compared to the conventional linear TF analysis methods, yet preserves the low computational complexity as compared to quadratic TF analysis methods. The phase information discarded in the conventional spectrogram has been adopted for the estimation of IF and group delay, and a temporal-spectral dominance spectrogram has subsequently been constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features has also been proposed to quantify the shapes of the obtained TF contours, and therefore strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method.
NASA Technical Reports Server (NTRS)
Wang, Alian; Haskin, Larry A.; Jolliff, Bradley; Wdowiak, Tom; Agresti, David; Lane, Arthur L.
2000-01-01
Raman spectroscopy provides a powerful tool for in situ mineralogy, petrology, and detection of water and carbon. The Athena Raman spectrometer is a microbeam instrument intended for close-up analyses of targets (rocks or soils) selected by the Athena Pancam and Mini-TES. It will take 100 Raman spectra along a linear traverse of approximately one centimeter (point-counting procedure) in one to four hours during the Martian night. From these spectra, the following information about the target will be extracted: (1) the identities of major, minor, and trace mineral phases, organic species (e.g., PAH or kerogen-like polymers), reduced inorganic carbon, and water-bearing phases; (2) chemical features (e.g., Mg/Fe ratio) of major minerals; and (3) rock textural features (e.g., mineral clusters, amygdular filling and veins). Part of the Athena payload, the miniaturized Raman spectrometer has been under development in a highly interactive collaboration of a science team at Washington University and the University of Alabama at Birmingham, and an engineering team at the Jet Propulsion Laboratory. The development has completed the brassboard stage and has produced the design for the engineering model.
a Statistical Texture Feature for Building Collapse Information Extraction of SAR Image
NASA Astrophysics Data System (ADS)
Li, L.; Yang, H.; Chen, Q.; Liu, X.
2018-04-01
Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed building information, due to its extreme versatility and almost all-weather, day-and-night working capability. In view of the fact that the inherent statistical distribution of speckle in SAR images has not been used to extract collapsed building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target and thereby extract collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of the object texture, but can also be applied to extract collapsed building information from single-, dual- or full-polarization SAR data. The RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is also analysed, which provides decision support for data selection in collapsed building information extraction.
Toward direct pore-scale modeling of three-phase displacements
NASA Astrophysics Data System (ADS)
Mohammadmoradi, Peyman; Kantzas, Apostolos
2017-12-01
A stable spreading film between water and gas can extract a significant amount of bypassed non-aqueous phase liquid (NAPL) through immiscible three-phase gas/water injection cycles. In this study, the pore-scale displacement mechanisms by which NAPL is mobilized are incorporated into a three-dimensional pore morphology-based model under water-wet and capillary equilibrium conditions. The approach is pixel-based and the sequence of invasions is determined by the fluids' connectivity and the threshold capillary pressure of the advancing interfaces. In addition to the determination of three-phase spatial saturation profiles, residuals, and capillary pressure curves, dynamic finite element simulations are utilized to predict the effective permeabilities of the rock microtomographic images as reasonable representations of the geological formations under study. All the influential features during immiscible fluid flow in pore-level domains including wetting and spreading films, saturation hysteresis, capillary trapping, connectivity, and interface development strategies are taken into account. The capabilities of the model are demonstrated by the successful prediction of saturation functions for Berea sandstone and the accurate reconstruction of three-phase fluid occupancies through a micromodel.
NASA Astrophysics Data System (ADS)
Sharma, Neeraj; Peterson, Vanessa K.; Elcombe, Margaret M.; Avdeev, Maxim; Studer, Andrew J.; Blagojevic, Ned; Yusoff, Rozila; Kamarulzaman, Norlida
The structural response to electrochemical cycling of the components within a commercial Li-ion battery (LiCoO2 cathode, graphite anode) is shown through in situ neutron diffraction. Lithium insertion and extraction are observed in both the cathode and anode. In particular, reversible Li incorporation into both layered and spinel-type LiCoO2 phases that comprise the cathode is shown and each of these components features several phase transitions attributed to Li content and correlated with the state-of-charge of the battery. At the anode, a constant cell voltage correlates with a stable lithiated graphite phase. Transformation to de-lithiated graphite at the discharged state is characterised by a sharp decrease in both structural cell parameters and cell voltage. In the charged state, a two-phase region exists and is composed of the lithiated graphite phase and about 64% LiC6. It is postulated that trapping Li in the solid|electrolyte interface layer results in minimal structural changes to the lithiated graphite anode across the constant cell voltage regions of the electrochemical cycle.
Novel Features for Brain-Computer Interfaces
Woon, W. L.; Cichocki, A.
2007-01-01
While conventional approaches of BCI feature extraction are based on the power spectrum, we have tried using nonlinear features for classifying BCI data. In this paper, we report our test results and findings, which indicate that the proposed method is a potentially useful addition to current feature extraction techniques. PMID:18364991
NASA Astrophysics Data System (ADS)
Post, Anouk L.; Zhang, Xu; Bosschaart, Nienke; Van Leeuwen, Ton G.; Sterenborg, Henricus J. C. M.; Faber, Dirk J.
2016-03-01
Both Optical Coherence Tomography (OCT) and Single Fiber Reflectance Spectroscopy (SFR) are used to determine various optical properties of tissue. We developed a method combining these two techniques to measure the scattering anisotropy (g1) and γ (= (1 - g2)/(1 - g1)), related to the 1st and 2nd order moments of the phase function. The phase function is intimately associated with the cellular organization and ultrastructure of tissue, physical parameters that may change during disease onset and progression. Quantification of these parameters may therefore allow for improved non-invasive, in vivo discrimination between healthy and diseased tissue. With SFR the reduced scattering coefficient and γ can be extracted from the reflectance spectrum (Kanick et al., Biomedical Optics Express 2(6), 2011). With OCT the scattering coefficient can be extracted from the signal as a function of depth (Faber et al., Optics Express 12(19), 2004). Consequently, by combining SFR and OCT measurements at the same wavelengths, the scattering anisotropy (g) can be resolved using µs' = µs(1 - g). We performed measurements on a suspension of silica spheres as a proof of principle. The SFR model for the reflectance as a function of the reduced scattering coefficient and γ is based on semi-empirical modelling. These models feature Monte-Carlo (MC) based model constants. The validity of these constants - and thus the accuracy of the estimated parameters - depends on the phase function employed in the MC simulations. Since the phase function is not known when measuring in tissue, we will investigate the influence of assuming an incorrect phase function on the accuracy of the derived parameters.
Wang, Lei; Zhang, Huimao; He, Kan; Chang, Yan; Yang, Xiaodong
2015-01-01
Active contour models are of great importance for image segmentation and can extract smooth and closed boundary contours of the desired objects with promising results. However, they cannot work well in the presence of intensity inhomogeneity. Hence, a novel region-based active contour model is proposed by taking image intensities and 'vesselness values' from local phase-based vesselness enhancement into account simultaneously to define a novel multi-feature Gaussian distribution fitting energy in this paper. This energy is then incorporated into a level set formulation with a regularization term for accurate segmentations. Experimental results on the publicly available STructured Analysis of the Retina (STARE) database demonstrate that our model is more accurate than some existing typical methods and can successfully segment most small vessels with varying width.
A review on solid phase extraction of actinides and lanthanides with amide based extractants.
Ansari, Seraj A; Mohapatra, Prasanta K
2017-05-26
Solid phase extraction is gaining attention from separation scientists due to its high chromatographic utility. Though both grafted and impregnated forms of solid phase extraction resins are popular, the latter is easy to make by impregnating a given organic extractant on to an inert solid support. Solid phase extraction on an impregnated support, also known as extraction chromatography, combines the advantages of liquid-liquid extraction and ion exchange chromatography methods. On the flip side, the impregnated extraction chromatographic resins are less stable against leaching out of the organic extractant from the pores of the support material. Grafted resins, on the other hand, have a higher stability, which allows their prolonged use. The goal of this article is a brief literature review of reported actinide and lanthanide separation methods based on solid phase extractants of both types, i.e., (i) ligand impregnation on the solid support or (ii) ligand-functionalized polymers (chemically bonded resins). Though the literature survey reveals an enormous volume of studies on the extraction chromatographic separation of actinides and lanthanides using several extractants, the focus of the present article is limited to the work carried out with amide-based ligands, viz. monoamides, diamides and diglycolamides. The emphasis will be on reported applied experimental results rather than on data pertaining to fundamental metal complexation. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
Purpose: The purpose of this research is to investigate which texture features extracted from FDG-PET images by the gray-level co-occurrence matrix (GLCM) have a higher prognostic value than the other texture features. Methods: 21 non-small cell lung cancer (NSCLC) patients were enrolled in the study. Patients underwent 18F-FDG PET/CT scans both pre-treatment and post-treatment. Firstly, the tumors were extracted by our in-house developed software. Secondly, the clinical features, including the maximum SUV and tumor volume, were extracted by MIM Vista software, and texture features, including angular second moment, contrast, inverse difference moment, entropy and correlation, were extracted using MATLAB. The differences were calculated by subtracting pre-treatment features from post-treatment features. Finally, SPSS software was used to obtain the Pearson correlation coefficients and Spearman rank correlation coefficients between the change ratios of texture features and the change ratios of clinical features. Results: The Pearson and Spearman rank correlation coefficients between contrast and maximum SUV are 0.785 and 0.709, respectively. The Pearson and Spearman coefficients between inverse difference moment and tumor volume are 0.953 and 0.942. Conclusion: This preliminary study showed that the relationships between different texture features and the same clinical feature are different. The prognostic value of contrast and inverse difference moment was found to be higher than that of the other three textures extracted by GLCM.
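The correlation step reported above can be reproduced in outline with scipy.stats; the arrays below are made-up placeholders for illustration only, not the study's data.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# hypothetical per-patient change ratios, e.g. (post - pre) / pre, for one texture
# feature (contrast) and one clinical feature (maximum SUV); values are illustrative
contrast_change = np.array([0.12, -0.30, 0.05, 0.41, -0.18])
suv_max_change = np.array([0.10, -0.25, 0.08, 0.35, -0.20])

r_p, p_p = pearsonr(contrast_change, suv_max_change)     # Pearson correlation
r_s, p_s = spearmanr(contrast_change, suv_max_change)    # Spearman rank correlation
print("Pearson r = %.3f (p = %.3f), Spearman rho = %.3f (p = %.3f)" % (r_p, p_p, r_s, p_s))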
Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat
2015-06-01
Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is a branch of traditional Chinese medicine and is receiving increasing attention from researchers, but large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted the color histogram features of images of Xinjiang Uygur herbal and zooid medicines. First, we performed preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted the color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained by using the color histogram features. This study should be helpful for content-based medical image retrieval of Xinjiang Uygur medicine.
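A rough sketch of the described pipeline (size normalization, color space transformation, per-channel color histograms, discriminant-analysis classification) could look like the following; the function names, bin count and size are illustrative assumptions, and scikit-image/scikit-learn are used as stand-ins for the authors' tools.

import numpy as np
from skimage.color import rgb2hsv
from skimage.transform import resize
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def color_histogram_feature(rgb_img, bins=16, size=(256, 256)):
    # preprocessing: size normalization and RGB -> HSV color space transformation
    img = resize(rgb_img, size, anti_aliasing=True)
    hsv = rgb2hsv(img)
    feats = []
    for c in range(3):                               # one histogram per H, S, V channel
        h, _ = np.histogram(hsv[..., c], bins=bins, range=(0.0, 1.0), density=True)
        feats.append(h)
    return np.concatenate(feats)                     # 3 * bins dimensional feature vector

# X = np.vstack([color_histogram_feature(im) for im in images]); y = medicine labels
# clf = LinearDiscriminantAnalysis().fit(X, y)       # Bayes-style discriminant classifier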
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
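The core of the MMC criterion, projecting onto the leading eigenvectors of the difference between the between-class and within-class scatter matrices, can be sketched as below; the statistically-uncorrelated and orthogonal constraints introduced in the paper are not reproduced in this minimal version.

import numpy as np

def mmc_projection(X, y, n_components):
    # X: (n_samples, n_features) data matrix, y: class labels.
    # Returns the projection matrix W whose columns are the eigenvectors of
    # (Sb - Sw) with the largest eigenvalues; project new data with X @ W.
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                            # within-class scatter
    evals, evecs = np.linalg.eigh(Sb - Sw)                       # symmetric matrix
    order = np.argsort(evals)[::-1][:n_components]
    return evecs[:, order]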
Capability of geometric features to classify ships in SAR imagery
NASA Astrophysics Data System (ADS)
Lang, Haitao; Wu, Siwen; Lai, Quan; Ma, Li
2016-10-01
Ship classification in synthetic aperture radar (SAR) imagery has become a new hotspot in the remote sensing community for its valuable potential in many maritime applications. Several kinds of ship features, such as geometric features, polarimetric features, and scattering features, have been widely applied to ship classification tasks. Compared with polarimetric features and scattering features, which are subject to SAR parameters (e.g., sensor type, incidence angle, polarization) and environment factors (e.g., sea state, wind, wave, current), geometric features are relatively independent of SAR and environment factors, and easy to extract stably from SAR imagery. In this paper, the capability of geometric features to classify ships in SAR imagery with various resolutions is investigated. Firstly, the relationship between the geometric feature extraction accuracy and the SAR imagery resolution is analyzed. It shows that the minimum bounding rectangle (MBR) of a ship can be extracted exactly, in terms of absolute precision, by the proposed automatic ship-sea segmentation method. Next, six simple but effective geometric features are extracted to build a ship representation for the subsequent classification task. These six geometric features are composed of length (f1), width (f2), area (f3), perimeter (f4), elongatedness (f5) and compactness (f6). Among them, two basic features, length (f1) and width (f2), are directly extracted based on the MBR of the ship, and the other four are derived from those two basic features. The capability of the utilized geometric features to classify ships is validated on two data sets with different image resolutions. The results show that the performance of ship classification solely by geometric features is close to that obtained by state-of-the-art methods, which is obtained by a combination of multiple kinds of features, including scattering features and geometric features after a complex feature selection process.
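A sketch of the six geometric features computed from a binary ship mask is given below; it uses scikit-image region properties, approximates length/width by the major/minor axis lengths rather than an exact MBR fit, and uses common definitions of elongatedness and compactness that may differ in detail from the paper's.

import numpy as np
from skimage.measure import label, regionprops

def ship_geometric_features(mask):
    # mask: 2-D binary array with 1 = ship pixels; assumes a single ship region
    props = regionprops(label(mask))[0]
    f1 = props.major_axis_length                  # length (MBR length approximated)
    f2 = props.minor_axis_length                  # width (MBR width approximated)
    f3 = props.area                               # area
    f4 = props.perimeter                          # perimeter
    f5 = f1 / max(f2, 1e-6)                       # elongatedness (common definition)
    f6 = 4.0 * np.pi * f3 / max(f4 ** 2, 1e-6)    # compactness (common definition)
    return np.array([f1, f2, f3, f4, f5, f6])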
Adaptive Water Sampling based on Unsupervised Clustering
NASA Astrophysics Data System (ADS)
Py, F.; Ryan, J.; Rajan, K.; Sherman, A.; Bird, L.; Fox, M.; Long, D.
2007-12-01
Autonomous Underwater Vehicles (AUVs) are widely used for oceanographic surveys, during which data are collected from a number of on-board sensors. Engineers and scientists at MBARI have extended this approach by developing a water sampler specially for the AUV, which can sample a specific patch of water at a specific time. The sampler, named the Gulper, captures 2 liters of seawater in less than 2 seconds on a 21" MBARI Odyssey AUV. Each sample chamber of the Gulper is filled with seawater through a one-way valve, which protrudes through the fairing of the AUV. This new kind of device raises a new problem: when to trigger the Gulper autonomously? For example, scientists interested in studying the mobilization and transport of shelf sediments would like to detect intermediate nepheloïd layers (INLs). To be able to detect this phenomenon we need to extract a model based on AUV sensors that can detect this feature in situ. The formation of such a model is not obvious, as identification of this feature is generally based on data from multiple sensors. We have developed an unsupervised data clustering technique to extract the different features, which will then be used for on-board classification and triggering of the Gulper. We use a three-phase approach: 1) use data from past missions to learn the different classes of data from sensor inputs; the clustering algorithm will then extract the set of features that can be distinguished within this large data set. 2) Scientists on shore then identify these features and point out which correspond to those of interest (e.g. nepheloïd layer, upwelling material, etc.). 3) Embed the corresponding classifier into the AUV control system to indicate the most probable feature of the water depending on sensory input. The triggering algorithm looks at this result and triggers the Gulper if the classifier indicates that we are within the feature of interest with a predetermined threshold of confidence. We have deployed this method of online classification and sampling based on AUV depth and HOBI Labs Hydroscat-2 sensor data. Using approximately 20,000 data samples, the clustering algorithm generated 14 clusters, with one identified as corresponding to a nepheloïd layer. We demonstrate that such a technique can be used to reliably and efficiently sample water based on multiple sources of data in real time.
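The three-phase scheme could be prototyped along the following lines; a Gaussian mixture model stands in here for the unspecified on-board clustering/classification algorithm, and the sensor matrix, target cluster index and confidence threshold are placeholders, not values from the deployment.

import numpy as np
from sklearn.mixture import GaussianMixture

# Phase 1: learn clusters from past mission data (e.g. depth, optical backscatter).
past = np.random.rand(20000, 3)                  # placeholder for the historical sensor matrix
gmm = GaussianMixture(n_components=14, random_state=0).fit(past)

# Phase 2: scientists on shore label one component, e.g. index 5, as the nepheloid layer.
TARGET, THRESHOLD = 5, 0.9                       # assumed values for illustration

# Phase 3: on-board triggering from a live sensor sample.
def should_trigger(sample):
    p = gmm.predict_proba(sample.reshape(1, -1))[0, TARGET]
    return p >= THRESHOLD                        # fire the Gulper only when confident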
Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan
2016-11-01
Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via the analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes of the environmental conditions during sampling and sample preparation. This delivers a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these instable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.
Question analysis for Indonesian comparative question
NASA Astrophysics Data System (ADS)
Saelan, A.; Purwarianti, A.; Widyantoro, D. H.
2017-01-01
Information seeking is one of today's human needs. Comparing things using a search engine surely takes more time than searching for only one thing. In this paper, we analyzed comparative questions for a comparative question answering system. A comparative question is a question that compares two or more entities. We grouped comparative questions into 5 types: selection between mentioned entities, selection between unmentioned entities, selection between any entity, comparison, and yes or no question. Then we extracted 4 types of information from comparative questions: entity, aspect, comparison, and constraint. We built classifiers for the classification task and the information extraction task. Features used for the classification task are bag-of-words, while for information extraction we used the lexical form of the word, the lexical forms of the two previous and two following words, and the previous label as features. We tried 2 scenarios: classification first and extraction first. For classification first, we used the classification result as a feature for extraction; for extraction first, we used the extraction results as features for classification. We found that the result was better when extraction was done before classification. For the extraction task, SMO gave the best result (88.78%), while for the classification task it was better to use Naïve Bayes (82.35%).
NASA Astrophysics Data System (ADS)
Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Yang, Chien-Chun; Glaser, Christian; Reiser, Maximilian F.; Wismüller, Axel
2012-03-01
The current approach to evaluating cartilage degeneration at the knee joint requires visualization of the joint space on radiographic images, where indirect cues such as joint space narrowing serve as markers for osteoarthritis. A recent novel approach to visualizing the knee cartilage matrix using phase contrast CT imaging (PCI-CT) was shown to allow direct examination of chondrocyte cell patterns and their subsequent correlation to osteoarthritis. This study aims to characterize chondrocyte cell patterns in the radial zone of the knee cartilage matrix in the presence and absence of osteoarthritic damage through both gray-level co-occurrence matrix (GLCM)-derived texture features and Minkowski Functionals (MF). Thirteen GLCM and three MF texture features were extracted from 404 regions of interest (ROI) annotated on PCI images of healthy and osteoarthritic specimens of knee cartilage. These texture features were then used in a machine learning task to classify ROIs as healthy or osteoarthritic. A fuzzy k-nearest neighbor classifier was used and its performance was evaluated using the area under the ROC curve (AUC). The best classification performance was observed with the MF features 'perimeter' and 'Euler characteristic' and with GLCM correlation features (f3 and f13). With the experimental conditions used in this study, both Minkowski Functionals and GLCM achieved a high classification performance (AUC value of 0.97) in the task of distinguishing between healthy and osteoarthritic ROIs. These results show that such quantitative analysis of chondrocyte patterns in the knee cartilage matrix can distinguish between healthy and osteoarthritic tissue with high accuracy.
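The three Minkowski Functionals used above (area, perimeter, Euler characteristic) can be computed for a binarized ROI roughly as follows; the binarization threshold is a free parameter here and the paper's exact scheme may differ. These three values per ROI, alongside the GLCM features, would then feed the (fuzzy) k-NN classifier.

import numpy as np
from skimage import measure

def minkowski_functionals(roi, threshold=None):
    # roi: 2-D gray-level array; threshold defaults to the ROI mean (an assumption)
    if threshold is None:
        threshold = roi.mean()
    binary = roi > threshold
    area = np.count_nonzero(binary)                       # MF 1: area
    perim = measure.perimeter(binary)                     # MF 2: perimeter
    euler = measure.euler_number(binary, connectivity=2)  # MF 3: Euler characteristic
    return np.array([area, perim, euler])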
Yang, Zhi; Wu, Youqian; Wu, Shihua
2016-01-29
Despite substantial developments in extraction and separation techniques, the isolation of natural products from natural resources is still a challenging task. In this work, an efficient strategy for the extraction and isolation of multi-component natural products has been successfully developed by combining systematic two-phase liquid-liquid extraction with (13)C NMR pattern recognition, followed by conical counter-current chromatography separation. A small-scale crude sample was first distributed into 9 systematic hexane-ethyl acetate-methanol-water (HEMWat) two-phase solvent systems for determination of the optimum extraction solvents and the partition coefficients of the prominent components. Then, the optimized solvent systems were used in succession to enrich the hydrophilic and lipophilic components from the large-scale crude sample. Finally, the enriched component samples were further purified by a new conical counter-current chromatography (CCC) method. Due to the use of (13)C NMR pattern recognition, the kinds and structures of the major components in the solvent extracts could be predicted. Therefore, the method could simultaneously collect the partition coefficients and the structural information of components in the selected two-phase solvents. As an example, a cytotoxic extract of podophyllotoxins and flavonoids from Dysosma versipellis (Hance) was selected. After the systematic HEMWat solvent extraction and (13)C NMR pattern recognition analyses, the crude extract of D. versipellis was first degreased by the upper phase of the HEMWat system (9:1:9:1, v/v), and then distributed in the two phases of the HEMWat system (2:8:2:8, v/v) to obtain the hydrophilic lower-phase extract and the lipophilic upper-phase extract, respectively. These extracts were further separated by conical CCC with the HEMWat systems (1:9:1:9 and 4:6:4:6, v/v). As a result, a total of 17 cytotoxic compounds were isolated and identified. Overall, the results suggested that the strategy is very efficient for the systematic extraction and isolation of biologically active components from complex biomaterials. Copyright © 2016 Elsevier B.V. All rights reserved.
Nonlinear features for classification and pose estimation of machined parts from single views
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-10-01
A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.
Information based universal feature extraction
NASA Astrophysics Data System (ADS)
Amiri, Mohammad; Brause, Rüdiger
2015-02-01
In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This will give a good start and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, in our case we found that we could indeed extract features which are valid in all three kinds of tasks.
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic that is based on the directionality of image texture is constructed, and a new method of texture feature extraction, which is based on the direction measure and a gray level co-occurrence matrix (GLCM) fusion algorithm, is proposed in this paper. This method applies the GLCM to extract the texture feature value of an image and integrates the weight factor that is introduced by the direction measure to obtain the final texture feature of an image. A set of classification experiments for the high-resolution remote sensing images were performed by using support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved a better image recognition, and the accuracy of classification based on this method has been significantly improved. PMID:28640181
Houshyarifar, Vahid; Chehel Amirani, Mehdi
2016-08-12
In this paper we present a method to predict Sudden Cardiac Arrest (SCA) using higher order spectral (HOS) and linear time-domain features extracted from the heart rate variability (HRV) signal. Predicting the occurrence of SCA is important in order to reduce the probability of Sudden Cardiac Death (SCD). This work takes on the challenge of prediction five minutes before SCA onset. The method consists of four steps: pre-processing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the second step, bispectrum features of the HRV signal and time-domain features are obtained: six features are extracted from the bispectrum and two from the time domain. In the next step, these features are reduced to one feature by the linear discriminant analysis (LDA) technique. Finally, KNN and support vector machine-based classifiers are used to classify the HRV signals. We used two databases, the MIT/BIH Sudden Cardiac Death (SCD) Database and the PhysioBank Normal Sinus Rhythm (NSR) Database. In this work we achieved prediction of SCD occurrence six minutes before the SCA with an accuracy over 91%.
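A partial sketch of the feature pipeline is shown below: time-domain HRV features from detected R peaks, LDA reduction to a single feature, and a KNN classifier. The bispectral (HOS) features are omitted, and SDNN/RMSSD are assumed here as the two time-domain features, which may differ from the paper's choice.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def hrv_time_features(r_peak_samples, fs=250.0):
    # r_peak_samples: indices of detected R peaks in one ECG segment, fs in Hz
    rr = np.diff(r_peak_samples) / fs * 1000.0          # RR intervals in ms
    sdnn = rr.std(ddof=1)                               # standard deviation of RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))          # RMS of successive differences
    return np.array([sdnn, rmssd])

# X: (n_segments, n_features) stacked HRV features, y: 1 = pre-SCA, 0 = normal rhythm
# z = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)  # reduce to one feature
# clf = KNeighborsClassifier(n_neighbors=3).fit(z, y)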
Automated Image Registration Using Morphological Region of Interest Feature Extraction
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2005-01-01
With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
NASA Astrophysics Data System (ADS)
Schumacher, R.; Schimpf, H.; Schiller, J.
2011-06-01
The most challenging problem of Automatic Target Recognition (ATR) is the extraction of robust and independent target features which describe the target unambiguously. These features have to be robust and invariant in different senses: in time, between aspect views (azimuth and elevation angle), between target motions (translation and rotation) and between different target variants. Especially for ground moving targets in military applications, an irregular target motion is typical, so that a strong variation of the backscattered radar signal with azimuth and elevation angle makes the extraction of stable and robust features most difficult. For ATR based on High Range Resolution (HRR) profiles and/or Inverse Synthetic Aperture Radar (ISAR) images it is crucial that the reference dataset consists of stable and robust features, which, among other factors, will depend on the target aspect and depression angle. Here it is important to find an adequate data grid for efficient data coverage in the reference dataset for ATR. In this paper the variability of the backscattered radar signals of target scattering centers is analyzed for different HRR profiles and ISAR images from measured turntable datasets of ground targets under controlled conditions. In particular, the dependency of the features on the elevation angle is analyzed with regard to the ATR of large strip SAR data with a large range of depression angles by using available (I)SAR datasets as reference. In this work the robustness of these scattering centers is analyzed by extracting their amplitude, phase and position. For this purpose, turntable measurements under controlled conditions were performed targeting an artificial military reference object called STANDCAM. Measures referring to variability, similarity, robustness and separability of the scattering centers are defined. The dependency of the scattering behaviour with respect to azimuth and elevation variations is analyzed. Additionally, generic types of features (geometrical, statistical), which can be derived especially from (I)SAR images, are applied to the ATR task. Subsequently, the dependence of individual feature values as well as the feature statistics on aspect (i.e. azimuth and elevation) is presented. The Kolmogorov-Smirnov distance is used to show how the feature statistics are influenced by varying elevation angles. Finally, confusion matrices are computed for the STANDCAM target at all eleven elevation angles. This helps to assess the robustness of ATR performance under the influence of aspect angle deviations between training set and test set.
NASA Astrophysics Data System (ADS)
Yi, Faliu; Moon, Inkyu; Lee, Yeon H.
2015-01-01
Counting morphologically normal cells in human red blood cells (RBCs) is extremely beneficial in the health care field. We propose a three-dimensional (3-D) classification method of automatically determining the morphologically normal RBCs in the phase image of multiple human RBCs that are obtained by off-axis digital holographic microscopy (DHM). The RBC holograms are first recorded by DHM, and then the phase images of multiple RBCs are reconstructed by a computational numerical algorithm. To design the classifier, the three typical RBC shapes, which are stomatocyte, discocyte, and echinocyte, are used for training and testing. Nonmain or abnormal RBC shapes different from the three normal shapes are defined as the fourth category. Ten features, including projected surface area, average phase value, mean corpuscular hemoglobin, perimeter, mean corpuscular hemoglobin surface density, circularity, mean phase of center part, sphericity coefficient, elongation, and pallor, are extracted from each RBC after segmenting the reconstructed phase images by using a watershed transform algorithm. Moreover, four additional properties, such as projected surface area, perimeter, average phase value, and elongation, are measured from the inner part of each cell, which can give significant information beyond the previous 10 features for the separation of the RBC groups; these are verified in the experiment by the statistical method of Hotelling's T-square test. We also apply the principal component analysis algorithm to reduce the dimension number of variables and establish the Gaussian mixture densities using the projected data with the first eight principal components. Consequently, the Gaussian mixtures are used to design the discriminant functions based on Bayesian decision theory. To improve the performance of the Bayes classifier and the accuracy of estimation of its error rate, the leaving-one-out technique is applied. Experimental results show that the proposed method can yield good results for calculating the percentage of each typical normal RBC shape in a reconstructed phase image of multiple RBCs that will be favorable to the analysis of RBC-related diseases. In addition, we show that the discrimination performance for the counting of normal shapes of RBCs can be improved by using 3-D features of an RBC.
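The classifier-design portion of the pipeline above (PCA to eight principal components, per-class Gaussian mixture densities, Bayes decision rule) might be sketched as follows; the number of mixture components per class is a free choice here, and the leave-one-out evaluation loop is omitted.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_bayes_gmm(X, y, n_pcs=8, n_mix=2):
    # X: (n_cells, n_features) RBC feature matrix, y: shape labels
    pca = PCA(n_components=n_pcs).fit(X)
    Z = pca.transform(X)
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}                 # class priors
    gmms = {c: GaussianMixture(n_mix, covariance_type='full',
                               random_state=0).fit(Z[y == c]) for c in classes}
    return pca, priors, gmms, classes

def predict(x, pca, priors, gmms, classes):
    z = pca.transform(x.reshape(1, -1))
    post = [np.log(priors[c]) + gmms[c].score_samples(z)[0] for c in classes]
    return classes[int(np.argmax(post))]                           # Bayes decision rule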
Krüger, Hans
2010-05-01
A new method for complete separation of steam-volatile organic compounds is described using the example of chamomile flowers. This method is based on the direct combination of hydrodistillation and solid-phase extraction in a circulation apparatus. In contrast to hydrodistillation and simultaneous distillation extraction (SDE), an RP-18 solid phase as adsorptive material is used rather than a water-insoluble solvent. Therefore, a prompt and complete fixation of all volatiles takes place, and the circulation of water-soluble bisabololoxides as well as water-soluble and thermolabile en-yne-spiroethers is inhibited. This so-called simultaneous distillation solid-phase extraction (SD-SPE) provides extracts that better characterise the real composition of the vapour phase, as well as the composition of inhalation vapours, than do SDE extracts or essential oils obtained by hydrodistillation. The data indicate that during inhalation therapy with chamomile, the bisabololoxides and spiroethers are more strongly involved in the inhaling activity than so far assumed. Georg Thieme Verlag KG Stuttgart New York.
NASA Astrophysics Data System (ADS)
Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi
2016-10-01
The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize the material with its elemental properties has inspired this research, which has developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consisted of two parts: automatic feature extraction and classification. For automatic feature extraction, an algorithm was introduced for extracting discriminant features from the images and spectra of cervical cells in the FE-SEM/EDX data. The system automatically extracted two types of features, based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features were extracted from the FE-SEM/EDX image using a gray level co-occurrence matrix technique, while the FE-SEM/EDX spectral features were calculated based on peak heights and the corrected areas under the peaks. A discriminant analysis technique was employed to classify the cervical precancerous stage into three classes: normal, low-grade intraepithelial squamous lesion (LSIL), and high-grade intraepithelial squamous lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity performances were 98.2%, 99.0%, and 98.0%, respectively.
A method for quickly and exactly extracting hepatic vein
NASA Astrophysics Data System (ADS)
Xiong, Qing; Yuan, Rong; Wang, Luyao; Wang, Yanchun; Li, Zhen; Hu, Daoyu; Xie, Qingguo
2013-02-01
It is of vital importance to provide detailed and accurate information about the hepatic vein (HV) for liver surgery planning, such as pre-operative planning of living donor liver transplantation (LDLT). Due to the different blood flow rates of the intra-hepatic vascular systems and the restrictions of CT scanning, it is common that the HV and the hepatic portal vein (HPV) are both filled with contrast medium during the scan and both appear at high intensity in hepatic venous phase images. As a result, the HV segmentation result obtained from the hepatic venous phase images is always contaminated by the HPV, which makes accurate HV modeling difficult. In this paper, we propose a method for quick and accurate HV extraction. Based on the topological structure of the intra-hepatic vessels, we analyzed the anatomical features of the HV and HPV. According to this analysis, three conditions were presented to identify the nodes that connect the HV with the HPV in the topological structure, and thus to distinguish the HV from the HPV. The method takes less than one minute to extract the HV and provides a correct and detailed HV model even with vessel variations. Evaluated by two experienced radiologists, the accuracy of the HV model obtained from our method is over 97%. In future work, we will extend this to a comprehensive clinical evaluation and apply the method to actual LDLT surgical planning.
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
Zhang, Yuchi; Liu, Chunming; Li, Jing; Qi, Yanjuan; Li, Yuchun; Li, Sainan
2015-09-01
A new method for the extraction of medicinal herbs termed ultrasonic-assisted dynamic extraction (UADE) was designed and evaluated. This technique was coupled with counter-current chromatography (CCC) and centrifugal partition chromatography (CPC) and then applied to the continuous extraction and online isolation of chemical constituents from Paeonia lactiflora Pall (white peony) roots. The mechanical parameters, including the pitch and diameter of the shaft, were optimized by means of mathematical modeling. Furthermore, the configuration and mechanism of online UADE coupled with CCC and CPC were elaborated. The stationary phases of the two-phase solvent systems from CCC and CPC were utilized as the UADE solution. The extraction solution was pumped into the sample loop and then introduced into the CCC column; the target compounds were eluted with the lower aqueous phase of the two-phase solvent system. During the CCC separation, the extraction solution was continuously fed in the sample loop by turning the ten-port valve; the extraction solution was then pumped into the CPC column and eluted by the mobile phase of the two-phase solvent system mentioned above. When the first cycle of the UADE/CCC/CPC was completed, the second cycle experiment could be carried out, and so on. Four target compounds (albiflorin, benzoylpaeoniflorin, paeoniflorin, and galloylpaeoniflorin) with purities above 94.96% were successfully extracted and isolated online using the two-phase solvent system comprising ethyl acetate-n-butanol-ethanol-water (1:3.5:2:4.5, v/v/v/v). Compared with conventional extraction methods, the instrumental setup of the present method offers the advantages of automation and systematic extraction and isolation of natural products. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
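A minimal sketch of the first two processing steps (not the authors' code): a cubic smoothing spline is fitted to a synthetic two-return waveform and local peaks are then located. scipy's find_peaks stands in for the second-derivative test, and the sample times, smoothing factor and peak threshold are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import find_peaks

t = np.arange(0, 60.0, 1.0)                      # hypothetical sample times (ns)
waveform = (np.exp(-(t - 20) ** 2 / 18.0) * 80   # synthetic two-return waveform
            + np.exp(-(t - 38) ** 2 / 30.0) * 45
            + np.random.default_rng(0).normal(0, 1.5, t.size))

spline = UnivariateSpline(t, waveform, k=3, s=len(t) * 2.0)  # cubic smoothing spline
smooth = spline(t)

peaks, _ = find_peaks(smooth, height=10)         # local peaks of the fitted curve
amplitudes = smooth[peaks]
n_returns = peaks.size
dt_first_last = t[peaks[-1]] - t[peaks[0]] if n_returns > 1 else 0.0
print(n_returns, amplitudes.round(1), dt_first_last)
```

From the detected peaks, features such as the number of returns, the peak amplitudes and the time difference between the first and last peaks follow directly, as in the feature list described above.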
Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed
2018-02-06
Prostate cancer is the second leading cause of cancer death among men. Early detection of cancer can effectively reduce the mortality caused by prostate cancer. The high resolution and multiresolution nature of prostate MRI requires proper diagnostic systems and tools. In the past, researchers developed computer-aided diagnosis (CAD) systems that help the radiologist detect abnormalities. In this research paper, we have employed machine learning techniques, namely a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF) and Gaussian) and decision trees, for detecting prostate cancer. Moreover, different feature extraction strategies are proposed to improve the detection performance. The feature extraction strategies are based on texture, morphological, scale invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. The performance was evaluated using single features as well as combinations of features with the machine learning classification techniques. Cross validation (jack-knife k-fold) was performed, and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV) and false positive rate (FPR). Based on the single-feature extraction strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999, while, using combinations of feature extraction strategies, the SVM Gaussian kernel with texture + morphological features and with EFD + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar is used ever more widely in remote sensing because of its all-time and all-weather operation, and feature extraction from high resolution SAR images has become a topic of active research. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First of all, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram function method, respectively, with directional information taken into account. Next, feature weights are calculated according to the Bhattacharyya distance, and all features are fused by weighting. At last, the fused image is classified with the K-means classification method and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; at the same time, two groups of comparison experiments based on statistical texture alone and structural texture alone were carried out. In addition to qualitative analysis, a quantitative analysis based on manually delineated built-up areas was performed: in the relatively simple test area, the detection rate is above 90%, and in the relatively complex test area, the detection rate is also higher than that of the other two methods. The results for the study area show that this method can effectively and accurately extract built-up areas in high resolution airborne SAR imagery.
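A hedged sketch of the statistical-texture step only: grey level co-occurrence matrix (GLCM) features computed in four directions with scikit-image. The patch size, grey-level quantization and property choice are assumptions, and the variogram, Bhattacharyya weighting and K-means steps are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
patch = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)   # stand-in SAR patch

glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=64, symmetric=True, normed=True)
contrast = graycoprops(glcm, 'contrast').ravel()      # one value per direction
homogeneity = graycoprops(glcm, 'homogeneity').ravel()
texture_vector = np.concatenate([contrast, homogeneity])
print(texture_vector.shape)     # 8 directional texture features for this patch
```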
SANS study of reverse micelles formed upon extraction of inorganic acids by TBP in n-octane.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiarizia, R.; Briand, A.; Jensen, M. P.
2008-01-01
Small-angle neutron scattering (SANS) data for n-octane solutions of TBP loaded with progressively larger amounts of HNO3, HClO4, H2SO4, and H3PO4, up to and beyond the LOC (limiting organic concentration of acid) condition, were interpreted using the Baxter model for hard spheres with surface adhesion. The coherent picture of the behavior of the TBP solutions derived from the SANS investigation discussed in this paper confirmed our recently developed model for third phase formation. This model analyzes the features of the scattering data in the low-Q region as arising from van der Waals interactions between the polar cores of reverse micelles. Our SANS data indicated that the TBP micelles swell when acid and water are extracted into their polar core. The swollen micelles have critical diameters ranging from 15 to 22 Å, and polar core diameters between 10 and 15 Å, depending on the specific system. At the respective LOC conditions, the TBP weight-average aggregation numbers are about 4 for HClO4, 6 for H2SO4, 7 for HCl, and 10 for H3PO4. The comparison between the behavior of HNO3, a non-third-phase-forming acid, and the other acids provided an explanation of the effect of the water molecules present in the polar core of the micelles on third phase formation. The thickness of the lipophilic shell of the micelles indicated that the butyl groups of TBP lie at an angle of about 25 degrees relative to a plane tangent to the micellar core. The critical energy of intermicellar attraction, U(r), was about -2 kBT for all the acids investigated. This value is the same as that reported in our previous publications on the extraction of metal nitrates by TBP, confirming that the same mechanism and energetics are operative in the formation of a third phase, independent of whether the chemical species extracted are metal nitrate salts or inorganic acids.
Sudarshan, Vidya K; Acharya, U Rajendra; Ng, E Y K; Tan, Ru San; Chou, Siaw Meng; Ghista, Dhanjoo N
2016-04-01
Early expansion of the infarcted zone after Acute Myocardial Infarction (AMI) has serious short- and long-term consequences and contributes to increased mortality. Thus, identification of the moderate and severe phases of AMI before they lead to other catastrophic post-MI medical conditions is most important for aggressive treatment and management. Advanced image processing techniques together with a robust classifier using two-dimensional (2D) echocardiograms may aid the automated classification of the extent of infarcted myocardium. Therefore, this paper proposes novel algorithms, namely Curvelet Transform (CT) and Local Configuration Pattern (LCP), for automated detection of normal, moderately infarcted and severely infarcted myocardium using 2D echocardiograms. The methodology extracts the LCP features from CT coefficients of echocardiograms. The obtained features are subjected to the Marginal Fisher Analysis (MFA) dimensionality reduction technique followed by a fuzzy entropy based ranking method. Different classifiers are used to differentiate the ranked features into three classes (normal, moderately infarcted and severely infarcted) based on the extent of damage to the myocardium. The developed algorithm has achieved an accuracy of 98.99%, sensitivity of 98.48% and specificity of 100% for the Support Vector Machine (SVM) classifier using only six features. Furthermore, we have developed an integrated index called the Myocardial Infarction Risk Index (MIRI) to detect normal, moderately and severely infarcted myocardium using a single number. The proposed system may aid clinicians in faster identification and quantification of the extent of infarcted myocardium using 2D echocardiograms. This system may also aid in identifying persons at risk of developing heart failure based on the extent of infarcted myocardium. Copyright © 2016 Elsevier Ltd. All rights reserved.
A method for automatic feature points extraction of human vertebrae three-dimensional model
NASA Astrophysics Data System (ADS)
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
Extraction of linear features on SAR imagery
NASA Astrophysics Data System (ADS)
Liu, Junyi; Li, Deren; Mei, Xin
2006-10-01
Linear features are usually extracted from SAR imagery by a few edge detectors derived from the contrast ratio edge detector with a constant probability of false alarm. The Hough Transform (HT), on the other hand, is an elegant way of extracting global features such as curve segments from binary edge images. The Randomized Hough Transform (RHT) can drastically reduce the computation time and memory usage of the HT, but random sampling causes a large number of accumulator cells to receive invalid votes. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each edge candidate point to solve the invalid accumulation problem. The applied results are in good agreement with the theoretical study, and the main linear features in the SAR imagery have been extracted automatically. The method saves storage space and computation time, which shows its effectiveness and applicability.
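A rough numerical sketch of the stated idea, under the assumption that each edge pixel's gradient direction is used to confine Hough voting to a narrow band of orientations (the function below is hypothetical, not the paper's algorithm).

```python
import numpy as np

def directional_hough(edge_pts, grad_dirs, shape, n_theta=180, band=5):
    """Accumulate (rho, theta) votes only near each point's gradient orientation."""
    diag = int(np.hypot(*shape))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    thetas = np.deg2rad(np.arange(n_theta))
    for (y, x), g in zip(edge_pts, grad_dirs):
        # in the normal parameterisation rho = x*cos(theta) + y*sin(theta),
        # theta follows the gradient direction of the edge pixel
        centre = int(np.rad2deg(g) % 180)
        for k in range(centre - band, centre + band + 1):
            th = thetas[k % n_theta]
            rho = int(round(x * np.cos(th) + y * np.sin(th))) + diag
            acc[rho, k % n_theta] += 1
    return acc

pts = [(10, i) for i in range(50)]                   # a horizontal line of edge pixels
dirs = [np.pi / 2] * len(pts)                        # gradient points vertically
acc = directional_hough(pts, dirs, shape=(100, 100))
print(np.unravel_index(acc.argmax(), acc.shape))     # strongest (rho, theta) cell
```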
NASA Astrophysics Data System (ADS)
Jiang, Li; Xuan, Jianping; Shi, Tielin
2013-12-01
Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.
Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram
Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi
2016-01-01
Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient for time-frequency analysis and kurtosis is deficient in detecting cyclic transients. These factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is therefore adopted as a more effective indicator for detecting cyclic transients. The Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed effective in capturing a more detailed local time-frequency description of the signal and in restricting the frequency aliasing components of the analysis results. In this manuscript, the authors combine the CK with the RSGWPT and propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulated signals and real application cases demonstrates that the proposed method is more accurate and effective in extracting weak fault features. PMID:27649171
Wang, Jinjia; Zhang, Yanna
2015-02-01
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines a multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. Firstly, we calculated the MVAR model coefficient matrix of the MEG/EEG signals; then we reduced the dimensionality to a lower one using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is that we extended the traditional single-channel feature extraction method to the multichannel case. We then carried out experiments using data sets IV-III and IV-I. The experimental results proved that the method proposed in this paper is feasible.
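A hedged illustration of the first step only, assuming the statsmodels library: an MVAR model is fitted to synthetic multichannel data and its coefficient matrices are collected as features. The channel count and model order are placeholders, and the MPCA reduction and Bayes classification steps are not shown.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
eeg = rng.standard_normal((512, 8))         # 512 samples x 8 channels (synthetic)

model = VAR(eeg)
fit = model.fit(maxlags=4)                  # MVAR model of order 4
coef_tensor = fit.coefs                     # shape (4, 8, 8): one matrix per lag
feature_vector = coef_tensor.reshape(-1)    # flatten before dimensionality reduction
print(coef_tensor.shape, feature_vector.shape)
```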
Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo
2015-01-01
Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause performance degradation of object recognition systems. Therefore, this paper presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed that classifies SIFT features into several clusters based on attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, feature matching is performed in a prioritized order based on the scale factor calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of the object recognition, which is essential for an autonomous object manipulation process. Compared to the original SIFT algorithm for object recognition, the results of the proposed method show that the number of SIFT features extracted from an object increases significantly, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirm that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094
What’s in a URL? Genre Classification from URLs
2012-01-01
[Fragmented excerpt; only partial text is recoverable.] The report compares genre classification of webpages using features extracted from document content with classification using features extracted from URLs alone. Character n-grams (sequences of n characters) are noted as attractive features because of their simplicity and because they encapsulate both lexical and stylistic information, and the syntactic characteristics of URLs are observed to have been fairly stable over the years.
Detection of goal events in soccer videos
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas
2005-01-01
In this paper, we present automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method with the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games consisting of eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
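A minimal sketch of the MFCC step, assuming the librosa library; the frame settings are illustrative and the HMM-based event detection that follows is not shown.

```python
import numpy as np
import librosa

sr = 22050
audio = np.random.default_rng(0).standard_normal(sr * 2).astype(np.float32)  # 2 s stand-in clip

mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13,
                            n_fft=2048, hop_length=512)
print(mfcc.shape)        # (13, n_frames): one 13-dim feature vector per frame
```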
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN
Cheng, Gang; Chen, Xihui
2018-01-01
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671
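A minimal numerical sketch of the partition-and-SVD step: a synthetic mode matrix stands in for the VMD output, it is partitioned along the time axis, and each submatrix contributes its singular values to the feature matrix that would be fed to the CNN. Sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mode_matrix = rng.standard_normal((4, 1024))         # 4 modes x 1024 samples (stand-in)

n_parts = 8
submatrices = np.split(mode_matrix, n_parts, axis=1) # partition along the time axis
singular_vectors = [np.linalg.svd(sub, compute_uv=False) for sub in submatrices]
feature_matrix = np.stack(singular_vectors)          # (8, 4): CNN input for this fault state
print(feature_matrix.shape)
```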
Tahara, Tatsuki; Mori, Ryota; Kikunaga, Shuhei; Arai, Yasuhiko; Takaki, Yasuhiro
2015-06-15
Dual-wavelength phase-shifting digital holography that selectively extracts wavelength information from five wavelength-multiplexed holograms is presented. Specific phase shifts for respective wavelengths are introduced to remove the crosstalk components and extract only the object wave at the desired wavelength from the holograms. Object waves in multiple wavelengths are selectively extracted by utilizing 2π ambiguity and the subtraction procedures based on phase-shifting interferometry. Numerical results show the validity of the proposed technique. The proposed technique is also experimentally demonstrated.
Automated feature extraction and classification from image sources
1995-01-01
The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, R.; Bianco, P. Rizzo, M.; Pressly, N.
1995-12-31
Soil and groundwater contaminated with jet fuel at Terminal One of the JFK International Airport in New York have been remediated using dual phase extraction (DPE) and bioventing. Two areas were remediated using 51 DPE wells and 20 air sparging/air injection wells. The total area remediated by the DPE wells is estimated to be 4.8 acres. Groundwater was extracted to recover nonaqueous-phase and aqueous-phase jet fuel from the shallow aquifer and treated above ground by the following processes: oil/water separation, iron oxidation, flocculation, sedimentation, filtration, air stripping and liquid-phase granular activated carbon (LPGAC) adsorption. The extracted vapors were treated by vapor-phase granular activated carbon (VPGAC) adsorption in one area, and by catalytic oxidation and VPGAC adsorption in the other area. After 6 months of remediation, approximately 5,490 lbs. of volatile organic compounds (VOCs) were removed by soil vapor extraction (SVE), 109,650 lbs. of petroleum hydrocarbons were removed from the extracted groundwater, and 60,550 lbs. of petroleum hydrocarbons were biologically oxidized by subsurface microorganisms. Of these three mechanisms, the rate of petroleum hydrocarbon removal was highest for biological oxidation in one area and for groundwater extraction in the other area.
Rout, Alok; Binnemans, Koen
2014-02-28
The solvent extraction of trivalent rare-earth ions and their separation from divalent transition metal ions using molten salt hydrates as the feed phase and an undiluted fluorine-free ionic liquid as the extracting phase were investigated in detail. The extractant was tricaprylmethylammonium nitrate, [A336][NO3], and the hydrated melt was calcium nitrate tetrahydrate, Ca(NO3)2·4H2O. The extraction behavior of rare-earth ions was studied for solutions of individual elements, as well as for mixtures of rare earths in the hydrated melt. The influence of different extraction parameters was investigated: the initial metal loading in the feed phase, percentage of water in the feed solution, equilibration time, and the type of hydrated melt. The extraction of rare earths from Ca(NO3)2·4H2O was compared with extraction from CaCl2·4H2O by [A336][Cl] (Aliquat 336). The nitrate system was found to be the better one. The extraction and separation of rare earths from the transition metals nickel, cobalt and zinc were also investigated. Remarkably high separation factors of rare-earth ions over transition metal ions were observed for extraction from Ca(NO3)2·4H2O by the [A336][NO3] extracting phase. Furthermore, rare-earth ions could be separated efficiently from transition metal ions, even in melts with very high concentrations of transition metal ions. Rare-earth oxides could be directly dissolved in the Ca(NO3)2·4H2O phase in the presence of small amounts of Al(NO3)3·9H2O or concentrated nitric acid. The efficiency of extraction after dissolving the rare-earth oxides in the hydrated nitrate melt was identical to extraction from solutions with rare-earth nitrates dissolved in the molten phase. The stripping of the rare-earth ions from the loaded ionic liquid phase and the reuse of the recycled ionic liquid were also investigated in detail.
Prominent feature extraction for review analysis: an empirical study
NASA Astrophysics Data System (ADS)
Agarwal, Basant; Mittal, Namita
2016-05-01
Sentiment analysis (SA) research has increased tremendously in recent times. SA aims to determine the sentiment orientation of a given text as positive or negative polarity. The motivation for SA research is the need for industry to know the opinion of users about their products from online portals, blogs, discussion boards, reviews and so on. Efficient features need to be extracted for the machine-learning algorithm to achieve better sentiment classification. In this paper, various features are initially extracted from the text, such as unigrams, bi-grams and dependency features. In addition, new bi-tagged features are also extracted that conform to predefined part-of-speech patterns. Furthermore, various composite features are created using these features. Information gain (IG) and minimum redundancy maximum relevancy (mRMR) feature selection methods are used to eliminate noisy and irrelevant features from the feature vector. Finally, machine-learning algorithms are used to classify the review document into positive or negative class. The effects of different categories of features are investigated on four standard data-sets, namely, movie review and product (book, DVD and electronics) review data-sets. Experimental results show that composite features created from prominent features of unigrams and bi-tagged features perform better than other features for sentiment classification. mRMR is a better feature selection method than IG for sentiment classification. The Boolean Multinomial Naïve Bayes algorithm performs better than the support vector machine classifier for SA in terms of accuracy and execution time.
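A hedged scikit-learn sketch of two of the described steps: unigram and bigram extraction followed by a feature ranking step. Mutual information is used here as a stand-in for information gain, and the bi-tagged features and mRMR selection are not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

docs = ["the movie was wonderful and moving",
        "a dull, poorly acted film",
        "great soundtrack and brilliant plot",
        "boring plot and weak characters"]
labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

vec = CountVectorizer(ngram_range=(1, 2), binary=True)   # unigrams + bigrams
X = vec.fit_transform(docs)

selector = SelectKBest(mutual_info_classif, k=10)        # keep the 10 most informative features
X_sel = selector.fit_transform(X, labels)
print(X.shape, '->', X_sel.shape)
```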
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei
2015-03-01
A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.
Line fitting based feature extraction for object recognition
NASA Astrophysics Data System (ADS)
Li, Bing
2014-06-01
Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generate hierarchical features. This new approach applies line fitting to adaptively divide regions based upon the amount of information and creates line fitting features for each subsequent region. It overcomes the feature-wasting drawback of the wavelet based approach and demonstrates high performance in real applications. For gray scale images, we propose a diffusion equation approach to map information-rich pixels (pixels near edges and ridge pixels) to high values, and pixels in homogeneous regions to small values near zero, forming energy map images. After the energy map images are generated, we propose a line fitting approach to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, so that the feature-waste problems of the wavelet approach in homogeneous regions are avoided. Finally, experiments on handwriting word recognition show that the new method provides higher performance than the regular handwriting word recognition approach.
Artificially intelligent recognition of Arabic speaker using voice print-based local features
NASA Astrophysics Data System (ADS)
Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz
2016-11-01
Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. This feature was extracted in the time-frequency plane by taking the moving average along the diagonal directions of the time-frequency plane. This feature captures the time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we refer to this technique as a voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database that consists of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate compared to 96.7% for MFCC using the LDC subset.
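A rough sketch of the stated idea: averaging a time-frequency representation along diagonal directions of the time-frequency plane. The spectrogram settings and diagonal window length are assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
signal = np.random.default_rng(0).standard_normal(fs)       # 1 s stand-in utterance
f, t, S = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)

def diagonal_moving_average(tf_plane, length=5):
    """Average each point with its neighbours along the main diagonal."""
    rows, cols = tf_plane.shape
    out = np.zeros_like(tf_plane)
    for r in range(rows):
        for c in range(cols):
            vals = [tf_plane[r + k, c + k]
                    for k in range(-length, length + 1)
                    if 0 <= r + k < rows and 0 <= c + k < cols]
            out[r, c] = np.mean(vals)
    return out

voice_print = diagonal_moving_average(np.log(S + 1e-10))
print(voice_print.shape)
```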
Feasibility of Surfactant-Free Supported Emulsion Liquid Membrane Extraction
NASA Technical Reports Server (NTRS)
Hu, Shih-Yao B.; Li, Jin; Wiencek, John M.
2001-01-01
Supported emulsion liquid membrane (SELM) is an effective means to conduct liquid-liquid extraction. SELM extraction is particularly attractive for separation tasks in the microgravity environment where density difference between the solvent and the internal phase of the emulsion is inconsequential and a stable dispersion can be maintained without surfactant. In this research, dispersed two-phase flow in SELM extraction is modeled using the Lagrangian method. The results show that SELM extraction process in the microgravity environment can be simulated on earth by matching the density of the solvent and the stripping phase. Feasibility of surfactant-free SELM (SFSELM) extraction is assessed by studying the coalescence behavior of the internal phase in the absence of the surfactant. Although the contacting area between the solvent and the internal phase in SFSELM extraction is significantly less than the area provided by regular emulsion due to drop coalescence, it is comparable to the area provided by a typical hollow-fiber membrane. Thus, the stripping process is highly unlikely to become the rate-limiting step in SFSELM extraction. SFSELM remains an effective way to achieve simultaneous extraction and stripping and is able to eliminate the equilibrium limitation in the typical solvent extraction processes. The SFSELM design is similar to the supported liquid membrane design in some aspects.
Sun, Ying-Ying; Liu, Xiao-Xiao; Wang, Chang-Hai
2010-06-01
To study the effects of extracts of Enteromorpha prolifera on the growth of four species of red tide microalgae (Amphidinium hoefleri, Karenia mikimitoi, Alexandrium tamarense and Skeletonema costatum), extracts were prepared with five solvents (methanol, acetone, ethyl acetate, chloroform and petroleum ether), respectively. Based on observation of algal morphology and measurement of algal density, cell size and the contents of physiological indicators (chlorophyll, protein and polysaccharide), the methanol extracts of E. prolifera showed the strongest action. The inhibitory effects of the methanol extracts on A. hoefleri, K. mikimitoi, A. tamarense and S. costatum were 54.0%, 48.1%, 44.0% and 37.5% on day 10, respectively. The extracts obtained with methanol, acetone and ethyl acetate caused cavities, fragments and pigment reduction in the cells, and those obtained with chloroform and petroleum ether caused goffering of the cells. The extracts obtained with all five solvents decreased the motility of the cells, and those obtained with ethyl acetate, chloroform and petroleum ether decreased the cell size of the test microalgae. Further investigation found that the methanol extracts significantly decreased the contents of chlorophyll, protein and polysaccharide in the cells of these microalgae; the inhibitory effect of the methanol extracts on the chlorophyll, protein and polysaccharide contents of the four species was about 51%. On the basis of the above experiments, dry powder of E. prolifera was extracted with methanol, and the methanol extracts were partitioned into a petroleum ether phase, an ethyl acetate phase, an n-butanol phase and a distilled water phase by liquid-liquid fractionation. The petroleum ether and ethyl acetate fractions significantly inhibited the growth of all test microalgae, with an inhibitory effect above 25% on day 10 for all four species. These results show that the antialgal substances in E. prolifera are recovered by methanol extraction, and that two fractions (the petroleum ether phase and the ethyl acetate phase) inhibiting the growth of all test microalgae are obtained when the methanol extracts are fractionated by liquid-liquid fractionation.
Jiang, Ling-Feng; Chen, Bo-Cheng; Chen, Ben; Li, Xue-Jian; Liao, Hai-Lin; Zhang, Wen-Yan; Wu, Lin
2017-07-01
The extraction adsorbent was fabricated by immobilizing an aptamer, with its highly specific recognition and binding ability, onto the surface of Fe3O4 magnetic nanoparticles; the aptamer not only acted as a recognition element to recognize and capture the target molecule berberine from the extract of Cortex phellodendri, but also allowed rapid separation and purification of the bound berberine using an external magnet. The solid-phase extraction method developed in this work was useful for the selective extraction and determination of berberine in Cortex phellodendri extracts. Various conditions such as the amount of aptamer-functionalized Fe3O4 magnetic nanoparticles, extraction time, temperature, pH value, Mg2+ concentration, elution time and solvent were optimized for the solid-phase extraction of berberine. Under optimal conditions, the purity of berberine extracted from Cortex phellodendri was as high as 98.7%, compared with 4.85% in the crude extract, indicating that the aptamer-functionalized Fe3O4 magnetic nanoparticle-based solid-phase extraction method was very effective for berberine enrichment and separation from a complex herb extract. The applicability and reliability of the developed solid-phase extraction method were demonstrated by separating berberine from nine different concentrations of one Cortex phellodendri extract. The relative recoveries of the spiked solutions of all the samples were between 95.4 and 111.3%, with relative standard deviations ranging between 0.57 and 1.85%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lou, Chaoyan; Wu, Can; Zhang, Kai; Guo, Dandan; Jiang, Lei; Lu, Yang; Zhu, Yan
2018-05-18
Allergenic disperse dyes are a group of environmental contaminants, which are toxic and mutagenic to human beings. In this work, a method of dispersive solid-phase extraction (d-SPE) using graphene-coated polystyrene-divinylbenzene (G@PS-DVB) microspheres coupled with supercritical fluid chromatography (SFC) was proposed for the rapid determination of 10 allergenic disperse dyes in industrial wastewater samples. G@PS-DVB microspheres were synthesized by coating graphene (G) sheets onto polystyrene-divinylbenzene (PS-DVB) polymers. Such novel sorbents were employed in d-SPE for the purification and concentration of allergenic disperse dyes in wastewater samples prior to the determination by SFC with UV detection. To achieve the maximum extraction efficiency for the target dyes, several parameters influencing the d-SPE process, such as sorbent dosage, extraction time and desorption conditions, were investigated. SFC conditions including stationary phase, modifier composition and percentage, column temperature, backpressure and flow rate were optimized to well separate the allergenic disperse dyes. Under the optimum conditions, a satisfactory linear relationship (R ≥ 0.9989) was observed with the concentration of dyes ranging from 0.02 to 10.0 μg/mL. The limits of detection (LOD, S/N = 3) for the ten dyes were in the range of 1.1-15.6 ng/mL. Recoveries for the spiked samples were between 89.1% and 99.7% with relative standard deviations (RSD) lower than 10.5% in all cases. The proposed method is time-saving, green, precise and repeatable for the analysis of the target dyes. Furthermore, the application of the G@PS-DVB based d-SPE process can potentially be expanded to isolate and concentrate other aromatic compounds in various matrices, and supercritical fluid chromatography, featuring rapidity, accuracy and greenness, will be an ideal candidate for the analysis of these compounds. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki
2017-09-01
Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on the local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found showing high FER rates. Evaluation of the adaptive texture features shows competitive and higher performance than the nonadaptive features and other state-of-the-art approaches, respectively.
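A brief sketch of a first-stage texture descriptor with scikit-image: a uniform LBP histogram over a stand-in face crop. The adaptive, granulometry-driven CS-LBP neighbourhood selection described above is not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(96, 96)).astype(np.uint8)   # stand-in face crop

P, R = 8, 1                                   # 8 neighbours on a radius-1 circle
lbp = local_binary_pattern(face, P, R, method='uniform')
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
print(hist.round(3))                          # (P + 2)-bin uniform-LBP descriptor
```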
Novel ultrasonic real-time scanner featuring servo controlled transducers displaying a sector image.
Matzuk, T; Skolnick, M L
1978-07-01
This paper describes a new real-time servo controlled sector scanner that produces high resolution images and has functionally programmable features similar to phased array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. The unique feature is the transducer head, which contains a single moving part--the transducer--enclosed within a light-weight, hand-held, and vibration-free case. The frame rate, sector width and stop-action angle are all operator programmable. The frame rate can be varied from 12 to 30 frames per second and the sector width from 0 degrees to 60 degrees. Conversion from sector to time motion (T/M) mode is instant, and two options are available: a freeze-position high density T/M and a low density T/M obtainable simultaneously during sector visualization. Unusual electronic features are: automatic gain control, electronic recording of images on video tape in rf format, and the ability to post-process images during video playback to extract the T/M display and to change the time gain control (tgc) and image size.
NASA Astrophysics Data System (ADS)
Paino, A.; Keller, J.; Popescu, M.; Stone, K.
2014-06-01
In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
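A small sketch of the fitness evaluation described above: a candidate feature set is scored by the cross-validated accuracy of an SVM trained on it. Synthetic data stands in for the FLIR chips, and the fold count is an assumption.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))            # features from one candidate extractor
y = rng.integers(0, 2, 200)                   # buried object / no object labels

def fitness(features, labels, folds=5):
    """Cross-validated SVM accuracy used as the GP fitness score."""
    return cross_val_score(SVC(kernel='rbf'), features, labels, cv=folds).mean()

print(round(fitness(X, y), 3))
```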
VHR satellite imagery for humanitarian crisis management: a case study
NASA Astrophysics Data System (ADS)
Bitelli, Gabriele; Eleias, Magdalena; Franci, Francesca; Mandanici, Emanuele
2017-09-01
During the last years, remote sensing data along with GIS have been largely employed for supporting emergency management activities. In this context, the use of satellite images and derived map products has become more common also in the different phases of humanitarian crisis response. In this work very high resolution satellite imagery was processed to assess the evolution of Za'atari Refugee Camp, built in Jordan in 2012 by the UN Refugee Agency to host Syrian refugees. Multispectral satellite scenes of the Za'atari area were processed by means of object-based classifications. The main aim of the present work is the development of a semiautomated procedure for multi-temporal camp monitoring with particular reference to the dwellings detection. Whilst in the emergency mapping domain automation of feature extraction is widely investigated, in the field of humanitarian missions the information is often extracted by means of photointerpretation of the satellite data. This approach requires time for the interpretation; moreover, it is not reliable enough in complex situations, where features of interest are often small, heterogeneous and inconsistent. Therefore, the present paper discusses a methodology to obtain information for assisting humanitarian crisis management, using a semi-automatic classification approach applied to satellite imagery.
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, S.; Yang, D.
2017-09-01
Remote sensing images are particularly well suited to the analysis of land cover change. In this paper, we present a new framework for detecting land cover change using satellite imagery. Morphological features and multiple indices are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: image segmentation is used to extract morphological features, while the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation are used to extract and remove the effects of shadows on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indices. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
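A hedged sketch of the index computations mentioned above; the band reflectances, the EVI coefficients and the thresholds are illustrative assumptions rather than the paper's exact preprocessing. NDWI uses the green and near-infrared bands; EVI uses the blue, red and near-infrared bands.

```python
import numpy as np

rng = np.random.default_rng(0)
blue, green, red, nir = (rng.uniform(0.01, 0.6, (100, 100)) for _ in range(4))  # stand-in reflectances

ndwi = (green - nir) / (green + nir + 1e-6)
evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

water_mask = ndwi > 0.2          # hypothetical threshold before the fragmentation check
vegetation_mask = evi > 0.3
print(water_mask.mean(), vegetation_mask.mean())
```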
Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study
Miyasike-daSilva, Veronica; McIlroy, William E.
2012-01-01
Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL in most stair steps. Navigating on the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information regarding stair features to guide stair walking, despite the unique control challenges at transition phases as highlighted by phase-specific challenges in dual-tasking. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information regarding step features from the entire visual field. PMID:22970297
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.
2018-04-01
A new information extraction method for damaged buildings, based on an optimal feature space, is put forward as an extension of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and the minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake imagery. The overall extraction accuracy reaches 83.1% and the kappa coefficient 0.813. Compared with the traditional object-oriented method, the new information extraction method greatly improves extraction accuracy and efficiency, and has good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions in order to seek the optimal observation scale for damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale for damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
NASA Astrophysics Data System (ADS)
Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.
2017-03-01
Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a standard machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. These features are then fed to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, with only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN, so overfitting might have influenced the results even though we cross-validated the network. Thus, although we present a promising method for extracting features and classifying breast density, a larger database is still required for evaluating the results.
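A minimal sketch of the general idea, assuming Keras: a small CNN is trained on stand-in mammogram patches and the activations of an 8-unit dense layer are read out as features for a separate classifier. The architecture, training setup and data are illustrative only, not the authors' configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x = np.random.rand(32, 260, 200, 1).astype("float32")        # stand-in mammogram patches
y = keras.utils.to_categorical(np.random.randint(0, 4, 32), 4)

inputs = keras.Input(shape=(260, 200, 1))
h = layers.Conv2D(8, 3, activation="relu")(inputs)
h = layers.MaxPooling2D(4)(h)
h = layers.Flatten()(h)
feat = layers.Dense(8, activation="relu", name="deep_features")(h)   # 8 feature neurons
outputs = layers.Dense(4, activation="softmax")(feat)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(x, y, epochs=1, verbose=0)

extractor = keras.Model(inputs, model.get_layer("deep_features").output)
features = extractor.predict(x, verbose=0)    # (32, 8) features for the downstream classifier
print(features.shape)
```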
Feature extraction applied to agricultural crops as seen by LANDSAT
NASA Technical Reports Server (NTRS)
Kauth, R. J.; Lambeck, P. F.; Richardson, W.; Thomas, G. S.; Pentland, A. P. (Principal Investigator)
1979-01-01
The physical interpretation of the spectral-temporal structure of LANDSAT data can be conveniently described in terms of a graphic descriptive model called the Tasseled Cap. This model has been a source of development not only in crop-related feature extraction, but also for data screening and for haze effects correction. Following its qualitative description and an indication of its applications, the model is used to analyze several feature extraction algorithms.
Optical character recognition with feature extraction and associative memory matrix
NASA Astrophysics Data System (ADS)
Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa
1998-06-01
A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly shows the effectiveness of the method.
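A hedged numerical sketch of an associative memory matrix built with SVD, in which small singular values are modified before forming the pattern-to-label mapping; the pattern sizes and the modification rule are illustrative, and the optical implementation is of course not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, (64, 25)).astype(float)   # 25 stored feature patterns (columns)
labels = np.eye(25)                                      # one-hot outputs to associate

U, s, Vt = np.linalg.svd(patterns, full_matrices=False)
s_mod = np.where(s > 0.1 * s.max(), s, 0.1 * s.max())    # modify (floor) the small singular values
pinv = Vt.T @ np.diag(1.0 / s_mod) @ U.T                 # regularised pseudo-inverse
M = labels @ pinv                                        # associative memory matrix

recalled = M @ patterns[:, 3]
print(int(recalled.argmax()))                            # index of the recalled pattern (expected: 3)
```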
Spectral Analysis of Breast Cancer on Tissue Microarrays: Seeing Beyond Morphology
2005-04-01
[Fragmented excerpt; only partial citation information is recoverable.] References mentioned include: Harvey N., Szymanski J.J., Bloch J.J., Mitchell M., Investigation of image feature extraction by a genetic algorithm, Proc. SPIE 1999;3812:24-31; a study of automated feature extraction using multiple data sources, Proc. SPIE 2003;5099:190-200; and, in the appendix, Harvey, N.R., Levenson, R.M., Rimm, D.L. (2003), Investigation of Automated Feature Extraction Techniques for Applications in (title truncated). A section on spectral-spatial analysis of urine cytology (Angeletti et al.) is also referenced.
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification
Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
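A hedged sketch of the LP-SVD idea as described: linear-prediction coefficients are estimated for a synthetic EEG segment, the impulse response matrix of the LP coefficient filter is built, and its left singular vectors are used to transform the segment. The model order and frame length are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                         # stand-in EEG segment

order = 6                                            # LP model order (assumption)
X = toeplitz(x[order - 1:-1], x[order - 1::-1])      # lagged data matrix
a = np.linalg.lstsq(X, x[order:], rcond=None)[0]     # LP coefficients via least squares

imp = np.zeros(64)
imp[0] = 1.0
impulse = lfilter([1.0], np.r_[1.0, -a], imp)        # impulse response of the LP filter
H = toeplitz(impulse, np.zeros(order))               # 64 x order impulse response matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)     # left singular vectors define the mapping

features = U.T @ x[:64]                              # project one frame onto the transform
print(features.shape)                                # low-dimensional transformed representation
```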
Thomaz, Ricardo de Lima; Carneiro, Pedro Cunha; Bonin, João Eliton; Macedo, Túlio Augusto Alves; Patrocinio, Ana Claudia; Soares, Alcimar Barbosa
2018-05-01
Detection of early hepatocellular carcinoma (HCC) can increase survival rates by up to 40%. One-class classifiers can be used for modeling early HCC in multidetector computed tomography (MDCT), but they demand specific knowledge of the set of features that best describes the target class. Although the literature outlines several features for characterizing liver lesions, it is unclear which are most relevant for describing early HCC. In this paper, we introduce an unconstrained GA feature selection algorithm based on a multi-objective Mahalanobis fitness function to improve the classification performance for early HCC. We compared our approach to a constrained Mahalanobis function and two other unconstrained functions using Welch's t-test and Gaussian Data Descriptors. The performance of each fitness function was evaluated by cross-validating a one-class SVM. The results show that the proposed multi-objective Mahalanobis fitness function is capable of significantly reducing data dimensionality (96.4%) and improving one-class classification of early HCC (0.84 AUC). Furthermore, the results provide strong evidence that intensity features extracted at the arterial-to-portal and arterial-to-equilibrium phases are important for classifying early HCC.
Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming
2015-01-01
Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold-value segmentation. It is difficult, however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and better performance of the proposed method for urban main road feature extraction. PMID:26397832
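A minimal sketch of the directional-morphology step is given below, as an illustration of the general technique rather than the authors' implementation; the kernel length, the four orientations, and the use of a supremum of directional openings (one classical variant for enhancing bright elongated structures) are assumptions.

```python
import numpy as np
from scipy import ndimage

def directional_enhance(gray, length=15):
    """Per-pixel max of grayscale openings with line footprints at 0/45/90/135 degrees."""
    footprints = [
        np.ones((1, length), bool),             # horizontal line
        np.ones((length, 1), bool),             # vertical line
        np.eye(length, dtype=bool),             # 45-degree line
        np.fliplr(np.eye(length, dtype=bool)),  # 135-degree line
    ]
    openings = [ndimage.grey_opening(gray, footprint=fp) for fp in footprints]
    return np.max(openings, axis=0)             # keeps structures elongated in any direction

# usage with a synthetic image containing a bright diagonal "road"
img = np.zeros((100, 100))
idx = np.arange(100)
img[idx, idx] = 1.0
enhanced = directional_enhance(img)
print(enhanced.shape)
```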
Automation of lidar-based hydrologic feature extraction workflows using GIS
NASA Astrophysics Data System (ADS)
Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.
2016-10-01
With the advent of LiDAR technology, higher resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is in resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows that can take researchers a long time to execute and supervise manually. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction, specifically Streams and Drainages, Irrigation Network, and Inland Wetlands, using LiDAR datasets. These scripts are written in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently being used as an aid for researchers in hydrologic feature extraction by simplifying the workflows, eliminating human errors when providing the inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of the different workflows developed by Phil-LiDAR 2 Project 4 for Streams, Irrigation Network and Inland Wetlands extraction.
Laboratory Spectroscopy of Ices of Astrophysical Interest
NASA Technical Reports Server (NTRS)
Hudson, Reggie; Moore, M. H.
2011-01-01
Ongoing and future NASA and ESA astronomy missions need detailed information on the spectra of a variety of molecular ices to help establish the identity and abundances of molecules observed in astronomical data. Examples of condensed-phase molecules already detected on cold surfaces include H2O, CO, CO2, N2, NH3, CH4, SO2, O2, and O3. In addition, strong evidence exists for the solid-phase nitriles HCN, HC3N, and C2N2 in Titan's atmosphere. The wavelength region over which these identifications have been made is roughly 0.5 to 100 micron. Searches for additional features of complex carbon-containing species are in progress. Existing and future observations often impose special requirements on the information that comes from the laboratory. For example, the measurement of spectra, determination of integrated band strengths, and extraction of complex refractive indices of ices (and icy mixtures) in both amorphous and crystalline phases at relevant temperatures are all important tasks. In addition, the determination of the index of refraction of amorphous and crystalline ices in the visible region is essential for the extraction of infrared optical constants. Similarly, the measurement of spectra of ions and molecules embedded in relevant ices is important. This laboratory review will examine some of the existing experimental work and capabilities in these areas along with what more may be needed to meet current and future NASA and ESA planetary needs.
Feature Extraction and Selection Strategies for Automated Target Recognition
NASA Technical Reports Server (NTRS)
Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2010-01-01
Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
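The PCA/ICA-plus-classifier strategy can be sketched compactly with scikit-learn; this is a generic illustration of the approach described, not JPL's ATR code, and the component counts, SVM kernel, and toy data dimensions are placeholder choices.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# toy ROI chips flattened to vectors: 200 samples of 32x32 pixels, binary target/clutter labels
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32 * 32))
y = rng.integers(0, 2, 200)

pca_svm = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
ica_svm = make_pipeline(FastICA(n_components=20, max_iter=1000), SVC(kernel="rbf"))

for name, model in [("PCA+SVM", pca_svm), ("ICA+SVM", ica_svm)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")  # chance-level on random data; real ROI chips would differ
```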
Feature extraction and selection strategies for automated target recognition
NASA Astrophysics Data System (ADS)
Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2010-04-01
Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
He, Man; Huang, Lijin; Zhao, Bingshan; Chen, Beibei; Hu, Bin
2017-06-22
For the determination of trace elements and their species in various real samples by inductively coupled plasma mass spectrometry (ICP-MS), solid phase extraction (SPE) is a commonly used sample pretreatment technique to remove complex matrix, pre-concentrate target analytes and make the samples suitable for subsequent sample introduction and measurements. The sensitivity, selectivity/anti-interference ability, sample throughput and application potential of the SPE-ICP-MS methodology are greatly dependent on the SPE adsorbents. This article presents a general overview of the use of advanced functional materials (AFMs) in SPE for ICP-MS determination of trace elements and their species over the past decade. Herein the AFMs refer to materials featuring high adsorption capacity, good selectivity, fast adsorption/desorption dynamics and the ability to satisfy special requirements in real sample analysis, including nanometer-sized materials, porous materials, ion imprinting polymers, restricted access materials and magnetic materials. Carbon/silica/metal/metal oxide nanometer-sized adsorbents with high surface area and plenty of adsorption sites exhibit high adsorption capacity, and porous adsorbents provide more adsorption sites and faster adsorption dynamics. The selectivity of the materials for target elements/species can be improved by physical/chemical modification, ion imprinting and restricted access techniques. Magnetic adsorbents in conventional batch operation offer a unique magnetic response and high surface area-to-volume ratio, which provide very easy phase separation and greater extraction capacity and efficiency than conventional adsorbents, and chip-based magnetic SPE provides a versatile platform for special requirements (e.g., cell analysis). The performance of these adsorbents for the determination of trace elements and their species in different matrices by ICP-MS is discussed in detail, along with perspectives and possible challenges in future development. Copyright © 2017 Elsevier B.V. All rights reserved.
Ensemble methods with simple features for document zone classification
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing
2012-01-01
Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via feature extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classification of these blocks. We present a comparative evaluation of three ensemble-based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
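As a rough counterpart to the ensemble comparison described above, the snippet below contrasts bagging and boosting over decision trees on placeholder zone-feature vectors; it is a generic scikit-learn sketch with assumed feature dimensions and class labels, not the authors' feature set or data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# placeholder zone descriptors: 500 zones x 30 features, 5 classes
# (text, handwriting, graphics, image, noise)
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 30))
y = rng.integers(0, 5, 500)

models = {
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "boosting": AdaBoostClassifier(n_estimators=50),
}
for name, clf in models.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```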
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
Manual feature extraction in traditional vehicle license plate recognition methods is not robust to diverse appearance changes, and the high dimensionality of features extracted with the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, which is a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensionality of the extracted features. Finally, a Support Vector Machine (SVM) is trained to recognize the reduced-dimensionality features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and runtime; compared with omitting compressive sensing, the proposed method works with a lower feature dimension and is therefore more efficient.
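The dimensionality-reduction step can be illustrated with a very sparse random projection (one family of matrices commonly associated with RIP-style guarantees) followed by an SVM; this is a hedged scikit-learn sketch with assumed feature sizes and class counts, not the PCANet pipeline itself.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import SVC

# stand-in for high-dimensional PCANet character features: 1000 samples x 4096 dims, 36 classes
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4096))
y = rng.integers(0, 36, 1000)

# density="auto" gives a very sparse measurement matrix (non-zero density 1/sqrt(n_features))
model = make_pipeline(SparseRandomProjection(n_components=256, density="auto"),
                      SVC(kernel="linear"))
print(cross_val_score(model, X, y, cv=3).mean())
```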
Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.
Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa
2017-03-01
Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to obtain high efficiency. The goal of this paper is to perform the selection of 3D image features of margin sharpness and texture that can be relevant for the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision compared to all 48 extracted features on similar nodule retrieval. Feature space dimensionality reduction of 83% yielded higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
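The second-order texture features mentioned above can be computed from a grey-level co-occurrence matrix. The following is a small illustrative sketch using scikit-image on a single 2D slice; the original work used 3D nodule volumes, and the chosen quantization, distances, angles, and properties here are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(slice_2d, levels=32):
    """Co-occurrence texture descriptors for one quantized nodule slice."""
    bins = np.linspace(slice_2d.min(), slice_2d.max(), levels)
    q = (np.digitize(slice_2d, bins) - 1).astype(np.uint8)      # grey levels 0..levels-1
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

rng = np.random.default_rng(0)
print(glcm_texture_features(rng.random((64, 64))))
```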
Chinese character recognition based on Gabor feature extraction and CNN
NASA Astrophysics Data System (ADS)
Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan
2018-03-01
As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition remains difficult. To solve this problem, this paper proposes a method of printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the results of the convolution form the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by Dropout. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
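The eight-orientation Gabor step can be sketched with OpenCV as below. This is a generic illustration of the Gabor feature-map extraction described; the filter size, wavelength, and other kernel parameters are placeholder values, not those of the paper.

```python
import numpy as np
import cv2

def gabor_feature_maps(gray, n_orientations=8, ksize=15):
    """Convolve a normalized character image with Gabor kernels at several orientations."""
    maps = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        # args: kernel size, sigma, theta, wavelength, aspect ratio, phase offset
        kernel = cv2.getGaborKernel((ksize, ksize), 3.0, theta, 8.0, 0.5, 0.0)
        maps.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.stack(maps, axis=0)   # (n_orientations, H, W) feature maps fed to the CNN

char_img = np.random.rand(64, 64).astype(np.float32)  # stand-in for a binarized character
print(gabor_feature_maps(char_img).shape)             # (8, 64, 64)
```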
Human listening studies reveal insights into object features extracted by echolocating dolphins
NASA Astrophysics Data System (ADS)
Delong, Caroline M.; Au, Whitlow W. L.; Roitblat, Herbert L.
2004-05-01
Echolocating dolphins extract object feature information from the acoustic parameters of object echoes. However, little is known about which object features are salient to dolphins or how they extract those features. To gain insight into how dolphins might be extracting feature information, human listeners were presented with echoes from objects used in a dolphin echoic-visual cross-modal matching task. Human participants performed a task similar to the one the dolphin had performed; however, echoic samples consisting of 23-echo trains were presented via headphones. The participants listened to the echoic sample and then visually selected the correct object from among three alternatives. The participants performed as well as or better than the dolphin (M=88.0% correct), and reported using a combination of acoustic cues to extract object features (e.g., loudness, pitch, timbre). Participants frequently reported using the pattern of aural changes in the echoes across the echo train to identify the shape and structure of the objects (e.g., peaks in loudness or pitch). It is likely that dolphins also attend to the pattern of changes across echoes as objects are echolocated from different angles.
Qin, Benlin; Liu, Xuecong; Cui, Haiming; Ma, Yue; Wang, Zimin; Han, Jing
2017-10-21
In this study, an efficient ultrasound-assisted aqueous two-phase extraction method was used for the extraction of anthocyanins from Lycium ruthenicum Murr. An ethanol/ammonium sulfate system was chosen for the aqueous two-phase system due to its fine partitioning and recycling behaviors. Single-factor experiments were conducted to determine the optimized composition of the system, and the response surface methodology was used for the further optimization of the ultrasound-assisted aqueous two-phase extraction. The optimal conditions were as follows: a salt concentration of 20%, an ethanol concentration of 25%, an extraction time of 33.7 min, an extraction temperature of 25°C, a liquid/solid ratio of 50:1 w/w, pH value of 3.98, and an ultrasound power of 600 W. Under the above conditions, the yields of anthocyanins reached 4.71 mg/g dry sample. For the further purification, D-101 resin was used, and the purity of anthocyanins reached 25.3%. In conclusion, ultrasound-assisted aqueous two-phase extraction was an efficient, ecofriendly, and economical method, and it may be a promising technique for extracting bioactive components from plants.
Zhamanbaeva, G T; Murzakhmetova, M K; Tuleukhanov, S T; Danilenko, M P
2014-12-01
We studied the effects of ethanol extract from Hippophae rhamnoides L. leaves on the growth and differentiation of human acute myeloid leukemia cells (KG-1a, HL60, and U937). The extract of Hippophae rhamnoides L. leaves inhibited cell growth depending on the cell strain and extract dose. In a high concentration (100 μg/ml), the extract also exhibited a cytotoxic effect on HL60 cells. Hippophae rhamnoides L. leaves extract did not affect cell differentiation and did not modify the differentiating effect of calcitriol, active vitamin D metabolite. Inhibition of cell proliferation was paralleled by paradoxical accumulation of phase S cells (synthetic phase) with a reciprocal decrease in the count of G1 cells (presynthetic phase). The extract in a concentration of 100 μg/ml induced the appearance of cells with a subdiploid DNA content (sub-G1 phase cells), which indicated induction of apoptosis. The antiproliferative effect of Hippophae rhamnoides L. extract on acute myeloid leukemia cells was at least partially determined by activation of the S phase checkpoint, which probably led to deceleration of the cell cycle and apoptosis induction.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
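For orientation, the linear special case that GerDA generalizes can be written in a few lines with scikit-learn. This baseline sketch (toy digits data, default solver) is not the GerDA DNN itself, only the classical LDA projection whose class-conditional Gaussian assumptions are discussed above.

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# classical LDA yields at most (n_classes - 1) discriminative features
X, y = load_digits(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=9)
Z = lda.fit_transform(X, y)          # 64-dim pixel vectors -> 9 discriminant features
print(Z.shape, lda.score(X, y))      # (1797, 9), training accuracy of the linear rule
```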
NASA Astrophysics Data System (ADS)
Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent
2017-03-01
Quantitative imaging biomarkers are used widely in clinical trials for tracking and evaluation of medical interventions. Previously, we have presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features, such as stroke lesion characteristics, from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.
Feature extraction from multiple data sources using genetic programming
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.
2002-08-01
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
Smith, Lori L; Francis, Kyle A; Johnson, Joseph T; Gaskill, Cynthia L
2017-11-01
Pre-column derivatization with 9-fluorenylmethyl chloroformate (FMOC-Cl) was determined to be effective for quantitation of fumonisins B1 and B2 in feed. Liquid-solid extraction, clean-up using immunoaffinity solid phase extraction chromatography, and FMOC derivatization preceded analysis by reverse phase HPLC with fluorescence. Instrument response was unchanged in the presence of matrix, indicating no need to use matrix-matched calibrants. Furthermore, high method recoveries indicated calibrants do not need to undergo clean-up to account for analyte loss. Established method features include linear instrument response from 0.04-2.5 μg/mL and stable derivatized calibrants over 7 days. Fortified cornmeal method recoveries from 0.1-30.0 μg/g were determined for FB1 (75.1%-109%) and FB2 (96.0%-115.2%). Inter-assay precision ranged from 1.0%-16.7%. Method accuracy was further confirmed using certified reference material. Inter-laboratory comparison with naturally-contaminated field corn demonstrated equivalent results with conventional derivatization. These results indicate FMOC derivatization is a suitable alternative for quantitation of fumonisins B1 and B2 in corn-based feeds. Copyright © 2017 Elsevier Ltd. All rights reserved.
Simulation of TanDEM-X interferograms for urban change detection
NASA Astrophysics Data System (ADS)
Welte, Amelie; Hammer, Horst; Thiele, Antje; Hinz, Stefan
2017-10-01
Damage detection after natural disasters is one of the remote sensing tasks in which Synthetic Aperture Radar (SAR) sensors play an important role. Since SAR is an active sensor, it can record images at all times of day and in all weather conditions, making it ideally suited for this task. While with the newer generation of SAR satellites such as TerraSAR-X or COSMO-SkyMed amplitude change detection has become possible even for urban areas, interferometric phase change detection has not been published widely. This is mainly because of the long revisit times of common SAR sensors, which lead to temporal decorrelation. This situation has changed dramatically with the advent of the TanDEM-X constellation, which can create single-pass interferograms from space at very high resolutions, avoiding temporal decorrelation almost completely. In this paper the basic structures that are present for any building in InSAR phases, i.e. layover, shadow, and roof areas, are examined. Approaches for their extraction from TanDEM-X interferograms are developed using simulated SAR interferograms. The extracted features of the building signature will in the future be used for urban change detection in real TanDEM-X High Resolution Spotlight interferograms.
Chen, Hsiu-Liang; Chang, Shuo-Kai; Lee, Chia-Ying; Chuang, Li-Lin; Wei, Guor-Tzo
2012-09-12
In this study, we employed the room-temperature ionic liquid [bmim][PF6] as both ion-pair agent and extractant in the phase-transfer liquid-phase microextraction (PTLPME) of aqueous dyes. In the PTLPME method, a dye solution was added to the extraction solution, comprising a small amount of [bmim][PF6] in a relatively large amount of CH2Cl2, which serves as the disperser solvent. Following extraction, CH2Cl2 was evaporated from the extractant, resulting in the extracted dyes being concentrated in a small volume of the ionic liquid phase to increase the enrichment factor. The enrichment factors for the dyes Methylene Blue, Neutral Red, and Methyl Red were approximately 500, 550 and 400, respectively; their detection limits were 0.014, 0.43, and 0.02 μg L⁻¹, respectively, with relative standard deviations of 4.72%, 4.20%, and 6.10%, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
Supporting the Growing Needs of the GIS Industry
NASA Technical Reports Server (NTRS)
2003-01-01
Visual Learning Systems, Inc. (VLS), of Missoula, Montana, has developed a commercial software application called Feature Analyst. Feature Analyst was conceived under a Small Business Innovation Research (SBIR) contract with NASA's Stennis Space Center, and through the Montana State University TechLink Center, an organization funded by NASA and the U.S. Department of Defense to link regional companies with Federal laboratories for joint research and technology transfer. The software provides a paradigm shift to automated feature extraction, as it utilizes spectral, spatial, temporal, and ancillary information to model the feature extraction process; presents the ability to remove clutter; incorporates advanced machine learning techniques to supply unparalleled levels of accuracy; and includes an exceedingly simple interface for feature extraction.
Behrens, Beate; Engelen, Jeannine; Tiso, Till; Blank, Lars Mathias; Hayen, Heiko
2016-04-01
Rhamnolipids are surface-active agents with a broad application potential that are produced in complex mixtures by bacteria of the genus Pseudomonas. Analysis from fermentation broth is often characterized by laborious sample preparation and requires hyphenated analytical techniques like liquid chromatography coupled to mass spectrometry (LC-MS) to obtain detailed information about sample composition. In this study, an analytical procedure based on chromatographic method development and characterization of rhamnolipid sample material by LC-MS as well as a comparison of two sample preparation methods, i.e., liquid-liquid extraction and solid-phase extraction, is presented. Efficient separation was achieved under reversed-phase conditions using a mixed propylphenyl and octadecylsilyl-modified silica gel stationary phase. LC-MS/MS analysis of a supernatant from Pseudomonas putida strain KT2440 pVLT33_rhlABC grown on glucose as sole carbon source and purified by solid-phase extraction revealed a total of 20 congeners of di-rhamnolipids, mono-rhamnolipids, and their biosynthetic precursors 3-(3-hydroxyalkanoyloxy)alkanoic acids (HAAs) with different carbon chain lengths from C8 to C14, including three rhamnolipids with uncommon C9 and C11 fatty acid residues. LC-MS and the orcinol assay were used to evaluate the developed solid-phase extraction method in comparison with the established liquid-liquid extraction. Solid-phase extraction exhibited higher yields and reproducibility as well as lower experimental effort.
Geng, Ping; Fang, Yingtong; Xie, Ronglong; Hu, Weilun; Xi, Xingjun; Chu, Qiao; Dong, Genlai; Shaheen, Nusrat; Wei, Yun
2017-02-01
Sugarcane rind contains some functional phenolic acids. The separation of these compounds from sugarcane rind is able to realize the integrated utilization of the crop and reduce environment pollution. In this paper, a novel protocol based on interfacing online solid-phase extraction with high-speed counter-current chromatography (HSCCC) was established, aiming at improving and simplifying the process of phenolic acids separation from sugarcane rind. The conditions of online solid-phase extraction with HSCCC involving solvent system, flow rate of mobile phase as well as saturated extent of absorption of solid-phase extraction were optimized to improve extraction efficiency and reduce separation time. The separation of phenolic acids was performed with a two-phase solvent system composed of butanol/acetic acid/water at a volume ratio of 4:1:5, and the developed online solid-phase extraction with HSCCC method was validated and successfully applied for sugarcane rind, and three phenolic acids including 6.73 mg of gallic acid, 10.85 mg of p-coumaric acid, and 2.78 mg of ferulic acid with purities of 60.2, 95.4, and 84%, respectively, were obtained from 150 mg sugarcane rind crude extracts. In addition, the three different elution methods of phenolic acids purification including HSCCC, elution-extrusion counter-current chromatography and back-extrusion counter-current chromatography were compared. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Estimating cognitive workload using wavelet entropy-based features during an arithmetic task.
Zarjam, Pega; Epps, Julien; Chen, Fang; Lovell, Nigel H
2013-12-01
Electroencephalography (EEG) has shown promise as an indicator of cognitive workload; however, precise workload estimation is an ongoing research challenge. In this investigation, seven levels of workload were induced using an arithmetic task, and the entropy of wavelet coefficients extracted from EEG signals is shown to distinguish all seven levels. For a subject-independent multi-channel classification scheme, the entropy features achieved high accuracy, up to 98% for channels from the frontal lobes, in the delta frequency band. This suggests that a smaller number of EEG channels in only one frequency band can be deployed for an effective EEG-based workload classification system. Together with analysis based on phase locking between channels, these results consistently suggest increased synchronization of neural responses for higher load levels. Copyright © 2013 Elsevier Ltd. All rights reserved.
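A compact sketch of the wavelet-entropy feature is given below: it computes the Shannon entropy of the relative energies of coefficients in each sub-band of a discrete wavelet decomposition of one EEG channel. The wavelet family, decomposition depth, and the idea of treating the deepest band as a delta-band proxy are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4", level=6):
    """Shannon entropy of the relative energy of coefficients in each wavelet sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    entropies = []
    for c in coeffs:
        p = c ** 2 / np.sum(c ** 2)           # relative energy distribution within the band
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log2(p)))
    return np.array(entropies)                # one entropy value per sub-band

rng = np.random.default_rng(0)
eeg_channel = rng.standard_normal(2048)       # stand-in for one frontal-channel epoch
print(wavelet_entropy(eeg_channel))
```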
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogacz, Alex
Baseline design of the JLEIC booster synchrotron is presented. Its aim is to inject and accumulate heavy ions and protons at 285 MeV, to accelerate them to about 7 GeV, and finally to extract the beam into the ion collider ring. The Figure-8 ring features two 2600 achromatic arcs configured with negative momentum compaction optics, designed to avoid transition crossing for all ion species during the course of acceleration. The lattice also features a specialized high-dispersion injection insert optimized to facilitate transverse phase-space painting in both planes for multi-turn ion injection. Furthermore, the lattice has been optimized to ease chromaticity correction with two families of sextupoles in each plane. The booster ring is configured with super-ferric, 3 Tesla bends. We are presently launching optimization of the booster synchrotron design to operate in the extreme space-charge dominated regime.
Karakülah, Gökhan; Dicle, Oğuz; Koşaner, Ozgün; Suner, Aslı; Birant, Çağdaş Can; Berber, Tolga; Canbek, Sezin
2014-01-01
The lack of laboratory tests for the diagnosis of most congenital anomalies makes the physical examination of the case crucial for diagnosing the anomaly, and cases in the diagnostic phase are mostly evaluated in the light of the literature. In this respect, for accurate diagnosis, it is of great importance to provide the decision maker with decision support by presenting the literature knowledge about a particular case. Here, we demonstrate a methodology for automatically scanning and determining the phenotypic features reported in case reports related to congenital anomalies in the literature, using text and natural language processing methods, and we create the framework of an information source for a potential diagnostic decision support system for congenital anomalies.
Lian, Ziru; Li, Hai-Bei; Wang, Jiangtao
2016-08-01
An innovative and effective extraction procedure based on molecularly imprinted solid-phase extraction (MISPE) was developed for the isolation of gonyautoxins 2,3 (GTX2,3) from an Alexandrium minutum sample. Molecularly imprinted polymer microspheres were prepared by suspension polymerization and were employed as sorbents for the solid-phase extraction of GTX2,3. An off-line MISPE protocol was optimized. Subsequently, the extract samples from A. minutum were analyzed. The results showed that the interference matrices in the extract were effectively cleaned up by the MISPE procedure. This enabled the direct extraction of GTX2,3 from A. minutum samples with an extraction efficiency as high as 83%, notably without any need for a clean-up step prior to the extraction. Furthermore, a computational approach also provided direct evidence of the highly selective isolation of GTX2,3 from the microalgal extracts.
The Chemistry of Separations Ligand Degradation by Organic Radical Cations
Mezyk, Stephen P.; Horne, Gregory P.; Mincher, Bruce J.; ...
2016-12-01
Solvent-based extractions of used nuclear fuel use designer ligands in an organic phase to extract ligand-complexed metal ions from an acidic aqueous phase. These extractions will be performed in highly radioactive environments, and the radiation chemistry of all these complexants and their diluents will play a major role in determining extraction efficiency, separation factors, and solvent-recycle longevity. Although there has been considerable effort in investigating ligand damage occurring under acidic water radiolysis conditions, only minimal fundamental kinetic and mechanistic data have been reported for the degradation of extraction ligands in the organic phase. Extraction solvent phases typically use normal alkanes such as dodecane, TPH, and kerosene as diluents. The radiolysis of such diluents produces a mixture of radical cations (R •+), carbon-centered radicals (R •), solvated electrons, and molecular products such as hydrogen. Typically, the radical species will preferentially react with the dissolved oxygen present to produce relatively inert peroxyl radicals. This isolates the alkane radical cation, R •+, as the major radiolytically induced organic species that can react with, and degrade, extraction agents in this phase. Here we report on our recent studies of organic radical cation reactions with various ligands. Elucidating these parameters, and combining them with the known acidic aqueous phase chemistry, will allow a full, fundamental understanding of the impact of radiation on solvent extraction based separation processes to be achieved.
Mirzaei, Mohamad; Dinpanah, Hossein
2011-07-01
In the present work, the applicability of hollow fiber-based liquid phase microextraction (HF-LPME) was evaluated for the extraction and preconcentration of valerenic acid prior to its determination by reversed-phase HPLC/UV. The target compound was extracted from 5.0 mL of aqueous solution at pH 3.5 into an organic extracting solvent (dihexyl ether) impregnated in the pores of a hollow fiber and finally back-extracted into 10 μL of aqueous solution at pH 9.5 located inside the lumen of the hollow fiber. In order to obtain high extraction efficiency, the parameters affecting the HF-LPME, including pH of the donor and acceptor phases, type of organic phase, ionic strength, the volume ratio of donor to acceptor phase, stirring rate and extraction time, were studied and optimized. Under the optimized conditions, an enrichment factor of up to 446 was achieved and the relative standard deviation (RSD) of the method was 4.36% (n = 9). The linear range was 7.5-850 μg L⁻¹ with a correlation coefficient of r²=0.999, the detection limit was 2.5 μg L⁻¹ and the LOQ was 7.5 μg L⁻¹. The proposed method was evaluated by extraction and determination of valerenic acid in some Iranian wild species of Valerianaceae. Copyright © 2011 Elsevier B.V. All rights reserved.
The Chemistry of Separations Ligand Degradation by Organic Radical Cations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezyk, Stephen P.; Horne, Gregory P.; Mincher, Bruce J.
Solvent-based extractions of used nuclear fuel use designer ligands in an organic phase to extract ligand-complexed metal ions from an acidic aqueous phase. These extractions will be performed in highly radioactive environments, and the radiation chemistry of all these complexants and their diluents will play a major role in determining extraction efficiency, separation factors, and solvent-recycle longevity. Although there has been considerable effort in investigating ligand damage occurring under acidic water radiolysis conditions, only minimal fundamental kinetic and mechanistic data have been reported for the degradation of extraction ligands in the organic phase. Extraction solvent phases typically use normal alkanes such as dodecane, TPH, and kerosene as diluents. The radiolysis of such diluents produces a mixture of radical cations (R •+), carbon-centered radicals (R •), solvated electrons, and molecular products such as hydrogen. Typically, the radical species will preferentially react with the dissolved oxygen present to produce relatively inert peroxyl radicals. This isolates the alkane radical cation, R •+, as the major radiolytically induced organic species that can react with, and degrade, extraction agents in this phase. Here we report on our recent studies of organic radical cation reactions with various ligands. Elucidating these parameters, and combining them with the known acidic aqueous phase chemistry, will allow a full, fundamental understanding of the impact of radiation on solvent extraction based separation processes to be achieved.
Composite, ordered material having sharp surface features
D'Urso, Brian R.; Simpson, John T.
2006-12-19
A composite material having sharp surface features includes a recessive phase and a protrusive phase, the recessive phase having a higher susceptibility to a preselected etchant than the protrusive phase, the composite material having an etched surface wherein the protrusive phase protrudes from the surface to form a sharp surface feature. The sharp surface features can be coated to make the surface super-hydrophobic.
Intercomparison of Carbonate Deposits on Mars: VNIR Spectral Character and Geologic Context
NASA Astrophysics Data System (ADS)
Wiseman, S.; Mustard, J. F.; Ehlmann, B. L.
2012-12-01
Carbonate-bearing deposits were identified on Mars at multiple locations using CRISM VNIR spectral data [1,2,3,4,5]. Carbonates exhibit distinctive C-O related absorption features near 2300, 2500, 3400 and 3900nm that can be used to identify specific carbonate phases (e.g., Mg-carbonates have band minima at 2300/2500nm and Fe-carbonates have minima at 2330/2530nm [6]). The features at 2300 and 2500nm are the focus of most CRISM analyses because this part of the spectral range is well calibrated, lacks strong contributions from thermal emission, and is not impacted by strong water-related absorptions near 3000nm (e.g., in Fe/Mg phyllosilicates). However, multiple other phases also exhibit features near 2300 and 2500nm. For carbonates, the depth of the 2500nm feature is stronger than that at 2300nm, as opposed to most Fe/Mg phyllosilicates. Mixing of the carbonate with other phases in CRISM pixels impacts the band centers and strengths of the 2300 and 2500nm features and therefore complicates identification of the carbonate phase(s) responsible for observed CRISM spectral features. In this study we analyze CRISM data fully corrected for the atmosphere using DISORT radiative transfer modeling [7,8] to evaluate CRISM spectra of multiple carbonate-bearing deposits. Rigorous intercomparison of CRISM spectra extracted from different images is affected by variable aerosol, CO2 and water vapor features left by the standard volcano scan empirical atmospheric correction [9]. While residual gas absorptions are commonly suppressed by ratioing, the appearance of spectral features in ratio spectra is impacted by spectral features in the denominator spectrum, compromising detailed assessments of ratio spectra derived from different images. Atmospheric correction is particularly important for interpreting carbonate deposits because the 2500nm carbonate feature overlaps with atmospheric water vapor absorptions. In Nili Fossae, carbonates occur in association with olivine, smectite, serpentine [1,10], and possibly talc [11]. These carbonates are hypothesized to have formed via alteration of olivine and/or serpentine under surface or low-temperature hydrothermal conditions [1,11,12]. Laboratory spectra of Mg carbonates (magnesite/hydromagnesite) are the closest matches to the Nili Fossae carbonates [1]. CRISM spectra of carbonates in and around Huygens basin are interpreted to be Fe and/or Ca carbonates [3], similar to carbonate spectra described by [2]. However, the CRISM carbonate-bearing spectra are mixed with Fe/Mg phyllosilicates [1,2,3], making a one-to-one comparison among Martian and laboratory carbonate spectra challenging. [1] Ehlmann et al. (2008), Sci., 322, 1828-1831, [2] Michalski and Niles (2010), Nat. Geo., 3, 751-55, [3] Wray et al. (2011), LPSC, #2635, [4] Bishop et al. (2012), LPSC, #2330, [5] Carter and Poulet (2012), Icarus, [6] Gaffey (1987), JGR, 92, 1429-1440, [7] Stamnes et al. (1999), Appl. Opt., 27, 2502-2509, [8] Wolff et al. (2009), JGR, 11, [9] Wiseman et al. (2010), LPSC, #2461, [10] Ehlmann et al. (2010), GRL, 37, [11] Brown et al. (2010), EPSL, 297, 174-182, [12] Ehlmann et al. (2009), JGR, 114.
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
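The object-linking step described above, in which features in adjacent 2D slices are joined when they fall within a threshold radius, can be sketched as a simple directed-graph construction. The data layout and radius value below are illustrative assumptions, not the system's actual data structures.

```python
import numpy as np

def link_slices(detections, radius=5.0):
    """Link per-slice feature centroids into directed chains across adjacent slices.

    detections : list over slices; each entry is an (n_i, 2) array of (x, y) centroids.
    Returns a list of directed edges ((slice, index) -> (slice + 1, index)).
    """
    edges = []
    for z in range(len(detections) - 1):
        cur, nxt = detections[z], detections[z + 1]
        for i, p in enumerate(cur):
            if len(nxt) == 0:
                continue
            d = np.linalg.norm(nxt - p, axis=1)
            j = int(np.argmin(d))
            if d[j] <= radius:
                edges.append(((z, i), (z + 1, j)))   # directed edge to the nearest match
    return edges

# toy example: a "pipe" drifting slowly across three slices plus an unrelated blob
slices = [np.array([[10.0, 10.0], [40.0, 5.0]]),
          np.array([[11.0, 10.5]]),
          np.array([[12.0, 11.0]])]
print(link_slices(slices))   # [((0, 0), (1, 0)), ((1, 0), (2, 0))]
```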
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-10-20
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
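A minimal sketch of the multi-domain idea follows: frequency-domain features from a discrete wavelet transform are concatenated with simple time-domain statistics of each beat and fed to an SVM. It only illustrates the feature-concatenation pattern; the kernel-independent component analysis stage and the genetic-algorithm tuning of the SVM described above are omitted, and all dimensions and parameters are placeholder values.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def beat_features(beat, wavelet="db6", level=4):
    """Concatenate wavelet sub-band energies with basic time-domain statistics."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    freq_feats = [float(np.sum(c ** 2)) for c in coeffs]       # frequency-domain energies
    time_feats = [beat.mean(), beat.std(), beat.max(), beat.min()]
    return np.array(freq_feats + time_feats)

# placeholder beats: 300 segments of 250 samples, 4 heartbeat classes
rng = np.random.default_rng(0)
beats = rng.standard_normal((300, 250))
labels = rng.integers(0, 4, 300)
X = np.array([beat_features(b) for b in beats])
print(cross_val_score(SVC(kernel="rbf", C=10.0), X, labels, cv=5).mean())
```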
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-01-01
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596
Method for separating water soluble organics from a process stream by aqueous biphasic extraction
Chaiko, David J.; Mego, William A.
1999-01-01
A method for separating water-miscible organic species from a process stream by aqueous biphasic extraction is provided. An aqueous biphase system is generated by contacting a process stream comprised of water, salt, and organic species with an aqueous polymer solution. The organic species transfer from the salt-rich phase to the polymer-rich phase, and the phases are separated. Next, the polymer is recovered from the loaded polymer phase by selectively extracting the polymer into an organic phase at an elevated temperature, while the organic species remain in a substantially salt-free aqueous solution. Alternatively, the polymer is recovered from the loaded polymer by a temperature induced phase separation (cloud point extraction), whereby the polymer and the organic species separate into two distinct solutions. The method for separating water-miscible organic species is applicable to the treatment of industrial wastewater streams, including the extraction and recovery of complexed metal ions from salt solutions, organic contaminants from mineral processing streams, and colorants from spent dye baths.
Loconto, Paul R; Isenga, David; O'Keefe, Michael; Knottnerus, Mark
2008-01-01
Polybrominated diphenyl ethers (PBDEs) are isolated and recovered with acceptable percent recoveries from human serum via liquid-liquid extraction and column chromatographic cleanup and fractionation with quantitation using capillary gas chromatography-mass spectrometry with electron capture negative ion and selected ion monitoring. PBDEs are found in unspiked serum. An alternative sample preparation approach is developed using sheep serum that utilizes a formic acid pre-treatment followed by reversed-phase solid-phase disk extraction and normal-phase solid-phase cleanup using acidified silica gel that yields >50% recoveries. When these percent recoveries are combined with a minimized phase ratio for human serum and very low instrument detection limits, method detection limits below 500 parts-per-trillion are realized.
Nonlinear features for product inspection
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1999-03-01
Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.
Gilbert-López, Bienvenida; García-Reyes, Juan F; Lozano, Ana; Fernández-Alba, Amadeo R; Molina-Díaz, Antonio
2010-09-24
In this work we have evaluated the performance of two sample preparation methodologies for the large-scale multiresidue analysis of pesticides in olives using liquid chromatography-electrospray tandem mass spectrometry (LC-MS/MS). The tested sample treatment methodologies were: (1) liquid-liquid partitioning with acetonitrile followed by dispersive solid-phase extraction clean-up using GCB, PSA and C18 sorbents (QuEChERS method, modified for fatty vegetables) and (2) matrix solid-phase dispersion (MSPD) using aminopropyl as sorbent material and a final clean-up performed in the elution step using Florisil. An LC-MS/MS method covering 104 multiclass pesticides was developed to examine the performance of these two protocols. The separation of the compounds from the olive extracts was achieved using a short C18 column (50 mm x 4.6 mm i.d.) with 1.8 μm particle size. The identification and confirmation of the compounds was based on retention time matching along with the presence (and ratio) of two typical MRM transitions. Limits of detection obtained were lower than 10 μg kg⁻¹ for 89% of the analytes using both sample treatment protocols. Recovery studies performed on olive samples spiked at two concentration levels (10 and 100 μg kg⁻¹) yielded average recoveries in the range 70-120% for most analytes when the QuEChERS procedure was employed. When MSPD was the choice for sample extraction, recoveries obtained were in the range 50-70% for most target compounds. The proposed methods were successfully applied to the analysis of real olive samples, revealing the presence of some of the target species in the μg kg⁻¹ range. Besides the evaluation of the sample preparation approaches, we also discuss the use of advanced software features associated with MRM method development that overcome several limitations and drawbacks of MS/MS methods (time segment boundaries, tedious method development/manual scheduling, and acquisition limitations). This software feature, recently offered by different vendors, is based on an algorithm that associates retention time data with each individual MS/MS transition, so that the number of simultaneously traced transitions throughout the entire chromatographic run (dwell times and sensitivity) is maximized. Copyright 2010 Elsevier B.V. All rights reserved.
2D DOST based local phase pattern for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as a preliminary preprocessing step and the local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique among TFRs. Utilizing the 2-D S-transform as preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternative pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested using the Yale and Extended Yale facial databases under different environments such as illumination variation and 3D changes in facial expressions. Test results show that the proposed technique yields better performance compared to alternative time-frequency representation (TFR) based face recognition techniques.
Automated detection of videotaped neonatal seizures of epileptic origin.
Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-06-01
This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity>90%. For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity>90% and specificity>95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity>95% and specificity>95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.
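A minimal sketch of the motion-strength idea described above, assuming OpenCV is available; the Farneback dense optical flow and the per-frame mean of the flow magnitude are illustrative choices, not the authors' exact motion-segmentation pipeline.

```python
# Sketch: one motion-strength value per frame pair from dense optical flow.
import cv2
import numpy as np

def motion_strength_signal(video_path):
    """Return the mean optical-flow magnitude for each consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return np.array([])
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback method).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        signal.append(magnitude.mean())
        prev_gray = gray
    cap.release()
    return np.array(signal)
```

Such a temporal signal could then be analyzed in the time and frequency domains, as the abstract describes, before classification.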
Feature extraction inspired by V1 in visual cortex
NASA Astrophysics Data System (ADS)
Lv, Chao; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Xin, Peng; Zhu, Mingning; Ma, Hongqiang
2018-04-01
Target feature extraction plays an important role in pattern recognition. It is the most complicated activity in the brain mechanism of biological vision. Inspired by the strong ability of the primary visual cortex (V1) to extract dynamic and static features, a visual perception model is proposed. Firstly, 28 spatial-temporal filters with different orientations, a half-squaring operation and divisive normalization were adopted to obtain the responses of V1 simple cells; then, an adjustable parameter was added to the output weight so that the response of complex cells was obtained. Experimental results indicate that the proposed V1 model can perceive motion information well. Besides, it has a good edge detection capability. The model inspired by V1 has good performance in feature extraction and effectively combines brain-inspired intelligence with computer vision.
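A minimal sketch of the half-squaring and divisive-normalization stages mentioned above, assuming the filter responses are held in a NumPy array; the 28 spatial-temporal filters themselves and the semisaturation constant are not specified in the abstract and are assumptions here.

```python
# Sketch: half-squaring rectification followed by divisive normalization
# across the filter bank (shape: n_filters x height x width).
import numpy as np

def half_squaring(responses):
    """Clip negative responses to zero, then square."""
    return np.maximum(responses, 0.0) ** 2

def divisive_normalization(responses, sigma=0.1):
    """Divide each filter response by the pooled energy across all filters."""
    pooled = responses.sum(axis=0, keepdims=True)
    return responses / (sigma ** 2 + pooled)

# Example with random "simple cell" responses from 28 spatial-temporal filters.
simple = np.random.randn(28, 64, 64)
complex_like = divisive_normalization(half_squaring(simple))
```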
Lee, Scott J; Zea, Ryan; Kim, David H; Lubner, Meghan G; Deming, Dustin A; Pickhardt, Perry J
2018-04-01
To determine if identifiable hepatic textural features are present at abdominal CT in patients with colorectal cancer (CRC) prior to the development of CT-detectable hepatic metastases. Four filtration-histogram texture features (standard deviation, skewness, entropy and kurtosis) were extracted from the liver parenchyma on portal venous phase CT images at staging and post-treatment surveillance. Surveillance scans corresponded to the last scan prior to the development of CT-detectable CRC liver metastases in 29 patients (median time interval, 6 months), and these were compared with interval-matched surveillance scans in 60 CRC patients who did not develop liver metastases. Predictive models of liver metastasis-free survival and overall survival were built using regularised Cox proportional hazards regression. Texture features did not significantly differ between cases and controls. For Cox models using all features as predictors, all coefficients were shrunk to zero, suggesting no association between any CT texture features and outcomes. Prognostic indices derived from entropy features at surveillance CT incorrectly classified patients into risk groups for future liver metastases (p < 0.001). On surveillance CT scans immediately prior to the development of CRC liver metastases, we found no evidence suggesting that changes in identifiable hepatic texture features were predictive of their development. • No correlation between liver texture features and metastasis-free survival was observed. • Liver texture features incorrectly classified patients into risk groups for liver metastases. • Standardised texture analysis workflows need to be developed to improve research reproducibility.
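A hedged sketch of the four filtration-histogram statistics named above (standard deviation, skewness, entropy, kurtosis); the Laplacian-of-Gaussian filter scale, bin count and ROI handling are assumptions, not the study's protocol.

```python
# Sketch: first-order texture statistics of a filtered liver ROI.
import numpy as np
from scipy import ndimage, stats

def texture_features(roi, sigma=2.0, bins=64):
    """Standard deviation, skewness, kurtosis and histogram entropy of a
    Laplacian-of-Gaussian filtered region of interest."""
    filtered = ndimage.gaussian_laplace(np.asarray(roi, dtype=float), sigma=sigma)
    values = filtered.ravel()
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return {
        "std": values.std(),
        "skewness": stats.skew(values),
        "kurtosis": stats.kurtosis(values),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```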
Variogram-based feature extraction for neural network recognition of logos
NASA Astrophysics Data System (ADS)
Pham, Tuan D.
2003-03-01
This paper presents a new approach for extracting spatial features of images based on the theory of regionalized variables. These features can be effectively used for automatic recognition of logo images using neural networks. Experimental results on a public-domain logo database show the effectiveness of the proposed approach.
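A minimal sketch of an empirical semivariogram, one simple instance of the regionalized-variables idea; the horizontal-lag-only formulation and the maximum lag are assumptions, and the resulting vector would feed the neural classifier described above.

```python
# Sketch: empirical semivariogram of an image along the horizontal direction.
import numpy as np

def horizontal_semivariogram(image, max_lag=10):
    """gamma(h) = 0.5 * mean (Z(x+h) - Z(x))^2 for lags h = 1..max_lag."""
    z = np.asarray(image, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        diff = z[:, h:] - z[:, :-h]
        gammas.append(0.5 * np.mean(diff ** 2))
    return np.array(gammas)  # spatial feature vector for the neural network
```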
Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin
2017-09-16
In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF₂) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulation and real signals experiment demonstrate the validity and effectiveness of the new approach.
Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin
2017-01-01
In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF2) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulation and real signals experiment demonstrate the validity and effectiveness of the new approach. PMID:28926953
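A hedged sketch of the local-energy feature and SVM stages; the EWT preprocessing is omitted, and the sampling rate, fundamental frequency and synthetic training data are assumptions rather than the paper's configuration.

```python
# Sketch: fundamental-frequency local energy from one post-fault cycle of
# three-phase voltages, used to train an SVM fault classifier.
import numpy as np
from sklearn.svm import SVC

FS, F0 = 10_000, 50          # assumed sampling rate (Hz) and fundamental (Hz)
N = FS // F0                 # samples in one fundamental period

def local_energy_features(vabc):
    """vabc: array of shape (3, N) holding one cycle of the three phase voltages."""
    feats = []
    for v in vabc:
        spectrum = np.fft.rfft(v)
        fundamental = spectrum[1]   # bin 1 corresponds to F0 for a one-cycle window
        feats.append(np.abs(fundamental) ** 2 / N)
    return np.array(feats)

# Toy training set: random three-phase cycles with integer fault labels 0..9.
X = np.array([local_energy_features(np.random.randn(3, N)) for _ in range(200)])
y = np.random.randint(0, 10, size=200)
clf = SVC(kernel="rbf").fit(X, y)
```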
Physically incorporated extraction phase of solid-phase microextraction by sol-gel technology.
Liu, Wenmin; Hu, Yuan; Zhao, Jinghong; Xu, Yuan; Guan, Yafeng
2006-01-13
A sol-gel method for the preparation of a solid-phase microextraction (SPME) fiber was described and evaluated. The extraction phase of poly(dimethylsiloxane) (PDMS) containing 3% vinyl groups was physically incorporated into the sol-gel network without chemical bonding. The extraction phase itself is then partly crosslinked at 320 degrees C, forming an independent polymer network, and can withstand a desorption temperature of 290 degrees C. The headspace extraction of BTX by the SPME fiber was evaluated and the detection limit of o-xylene was down to 0.26 ng/l. Extraction and determination of organophosphorus pesticides (OPPs) in water, orange juice and red wine by SPME-GC with a thermionic specific detector (TSD) was validated. Limits of detection of the method for OPPs were below 10 ng/l except for methidathion. Relative standard deviations (RSDs) were in the range of 1-20% for the pesticides tested.
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands all the features extracted from unimodal traits with high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional-Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
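A minimal sketch of a 2-D Gabor filter bank and histogram-based texture features in the spirit of the description above, assuming OpenCV; the kernel parameters, number of orientations and bin count are assumptions.

```python
# Sketch: Gabor filter bank and normalized energy histograms per orientation.
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
    """One Gabor kernel per orientation, evenly spread over [0, pi)."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
            for theta in np.linspace(0, np.pi, n_orient, endpoint=False)]

def energy_orientation_features(gray, bank, bins=16):
    """Histogram of response magnitudes for each filter, concatenated."""
    feats = []
    for kernel in bank:
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        hist, _ = np.histogram(np.abs(response), bins=bins)
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```

The concatenated histograms could then be reduced and classified (e.g., PCA followed by an SVM), in line with the fusion-recognition strategy described above.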
Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen
2017-01-01
The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and is a reflection of neuromuscular activity; therefore, auxiliary limb equipment can be controlled by sEMG classification in order to help physically disabled patients operate the mouse. The aim was to develop a new method to extract sEMG generated by finger motion and to apply novel features to classify sEMG. A window-based data acquisition method was presented to extract signal samples from sEMG electrodes. Afterwards, a two-dimensional matrix-image-based feature extraction method, which differs from classical methods based on the time domain or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were separately acquired. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively. In particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method, which can effectively extract the sEMG samples produced by fingers. In addition, unlike the classical methods, the new method extracts features by appropriately enlarging the energy of the sample signals. The classical machine learning classifiers all performed well using these features.
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data such as (18)F-FDG PET is widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
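A hedged sketch of the ensemble-style combination (one classifier per feature set, majority voting), with PCA and NMF standing in for the variance- and factorization-based extractors and Haralick features omitted; the data here are synthetic, and a proper evaluation would use cross-validation rather than training-set predictions.

```python
# Sketch: two feature extractors, one SVM per feature set, majority vote.
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(120, 500)))   # non-negative "voxel" features (toy)
y = rng.integers(0, 2, size=120)          # AD vs. control labels (toy)

feature_sets = [
    PCA(n_components=20).fit_transform(X),
    NMF(n_components=20, init="nndsvda", max_iter=500).fit_transform(X),
]

# One classifier per feature set; final label by majority vote
# (in practice the classifiers would be evaluated with cross-validation).
predictions = [SVC().fit(Xf, y).predict(Xf) for Xf in feature_sets]
votes = np.vstack(predictions)
majority = (votes.mean(axis=0) >= 0.5).astype(int)
```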
Changes Caused by Fruit Extracts in the Lipid Phase of Biological and Model Membranes
Pruchnik, Hanna; Oszmiański, Jan; Sarapuk, Janusz; Kleszczyńska, Halina
2010-01-01
The aim of the study was to determine changes induced by polyphenolic compounds from selected fruits in the lipid phase of the erythrocyte membrane, in liposomes formed of erythrocyte lipids and in phosphatidylcholine liposomes. In particular, the effect of extracts from apple, chokeberry, and strawberry on red blood cell morphology, on packing order in the lipid hydrophilic phase, on fluidity of the hydrophobic phase, as well as on the temperature of phase transition in DPPC liposomes was studied. In the erythrocyte population, the proportions of echinocytes increased due to incorporation of polyphenolic compounds. Fluorimetry with a laurdan probe indicated increased packing density in the hydrophilic phase of the membrane in the presence of polyphenolic extracts, the highest effect being observed for the apple extract. Using the fluorescence probes DPH and TMA-DPH, no effect was noted inside the hydrophobic phase of the membrane, as the lipid bilayer fluidity was not modified. The polyphenolic extracts slightly lowered the phase transition temperature of phosphatidylcholine liposomes. The studies have shown that the phenolic compounds contained in the extracts incorporate into the outer region of the erythrocyte membrane, affecting its shape and lipid packing order, which is reflected in the increasing number of echinocytes. The compounds also penetrate the outer part of the external lipid layer of liposomes formed of natural and DPPC lipids, changing its packing order. PMID:21423329
DARHT Multi-intelligence Seismic and Acoustic Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Garrison Nicole; Van Buren, Kendra Lu; Hemez, Francois M.
The purpose of this report is to document the analysis of seismic and acoustic data collected at the Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory for robust, multi-intelligence decision making. The data utilized herein is obtained from two tri-axial seismic sensors and three acoustic sensors, resulting in a total of nine data channels. The goal of this analysis is to develop a generalized, automated framework to determine internal operations at DARHT using informative features extracted from measurements collected external of the facility. Our framework involves four components: (1) feature extraction, (2) data fusion, (3) classification, and finally (4) robustness analysis. Two approaches are taken for extracting features from the data. The first of these, generic feature extraction, involves extraction of statistical features from the nine data channels. The second approach, event detection, identifies specific events relevant to traffic entering and leaving the facility as well as explosive activities at DARHT and nearby explosive testing sites. Event detection is completed using a two stage method, first utilizing signatures in the frequency domain to identify outliers and second extracting short duration events of interest among these outliers by evaluating residuals of an autoregressive exogenous time series model. Features extracted from each data set are then fused to perform analysis with a multi-intelligence paradigm, where information from multiple data sets are combined to generate more information than available through analysis of each independently. The fused feature set is used to train a statistical classifier and predict the state of operations to inform a decision maker. We demonstrate this classification using both generic statistical features and event detection and provide a comparison of the two methods. Finally, the concept of decision robustness is presented through a preliminary analysis where uncertainty is added to the system through noise in the measurements.
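A minimal sketch of the second event-detection stage described above, fitting an autoregressive model to one data channel and flagging samples with unusually large one-step residuals; the model order and threshold are assumptions, and the exogenous inputs of a full ARX model are omitted.

```python
# Sketch: AR-model residual outlier detection on a single data channel.
import numpy as np

def ar_residual_events(x, order=10, k=4.0):
    """Return sample indices where the AR(order) one-step prediction residual
    exceeds k standard deviations of the residuals."""
    x = np.asarray(x, dtype=float)
    # Lagged design matrix: row t holds [x[t-1], ..., x[t-order]].
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coeffs
    threshold = k * residuals.std()
    return order + np.flatnonzero(np.abs(residuals) > threshold)
```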
A neural joint model for entity and relation extraction from biomedical text.
Li, Fei; Zhang, Meishan; Fu, Guohong; Ji, Donghong
2017-03-31
Extracting biomedical entities and their relations from text has important applications in biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Considerable effort must be devoted to feature engineering when feature-based models are employed. Moreover, pipeline models may suffer from error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, which can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performance with less feature engineering work. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate research on biomedical text mining.
NASA Astrophysics Data System (ADS)
Feng, Ke; Wang, Kesheng; Ni, Qing; Zuo, Ming J.; Wei, Dongdong
2017-11-01
The planetary gearbox is a critical component of rotating machinery. It is widely used in wind turbines, aerospace and transmission systems in heavy industry. Thus, it is important to monitor planetary gearboxes during operation, especially for fault diagnostics. However, in practice, the operating conditions of a planetary gearbox are often characterized by variations in rotational speed and load, which may bring difficulties for fault diagnosis through the measured vibrations. In this paper, phase angle data extracted from measured planetary gearbox vibrations is used for fault detection under non-stationary operational conditions. Together with sample entropy, fault diagnosis of the planetary gearbox is implemented. The proposed scheme is explained and demonstrated in both simulation and experimental studies. The scheme proves to be effective and offers advantages for fault diagnosis of planetary gearboxes under non-stationary operational conditions.
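A minimal sketch of sample entropy, the complexity measure applied to the extracted phase angle data; the embedding dimension and tolerance follow common defaults rather than the paper's settings.

```python
# Sketch: sample entropy of a 1-D signal (e.g., an extracted phase-angle series).
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # common default tolerance
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```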
Mid-Infrared Spectroscopy of Carbon Stars in the Small Magellanic Cloud
2006-07-10
Before extracting spectra from the images, we used the imclean software package, with bad pixel values determined from neighboring pixels. Spectra were then extracted from the cleaned and differenced images, and a variety of spectral feature shapes were fit, including the MgS and SiC dust features; in addition to the dust features, molecular bands within the IRS wavelength range were also extracted from the spectra.
NASA Astrophysics Data System (ADS)
Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun
2012-10-01
Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.
Fast and Efficient Feature Engineering for Multi-Cohort Analysis of EHR Data.
Ozery-Flato, Michal; Yanover, Chen; Gottlieb, Assaf; Weissbrod, Omer; Parush Shear-Yashuv, Naama; Goldschmidt, Yaara
2017-01-01
We present a framework for feature engineering, tailored for longitudinal structured data, such as electronic health records (EHRs). To fast-track feature engineering and extraction, the framework combines general-use plug-in extractors, a multi-cohort management mechanism, and modular memoization. Using this framework, we rapidly extracted thousands of features from diverse and large healthcare data sources in multiple projects.
Feature generation using genetic programming with application to fault classification.
Guo, Hong; Jack, Lindsay B; Nandi, Asoke K
2005-02-01
One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features, or directly from raw data, in order to reduce the cost of computation during the classification process while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to automatically discover the different bearing conditions using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP-extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the computation time compared with a genetic algorithm (GA), making it a more practical solution.
An ensemble method for extracting adverse drug events from social media.
Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi
2016-06-01
Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can stay away from the feature sparsity issue, are qualified to address the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness. Copyright © 2016 Elsevier B.V. All rights reserved.
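A hedged sketch of two ingredients named above, information-gain-style feature selection (approximated here with mutual information) and stacked generalization, using scikit-learn; the synthetic feature matrix stands in for the lexical, syntactic and semantic features.

```python
# Sketch: mutual-information feature selection followed by a stacked ensemble.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 200))      # stand-in for high-dimensional text features
y = rng.integers(0, 2, size=300)     # ADE vs. non-ADE labels (toy)

# Keep the 50 features with the highest mutual information with the label.
X_sel = SelectKBest(mutual_info_classif, k=50).fit_transform(X, y)

# Stacked generalization: base learners feed a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
print(cross_val_score(stack, X_sel, y, cv=5, scoring="roc_auc").mean())
```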
Palmprint verification using Lagrangian decomposition and invariant interest points
NASA Astrophysics Data System (ADS)
Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.
2011-06-01
This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction from palmprint images, whereas the region of interest (ROI), which has been extracted from the wide palm texture at the preprocessing stage, is considered for invariant point extraction. Finally, identity is established by finding a permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features. The permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.
New nonlinear features for inspection, robotics, and face recognition
NASA Astrophysics Data System (ADS)
Casasent, David P.; Talukder, Ashit
1999-10-01
Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.
ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm
NASA Astrophysics Data System (ADS)
Kora, Padmavathi; Sri Rama Krishna, K.
2016-12-01
Atrial fibrillation (AF) is a type of heart abnormality; during AF, electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to the abnormalities in the heart. This paper consists of three major steps for the detection of heart diseases: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. The Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.
Rapid matching of stereo vision based on fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei
2016-09-01
As the most important core part of stereo vision, stereo matching technology still has many problems to solve. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system based on fringe projection techniques and, exploiting the fact that corresponding points extracted from the left and right camera images share the same phase, realizes rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can not only broaden the application fields of optical 3D measurement technology and enrich knowledge achievements in the field of optical 3D measurement, but also provide the potential for commercialized measurement systems in practical projects, which has very important scientific research significance and economic value.
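A minimal sketch of the matching idea described above: for every pixel in the left unwrapped phase map, the pixel in the same row of the right map with the closest phase is taken as its correspondence. Rectified images and unwrapped phase maps are assumed to be given.

```python
# Sketch: row-wise correspondence search between left and right phase maps.
import numpy as np

def match_by_phase(phase_left, phase_right):
    """Return, for every left pixel, the matching column index in the right image."""
    rows, cols = phase_left.shape
    matches = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        # |phi_left(r, i) - phi_right(r, j)| minimized over j, for every column i.
        diff = np.abs(phase_left[r][:, None] - phase_right[r][None, :])
        matches[r] = np.argmin(diff, axis=1)
    return matches
```

Together with the calibrated camera model, the resulting disparities (column index minus matched index) would yield the reconstructed 3D surface.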
Wiese, Holger; Schweinberger, Stefan R; Neumann, Markus F
2008-11-01
We used repetition priming to investigate implicit and explicit processes of unfamiliar face categorization. During prime and test phases, participants categorized unfamiliar faces according to either age or gender. Faces presented at test were either new or primed in a task-congruent (same task during priming and test) or incongruent (different tasks) condition. During age categorization, reaction times revealed significant priming for both priming conditions, and event-related potentials yielded an increased N170 over the left hemisphere as a result of priming. During gender categorization, congruent faces elicited priming and a latency decrease in the right N170. Accordingly, information about age is extracted irrespective of processing demands, and priming facilitates the extraction of feature information reflected in the left N170 effect. By contrast, priming of gender categorization may depend on whether the task at initial presentation requires configural processing.
Formation of siliceous sediments in brandy after diatomite filtration.
Gómez, J; Gil, M L A; de la Rosa-Fox, N; Alguacil, M
2015-03-01
Brandy is quite a stable spirit but sometimes light sediment appears. This sediment was separated and analysed by IR and SEM-EDX. It was revealed that the sediment is composed mostly of silica and residual organic matter. Silica was present as an amorphous phase and as microparticles. In an attempt to reproduce the formation of the sediment, a diatomite extract was prepared with an ethanol/water mixture (36% vol.) and a calcined diatomite similar to that used in brandy filtration. This extract was added to unfiltered brandy in different amounts. After 1 month, the Si concentration decreased in all samples and sediments with similar compositions and features to those found in the unstable brandy appeared. The amounts of sediment obtained were directly related to the decrease in Si concentration in solution. Consequently, it can be concluded that siliceous sediment in brandy originates from Si released during diatomite filtration. Copyright © 2014 Elsevier Ltd. All rights reserved.
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain computer interface (BCI) systems. However, the mVEP waveform is seriously masked by the strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve the mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which can effectively improve the BCI performance, with an accuracy increase of approximately 3.5% over all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting the mVEP feature to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP) and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
Tsai, Hung-Sheng; Tsai, Teh-Hua
2012-01-04
The extraction equilibrium of indium(III) from a nitric acid solution using di(2-ethylhexyl) phosphoric acid (D2EHPA) as an acidic extractant of organophosphorus compounds dissolved in kerosene was studied. By graphical and numerical analysis, the compositions of indium-D2EHPA complexes in organic phase and stoichiometry of the extraction reaction were examined. Nitric acid solutions with various indium concentrations at 25 °C were used to obtain the equilibrium constant of InR₃ in the organic phase. The experimental results showed that the extraction distribution ratios of indium(III) between the organic phase and the aqueous solution increased when either the pH value of the aqueous solution and/or the concentration of the organic phase extractant increased. Finally, the recovery efficiency of indium(III) in nitric acid was measured.
Tan, Zhi-Jian; Yang, Zi-Zhen; Yi, Yong-Jian; Wang, Hong-Ying; Zhou, Wan-Lai; Li, Fen-Fang; Wang, Chao-Yun
2016-08-01
In this study, enzyme-assisted three-phase partitioning (EATPP) was used to extract oil from flaxseed. The whole procedure is composed of two parts: the enzymolysis procedure, in which the flaxseed was hydrolyzed using an enzyme solution (influencing parameters such as the type and concentration of enzyme, temperature, and pH were optimized), and three-phase partitioning (TPP), which was conducted by adding salt and t-butanol to the crude flaxseed slurry, resulting in the extraction of flaxseed oil into the alcohol-rich upper phase. The concentration of t-butanol, the concentration of salt, and the temperature were optimized to maximize the extraction yield. Under optimized conditions of a 49.29 % t-butanol concentration, 30.43 % ammonium sulfate concentration, and 35 °C extraction temperature, a maximum extraction yield of 71.68 % was obtained. This simple and effective EATPP can be used to achieve high extraction yields and oil quality, and thus it has potential for large-scale oil production.
Yang, Yanqin; Chu, Guohai; Zhou, Guojun; Jiang, Jian; Yuan, Kailong; Pan, Yuanjiang; Song, Zhiyu; Li, Zuguang; Xia, Qian; Lu, Xinbo; Xiao, Weiqiang
2016-03-01
An ultrasound-microwave synergistic extraction coupled to headspace solid-phase microextraction was first employed to determine the volatile components in tobacco samples. The method combined the advantages of ultrasound, microwave, and headspace solid-phase microextraction. The extraction, separation, and enrichment were performed in a single step, which could greatly simplify the operation and reduce the whole pretreatment time. In the developed method, several experimental parameters, such as fiber type, ultrasound power, and irradiation time, were optimized to improve sampling efficiency. Under the optimal conditions, there were 37, 36, 34, and 36 components identified in tobacco from Guizhou, Hunan, Yunnan, and Zimbabwe, respectively, including esters, heterocycles, alkanes, ketones, terpenoids, acids, phenols, and alcohols. The compound types were roughly the same while the contents varied among the different origins due to the disparity of their growing conditions, such as soil, water, and climate. In addition, the ultrasound-microwave synergistic extraction coupled to headspace solid-phase microextraction method was compared with the microwave-assisted extraction coupled to headspace solid-phase microextraction and headspace solid-phase microextraction methods. More types of volatile components were obtained by using the ultrasound-microwave synergistic extraction coupled to headspace solid-phase microextraction method; moreover, the contents were high. The results indicated that the ultrasound-microwave synergistic extraction coupled to headspace solid-phase microextraction technique was a simple, time-saving and highly efficient approach, which was especially suitable for analysis of the volatile components in tobacco. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Pang, Guanghua; Feng, Jikun; Lin, Jun
2016-11-01
We imaged the crust structure beneath Jilin Province and Liaoning Province in China with fundamental mode Rayleigh waves recorded by 60 broadband stations deployed in the region. Surface-wave empirical Green's functions were retrieved from cross-correlations of inter-station data and phase velocity dispersions were measured using a frequency-time analysis method. Dispersion measurements were then utilized to construct 2D phase velocity maps for periods between 5 and 35 s. Subsequently, the phase-dispersion curves extracted from each cell of the 2D phase velocity maps were inverted to determine the 3D shear wave velocity structures of the crust. The phase velocity maps at different periods reflected the average velocity structures corresponding to different depth ranges. The maps in short periods, in particular, were in excellent agreement with known geological features of the surface. In addition to imaging shear wave velocity structures of the volcanoes, we show that obvious low-velocity anomalies imaged in the Changbaishan-Tianchi Volcano, the Longgang-Jinlongdingzi Volcano, and the system of the Dunmi Fault crossing the Jingbohu Volcano, all of which may be due to geothermal anomalies.
Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.
Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel
2017-08-18
Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of the sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper the performance of seven feature selection methods, as well as two feature rank aggregation methods, were compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was in the average level, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not outperform significantly the conventional feature ranking methods. Among conventional methods, some of them slightly performed better than others, although the choice of a suitable technique is dependent on the computational complexity and accuracy requirements of the user.
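A minimal sketch of Borda-count rank aggregation over several feature-ranking methods; the example rankings are illustrative only.

```python
# Sketch: Borda-count aggregation of several feature rankings.
import numpy as np

def borda_aggregate(rankings):
    """Each input ranking lists feature indices from best to worst; returns
    the aggregated ranking ordered by summed Borda scores (best first)."""
    rankings = np.asarray(rankings)          # shape: (n_methods, n_features)
    n_features = rankings.shape[1]
    scores = np.zeros(n_features)
    for ranking in rankings:
        # Best-ranked feature gets n_features - 1 points, worst gets 0.
        scores[ranking] += np.arange(n_features - 1, -1, -1)
    return np.argsort(-scores)

# Example: three methods ranking five features.
print(borda_aggregate([[0, 2, 1, 4, 3], [2, 0, 1, 3, 4], [0, 1, 2, 4, 3]]))
```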
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bu, Wei; Yu, Hao; Luo, Guangming
2014-09-11
Selective extraction of metal ions from a complex aqueous mixture into an organic phase is used to separate toxic or radioactive metals from polluted environments and nuclear waste, as well as to produce industrially relevant metals, such as rare earth ions. Selectivity arises from the choice of an extractant amphiphile, dissolved in the organic phase, which interacts preferentially with the target metal ion. The extractant-mediated process of ion transport from an aqueous to an organic phase takes place at the aqueous–organic interface; nevertheless, little is known about the molecular mechanism of this process despite its importance. Although state-of-the-art X-ray scattering is uniquely capable of probing molecular ordering at a liquid–liquid interface with subnanometer spatial resolution, utilizing this capability to investigate interfacial dynamical processes of short temporal duration remains a challenge. We show that a temperature-driven adsorption transition can be used to turn the extraction on and off by controlling adsorption and desorption of extractants at the oil–water interface. Lowering the temperature through this transition immobilizes a supramolecular ion–extractant complex at the interface during the extraction of rare earth erbium ions. Under the conditions of these experiments, the ion–extractant complexes condense into a two-dimensional inverted bilayer, which is characterized on the molecular scale with synchrotron X-ray reflectivity and fluorescence measurements. Raising the temperature above the transition leads to Er ion extraction as a result of desorption of ion–extractant complexes from the interface into the bulk organic phase. XAFS measurements of the ion–extractant complexes in the bulk organic phase demonstrate that they are similar to the interfacial complexes.
Zhang, Zhen; Liu, Fang; He, Caian; Yu, Yueli; Wang, Min
2017-12-01
Application of an aqueous two-phase system (ATPS) coupled with ultrasonic technology for the extraction of phloridzin from Malus micromalus Makino was evaluated and optimized by response surface methodology (RSM). The ethanol/ammonium sulfate ATPS was selected for detailed investigation, including the phase diagram, the effect of phase composition and extraction conditions on the partition of phloridzin, and the recycling of ammonium sulfate. In addition, the evaluation of extraction efficiency and the identification of phloridzin were investigated. The optimal partition coefficient (6.55) and recovery (92.86%) of phloridzin were obtained in a system composed of 35% ethanol (w/w) and 16% (NH4)2SO4 (w/w), a 51:1 liquid-to-solid ratio, and an extraction temperature of 36 °C. Compared with traditional solvent extraction using 35% and 80% ethanol, respectively, the ultrasonic-assisted aqueous two-phase extraction (UAATPE) strategy had significant advantages, with lower ethanol consumption, fewer sugar and protein impurities, and higher extraction efficiency of phloridzin. Our results indicated that UAATPE is a valuable method for the extraction and preliminary purification of phloridzin from the fruit of Malus micromalus Makino, which has great potential in the Malus micromalus Makino deep-processing industry to increase the added value of these fruits and drive local economic development. © 2017 Institute of Food Technologists®.
Separation of chemical groups from bio-oil aqueous phase via sequential organic solvent extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shoujie; Ye, Philip; Borole, Abhijeet P
Bio-oil aqueous phase contains a considerable amount of furans, alcohols, ketones, aldehydes and phenolics besides the major components of organic acids and anhydrosugars. The complexity of bio-oil aqueous phase limits its efficient utilization. To improve the efficiency of bio-oil biorefinery, this study focused on the separation of chemical groups from bio-oil aqueous phase via sequential organic solvent extractions. Due to their high recoverability and low solubility in water, four solvents (hexane, petroleum ether, chloroform, and ethyl acetate) with different polarities were evaluated, and the optimum process conditions for chemical extraction were determined. Chloroform had high extraction efficiency for furans, phenolics, and ketones. In addition to these chemical groups, ethyl acetate had high extraction efficiency for organic acids. The sequential extraction by using chloroform followed by ethyl acetate rendered that 62.2 wt.% of original furans, ketones, alcohols, and phenolics were extracted to chloroform, over 62 wt.% acetic acid was extracted to ethyl acetate, resulting in a high concentration of levoglucosan (~53.0 wt.%) in the final aqueous phase. Chemicals separated via the sequential extraction could be used as feedstocks in biorefinery using processes such as catalytic upgrading of furans and phenolics to hydrocarbons, fermentation of levoglucosan to produce alcohols and diols, and hydrogen production from organic acids via microbial electrolysis.
Separation of chemical groups from bio-oil aqueous phase via sequential organic solvent extraction
Ren, Shoujie; Ye, Philip; Borole, Abhijeet P
2017-01-05
Bio-oil aqueous phase contains a considerable amount of furans, alcohols, ketones, aldehydes and phenolics besides the major components of organic acids and anhydrosugars. The complexity of bio-oil aqueous phase limits its efficient utilization. To improve the efficiency of bio-oil biorefinery, this study focused on the separation of chemical groups from bio-oil aqueous phase via sequential organic solvent extractions. Due to their high recoverability and low solubility in water, four solvents (hexane, petroleum ether, chloroform, and ethyl acetate) with different polarities were evaluated, and the optimum process conditions for chemical extraction were determined. Chloroform had high extraction efficiency for furans, phenolics, and ketones. In addition to these chemical groups, ethyl acetate had high extraction efficiency for organic acids. The sequential extraction by using chloroform followed by ethyl acetate rendered that 62.2 wt.% of original furans, ketones, alcohols, and phenolics were extracted to chloroform, over 62 wt.% acetic acid was extracted to ethyl acetate, resulting in a high concentration of levoglucosan (~53.0 wt.%) in the final aqueous phase. Chemicals separated via the sequential extraction could be used as feedstocks in biorefinery using processes such as catalytic upgrading of furans and phenolics to hydrocarbons, fermentation of levoglucosan to produce alcohols and diols, and hydrogen production from organic acids via microbial electrolysis.
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
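A hedged sketch of the filtering-plus-local-pattern idea described above, combining a Gaussian high-pass filter with binarization and a local binary pattern via scikit-image; the filter scale and LBP parameters are assumptions, and the local derivative pattern is omitted.

```python
# Sketch: Gaussian high-pass filtering, binarization, and an LBP histogram.
import numpy as np
from scipy import ndimage
from skimage.feature import local_binary_pattern

def finger_pattern_features(image, sigma=3.0, radius=1, n_points=8):
    img = np.asarray(image, dtype=float)
    high_pass = img - ndimage.gaussian_filter(img, sigma=sigma)  # Gaussian high-pass
    binary = (high_pass > 0).astype(np.uint8)                    # binarization
    # Rescale the high-pass response to 8-bit before computing the LBP.
    hp = high_pass - high_pass.min()
    hp_u8 = (255 * hp / max(hp.max(), 1e-9)).astype(np.uint8)
    lbp = local_binary_pattern(hp_u8, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2))
    return binary, hist / hist.sum()
```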
Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images
Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah
2013-01-01
In the current iris recognition systems, noise removing step is only used to detect noisy parts of the iris region and features extracted from there will be excluded in matching step. Whereas depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in the previous works. This paper investigates the effect of shape adaptive wavelet transform and shape adaptive Gabor-wavelet for feature extraction on the iris recognition performance. In addition, an effective noise-removing approach is proposed in this paper. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds by a procedure called statistical decision making. The eyelids are segmented by parabolic Hough transform in normalized iris image to decrease computational burden through omitting rotation term. The iris is localized by an accurate and fast algorithm based on coarse-to-fine strategy. The principle of mask code generation is to assign the noisy bits in an iris code in order to exclude them in matching step is presented in details. An experimental result shows that by using the shape adaptive Gabor-wavelet technique there is an improvement on the accuracy of recognition rate. PMID:24696801
Shape adaptive, robust iris feature extraction from noisy iris images.
Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah
2013-10-01
In the current iris recognition systems, noise removing step is only used to detect noisy parts of the iris region and features extracted from there will be excluded in matching step. Whereas depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in the previous works. This paper investigates the effect of shape adaptive wavelet transform and shape adaptive Gabor-wavelet for feature extraction on the iris recognition performance. In addition, an effective noise-removing approach is proposed in this paper. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds by a procedure called statistical decision making. The eyelids are segmented by parabolic Hough transform in normalized iris image to decrease computational burden through omitting rotation term. The iris is localized by an accurate and fast algorithm based on coarse-to-fine strategy. The principle of mask code generation is to assign the noisy bits in an iris code in order to exclude them in matching step is presented in details. An experimental result shows that by using the shape adaptive Gabor-wavelet technique there is an improvement on the accuracy of recognition rate.
A low-cost video-oculography system for vestibular function testing.
Jihwan Park; Youngsun Kong; Yunyoung Nam
2017-07-01
In order to keep the visual scene in focus during head movements, the vestibulo-ocular reflex causes the eyes to move in the opposite direction to the head movement. Disorders of the vestibular system impair vision, causing abnormal nystagmus and dizziness. To diagnose abnormal nystagmus, various approaches have been reported, including rotating chair tests and videonystagmography. However, these tests are unsuitable for home use due to their high costs; thus, a low-cost video-oculography system is necessary to obtain clinical features at home. In this paper, we present a low-cost video-oculography system using an infrared camera and a Raspberry Pi board for tracking the pupils and evaluating the vestibular system. Horizontal eye movement is derived from video data obtained from an infrared camera and infrared light-emitting diodes, and the velocity of head rotation is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotatory chair tests were conducted with our developed device. To evaluate our system, gain, asymmetry, and phase were measured and compared with System 2000. The average IQR errors of gain, phase and asymmetry were 0.81, 2.74 and 17.35, respectively. We showed that our system is able to measure clinical features.
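A minimal sketch of the pupil-segmentation step (thresholding, morphology, contour detection) using OpenCV, assuming an infrared eye image in which the pupil is the darkest region; the threshold and kernel size are assumptions.

```python
# Sketch: locate the pupil center in a grayscale infrared eye frame.
import cv2
import numpy as np

def pupil_center(gray_eye_frame, thresh=40):
    _, binary = cv2.threshold(gray_eye_frame, thresh, 255, cv2.THRESH_BINARY_INV)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Morphological opening/closing to remove reflections and fill small gaps.
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)   # largest dark blob = pupil
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) pupil center
```

Tracking this center over frames gives the horizontal eye-movement trace from which gain, asymmetry and phase can then be computed against the gyroscope signal.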
Lin, En-Chiang; Cole, Jesse J; Jacobs, Heiko O
2010-11-10
This article reports and applies a recently discovered programmable multimaterial deposition process to the formation and combinatorial improvement of 3D nanostructured devices. The gas-phase deposition process produces charged <5 nm particles of silver, tungsten, and platinum and uses externally biased electrodes to control the material flux and to turn deposition ON/OFF in selected domains. Domains host nanostructured dielectrics to define arrays of electrodynamic 10 × nanolenses to further control the flux to form <100 nm resolution deposits. The unique feature of the process is that material type, amount, and sequence can be altered from one domain to the next leading to different types of nanostructures including multimaterial bridges, interconnects, or nanowire arrays with 20 nm positional accuracy. These features enable combinatorial nanostructured materials and device discovery. As a first demonstration, we produce and identify in a combinatorial way 3D nanostructured electrode designs that improve light scattering, absorption, and minority carrier extraction of bulk heterojunction photovoltaic cells. Photovoltaic cells from domains with long and dense nanowire arrays improve the relative power conversion efficiency by 47% when compared to flat domains on the same substrate.
Karraker, D.G.
1959-07-14
A liquid-liquid extraction process is presented for the recovery of polonium from lead and bismuth. According to the invention an acidic aqueous chloride phase containing the polonium, lead, and bismuth values is contacted with a tributyl phosphate ether phase. The polonium preferentially enters the organic phase which is then separated and washed with an aqueous hydrochloric solution to remove any lead or bismuth which may also have been extracted. The now highly purified polonium in the organic phase may be transferred to an aqueous solution by extraction with aqueous nitric acid.
Miralles, Pablo; Chisvert, Alberto; Salvador, Amparo
2015-01-01
An analytical method for the simultaneous determination of hydroxytyrosol and tyrosol in different types of olive extract raw materials and cosmetic cream samples has been developed. The determination was performed by liquid chromatography with UV spectrophotometric detection. Different chromatographic parameters, such as mobile phase pH and composition, oven temperature, and several sample preparation variables were studied. The best chromatographic separation was obtained under the following conditions: a C18 column set at 35°C and isocratic elution with a mixture of ethanol and 1% acetic acid solution at pH 5 (5:95, v/v) as the mobile phase pumped at 1 mL min(-1). The detection wavelength was set at 280 nm and the total run time required for the chromatographic analysis was 10 min, except for cosmetic cream samples, for which a 20 min run time was required (including a cleaning step). The method was satisfactorily applied to 23 samples, including solid, water-soluble and fat-soluble olive extracts and cosmetic cream samples containing hydroxytyrosol and tyrosol. Good recoveries (95-107%) and repeatability (1.1-3.6%) were obtained, as well as limits of detection below the μg mL(-1) level. These good analytical features, together with its environmentally friendly characteristics, make the presented method suitable both for controlling the whole manufacturing process of raw materials containing the target analytes and for the quality control of the finished cosmetic products. Copyright © 2014 Elsevier B.V. All rights reserved.
Degradation of Triphenyltin by a Fluorescent Pseudomonad
Inoue, Hiroyuki; Takimura, Osamu; Fuse, Hiroyuki; Murakami, Katsuji; Kamimura, Kazuo; Yamaoka, Yukiho
2000-01-01
Triphenyltin (TPT)-degrading bacteria were screened by a simple technique using post-column high-performance liquid chromatography with 3,3′,4′,7-tetrahydroxyflavone as a post-column reagent for determination of TPT and its metabolite, diphenyltin (DPT). An isolated strain, strain CNR15, was identified as Pseudomonas chlororaphis on the basis of its morphological and biochemical features. The incubation of strain CNR15 in a medium containing glycerol, succinate, and 130 μM TPT resulted in the rapid degradation of TPT and the accumulation of approximately 40 μM DPT as the only metabolite after 48 h. The culture supernatants of strain CNR15, grown with or without TPT, exhibited TPT degradation activity, whereas the resting cells were not capable of degrading TPT. TPT was stoichiometrically degraded to DPT by the solid-phase extract of the culture supernatant, and benzene was detected as another degradation product. We found that the TPT degradation was catalyzed by low-molecular-mass substances (approximately 1,000 Da) in the extract, termed the TPT-degrading factor. The other fluorescent pseudomonads, P. chlororaphis ATCC 9446, Pseudomonas fluorescens ATCC 13525, and Pseudomonas aeruginosa ATCC 15692, also showed TPT degradation activity similar to that of strain CNR15 in the solid-phase extracts of their culture supernatants. These results suggest that this extracellular low-molecular-mass substance, which is universally produced by fluorescent pseudomonads, could function as a potent catalyst to cometabolize TPT in the environment. PMID:10919812
Liu, Jun-Guo; Xing, Jian-Min; Chang, Tian-Shi; Liu, Hui-Zhou
2006-03-01
Nattokinase is a novel fibrinolytic enzyme that is considered to be a promising agent for thrombosis therapy. In this study, reverse micelle extraction was applied to purify and concentrate nattokinase from fermentation broth. The effects of temperature and phase volume ratio on the forward and backward extraction steps were examined. The optimal temperatures for forward and backward extraction were 25 degrees C and 35 degrees C, respectively. Nattokinase became more thermosensitive during reverse micelle extraction, and it could be enriched eightfold in the stripping phase during backward extraction. It was found that nattokinase could be purified by AOT reverse micelles with up to 80% activity recovery and a purification factor of 3.9.
DOT National Transportation Integrated Search
2011-06-01
This report describes an accuracy assessment of extracted features derived from three subsets of a Quickbird pan-sharpened high-resolution satellite image for the area of the Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...
Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm.
Khushaba, Rami N; Kodagoda, Sarath; Lal, Sara; Dissanayake, Gamini
2011-01-01
Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulated driving test. Specifically, we develop an efficient fuzzy mutual-information (MI)-based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of several predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships, providing an accurate information-content estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers in a simulation test. The experimental results proved the significance of FMIWPT in extracting features that correlate highly with the different drowsiness levels, achieving a classification accuracy of 95%-97% on average across all subjects.
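As a rough illustration of the pipeline (not the authors' FMIWPT implementation), the sketch below computes wavelet-packet log-energy features for each epoch and ranks them by ordinary, non-fuzzy mutual information with the drowsiness labels; the wavelet, decomposition level, and feature definition are assumptions.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif

def wpt_log_energy(signal, wavelet="db4", level=4):
    """Log-energy of the terminal wavelet-packet nodes for one signal epoch."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.log(np.sum(n.data ** 2) + 1e-12) for n in nodes])

def rank_features(epochs, labels, **kwargs):
    """Rank wavelet-packet features by mutual information with the labels.

    epochs: array-like of shape (n_epochs, n_samples); labels: class labels.
    """
    X = np.vstack([wpt_log_energy(e, **kwargs) for e in epochs])
    mi = mutual_info_classif(X, labels)
    return np.argsort(mi)[::-1], X
```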
NASA Astrophysics Data System (ADS)
Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang
2014-11-01
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in agreement with human opinions, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant, but not both; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general-purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract point-wise statistics of single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping is learned to predict quality scores using a support vector regression. The experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
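A toy version of the two feature groups is sketched below: a generalized Gaussian fit to mean-subtracted pixel values for the spatial domain, and neighbouring gradient-magnitude similarity statistics for the gradient domain. The exact normalization, neighbourhoods, and pooling used in the paper are not reproduced; the resulting vector could then be fed to a support vector regressor such as sklearn.svm.SVR.

```python
import numpy as np
from scipy.stats import gennorm
from scipy.ndimage import sobel

def spatial_and_gradient_features(img):
    """Toy NR-IQA feature vector (illustrative, not the paper's definition)."""
    x = img.astype(float)
    x = (x - x.mean()) / (x.std() + 1e-8)
    beta, loc, scale = gennorm.fit(x.ravel())          # generalized Gaussian fit

    gm = np.hypot(sobel(img.astype(float), 0), sobel(img.astype(float), 1))
    # similarity between each pixel's gradient magnitude and its right neighbour
    sim = (2 * gm[:, :-1] * gm[:, 1:] + 1e-8) / (gm[:, :-1] ** 2 + gm[:, 1:] ** 2 + 1e-8)
    return np.array([beta, scale, sim.mean(), sim.std()])
```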
Generalized Feature Extraction for Wrist Pulse Analysis: From 1-D Time Series to 2-D Matrix.
Dimin Wang; Zhang, David; Guangming Lu
2017-07-01
Traditional Chinese pulse diagnosis, known as an empirical science, depends on subjective experience, and inconsistent diagnostic results may be obtained among different practitioners. A scientific way of studying the pulse is to analyze objectified wrist pulse waveforms. In recent years, many pulse acquisition platforms have been developed with advances in sensor and computer technology, and pulse diagnosis using pattern recognition techniques is attracting increasing attention. Although much literature on pulse feature extraction has been published, existing methods handle the pulse signals as simple 1-D time series and ignore the information within each class. This paper presents a generalized method of pulse feature extraction, extending the feature dimension from a 1-D time series to a 2-D matrix. The conventional wrist pulse features correspond to a particular case of the generalized models. The proposed method is validated through pattern classification on actual pulse records. Both quantitative and qualitative results relative to the 1-D pulse features are given through diabetes diagnosis. The experimental results show that the generalized 2-D matrix feature is effective in extracting both the periodic and nonperiodic information, and it is practical for wrist pulse analysis.
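The core idea of moving from a 1-D series to a 2-D matrix is to stack individual pulse cycles as rows, so that both the average cycle shape and the cycle-to-cycle variation become available to a classifier. A minimal sketch is given below; the beat-onset detection and any alignment or normalization steps are assumed to happen upstream.

```python
import numpy as np

def pulse_to_matrix(signal, onsets, length=None):
    """Stack single pulse cycles into a 2-D matrix (one row per cycle).

    signal: 1-D wrist pulse waveform; onsets: sample indices of cycle starts
    (assumed to come from an upstream beat-detection step). Cycles are
    truncated or zero-padded to a common length so that row-wise statistics
    can capture beat-to-beat variation as well as the mean shape.
    """
    cycles = [signal[a:b] for a, b in zip(onsets[:-1], onsets[1:])]
    length = length or min(len(c) for c in cycles)
    rows = [np.pad(c[:length], (0, max(0, length - len(c)))) for c in cycles]
    return np.vstack(rows)   # shape: (n_cycles, length)
```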
He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai
2017-07-01
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate a large labeled training corpus with minimal manual effort. However, the labeled training corpus may contain many false-positive instances, which hurts the performance of relation extraction. Moreover, in traditional feature-based distantly supervised approaches, extraction models rely on human-designed features produced with natural language processing tools, which can also degrade performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve better data representation for relation extraction without manually designed features, and it utilizes instance-level attention to tackle the problem of false-positive data under distant supervision. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
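Both attention mechanisms reduce to a weighted pooling of vectors: word-level attention pools LSTM hidden states within a sentence, and instance-level attention pools sentence vectors within an entity-pair bag. The NumPy sketch below shows that shared pattern; the query vectors and scoring function are placeholders rather than the paper's trained parameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool(hidden, query):
    """Dot-product attention pooling, reused at both levels (illustrative).

    hidden: array of shape (n, d) -- LSTM states of one sentence (word level)
    or sentence vectors of one entity-pair bag (instance level).
    query: vector of length d standing in for the learned attention query.
    """
    scores = hidden @ query                  # (n, d) @ (d,) -> (n,)
    weights = softmax(scores)                # down-weights noisy words/instances
    return weights @ hidden                  # weighted-sum representation
```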
Gan, Haijiao; Xu, Hui
2018-05-30
In this work, an innovative magnetic aptamer adsorbent (Fe3O4-aptamer MNPs) was synthesized for the selective extraction of 8-hydroxy-2'-deoxyguanosine (8-OHdG). Amino-functionalized Fe3O4 was crosslinked with the 8-OHdG aptamer by glutaraldehyde and packed into a stainless steel tube as the sorbent for magnetic solid phase extraction (MSPE). After selective extraction by the aptamer adsorbent, the adsorbed 8-OHdG was desorbed dynamically and analyzed online by high performance liquid chromatography-mass spectrometry (HPLC-MS). The synthesized sorbent presented outstanding features, including specific selectivity, high enrichment capacity, stability, and biocompatibility. Moreover, the proposed MSPE-HPLC-MS method integrates the adsorption and desorption operations, greatly simplifying the analysis process and reducing human error. Compared with offline MSPE, a sensitivity enhancement of 800 times was obtained for the online method. Experimental parameters such as the amount of sorbent, sample flow rate, and sample volume were optimized systematically. Under the optimal conditions, a low limit of detection (0.01 ng mL(-1), S/N = 3), a limit of quantification (0.03 ng mL(-1), S/N = 10), and a wide linear range with a satisfactory correlation coefficient (R(2) ≥ 0.9992) were obtained. The recoveries of 8-OHdG in the urine samples varied from 82% to 116%. All these results reveal that the method is simple, rapid, selective, sensitive, and automated, and it could become a potential approach for the selective determination of trace 8-OHdG in complex urinary samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang
2011-01-01
This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
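A compressed sketch of the blob extraction stage is shown below, with a single Otsu threshold standing in for the automatic multilevel histogram thresholding and cv2.connectedComponentsWithStats performing the connected-component analysis; the minimum-area filter is an assumed noise guard, not a parameter from the paper.

```python
import cv2

def touch_blobs(ir_frame, min_area=30):
    """Extract candidate touch blob centroids from one 8-bit IR camera frame.

    A single Otsu threshold approximates the multilevel histogram thresholding;
    connected-component analysis then yields one centroid per blob, which a
    tracker can associate across frames to recognize touch events.
    """
    _, bright = cv2.threshold(ir_frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)
    blobs = [tuple(centroids[i]) for i in range(1, n)        # skip background label 0
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return blobs
```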
Nonredundant sparse feature extraction using autoencoders with receptive fields clustering.
Ayinde, Babajide O; Zurada, Jacek M
2017-09-01
This paper proposes new techniques for data representation in the context of deep learning using agglomerative clustering. Existing autoencoder-based data representation techniques tend to produce encoding and decoding receptive fields in layered autoencoders that are duplicative, thereby leading to extraction of similar features and resulting in filtering redundancy. We propose a way to address this problem and show that such redundancy can be eliminated. This yields smaller networks and produces unique receptive fields that extract distinct features. It is also shown that autoencoders with nonnegativity constraints on weights are capable of extracting fewer redundant features than conventional sparse autoencoders. The concept is illustrated using a conventional sparse autoencoder and nonnegativity-constrained autoencoders on MNIST digit recognition, the NORB normalized-uniform object dataset, and the Yale face dataset. Copyright © 2017 Elsevier Ltd. All rights reserved.
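A simplified stand-in for the receptive-field clustering idea is sketched below: encoder weight rows are L2-normalized and grouped by agglomerative clustering, and one representative filter is kept per group. The number of clusters and the use of plain Euclidean linkage on the normalized rows are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def prune_duplicate_filters(W, n_clusters):
    """Group similar encoder receptive fields and keep one per group.

    W: weight matrix of shape (n_hidden, n_inputs), one receptive field per row.
    Rows are L2-normalized so Euclidean distances track cosine dissimilarity.
    Returns the indices of the retained, mutually dissimilar filters.
    """
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(Wn)
    keep = [np.where(labels == c)[0][0] for c in range(n_clusters)]
    return np.array(keep)
```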
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method that combines Hu invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as time-consuming feature point extraction, redundant invalid information, and inefficiency. First, the neighborhood of each pixel is used to extract contour information, and the Hu invariant moments serve as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and achieve seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
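Replacing the Euclidean distance with the Hellinger kernel can be implemented as a simple descriptor transform (often called RootSIFT): after the mapping below, ordinary dot products and nearest-neighbour matching act as Hellinger-kernel comparisons. This is a generic sketch, not the paper's exact matching pipeline.

```python
import numpy as np

def hellinger_descriptors(desc):
    """Map SIFT descriptors so Euclidean dot products equal the Hellinger kernel.

    desc: array of shape (n_keypoints, 128). L1-normalize each descriptor,
    then take the element-wise square root; downstream nearest-neighbour
    matching can stay unchanged while benefiting from the Hellinger similarity.
    """
    d = desc.astype(np.float64)
    d /= (np.abs(d).sum(axis=1, keepdims=True) + 1e-12)
    return np.sqrt(d)
```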
Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum
NASA Astrophysics Data System (ADS)
Guan, Shan; Song, Weijie; Pang, Hongyang
2017-09-01
In the metal cutting process, the monitored signal contains a wealth of information about the tool wear state. A tool wear signal analysis and feature extraction method based on the Hilbert marginal spectrum is proposed. First, the tool wear signal is decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions carrying the main information are screened out using the correlation coefficient and the variance contribution rate. Second, the Hilbert transform is applied to the main intrinsic mode functions to obtain the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes are extracted from the Hilbert marginal spectrum and assembled into a recognition feature vector for the tool wear state. The results show that the extracted features effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
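Given the intrinsic mode functions from an upstream EMD step, the Hilbert marginal spectrum can be approximated by accumulating instantaneous amplitude over instantaneous frequency, as in the sketch below; the binning resolution and the simple finite-difference frequency estimate are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_marginal_spectrum(imfs, fs, n_bins=128):
    """Accumulate instantaneous amplitude over instantaneous frequency.

    imfs: array of shape (n_imfs, n_samples) from an upstream EMD step;
    fs: sampling frequency in Hz. Returns (bin centres, marginal amplitudes).
    """
    edges = np.linspace(0, fs / 2, n_bins + 1)
    marginal = np.zeros(n_bins)
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)[:-1]
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)   # finite-difference estimate
        idx = np.clip(np.digitize(inst_freq, edges) - 1, 0, n_bins - 1)
        np.add.at(marginal, idx, amp)
    centres = (edges[:-1] + edges[1:]) / 2
    return centres, marginal
```

Amplitude-domain indexes such as the mean, peak, and root-mean-square of the marginal spectrum could then form the recognition feature vector.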
Combined rule extraction and feature elimination in supervised classification.
Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E
2012-09-01
There are a vast number of biology-related research problems that combine multiple sources of data to achieve a better understanding of the underlying phenomena. It is important to select and interpret the most important information from these sources. Thus it is beneficial to have a good algorithm that simultaneously extracts rules and selects features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.
Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko
2017-12-28
Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP depends to a great extent on the selection of the frequency bands. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all the available channels to effectively select the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band is introduced that covers the wide frequency band (7-30 Hz), and two different types of features are extracted using the CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands, and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using a support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I, and BCI Competition IV dataset IIb, and it outperformed all other competing methods, achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. By introducing a wide sub-band and using mutual information to select the most discriminative sub-bands, the proposed method improves motor imagery EEG signal classification.
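The CSP stage itself can be written compactly as a generalized eigendecomposition of the two class-average covariance matrices. The sketch below is a generic, textbook-style CSP implementation applied per sub-band before the mutual-information-based band selection; it is not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial pattern filters for two motor-imagery classes (sketch).

    trials_*: arrays of shape (n_trials, n_channels, n_samples), assumed to be
    band-pass filtered to one of the overlapping sub-bands beforehand.
    Returns 2*n_pairs spatial filters (columns) taken from both ends of the
    generalized eigenvalue spectrum.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)  # channel covariance

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)            # generalized eigenproblem
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks]
```

Log-variances of the spatially filtered trials would then serve as the features passed to linear discriminant analysis.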
Study on identifying deciduous forest by the method of feature space transformation
NASA Astrophysics Data System (ADS)
Zhang, Xuexia; Wu, Pengfei
2009-10-01
Extraction of thematic information from remotely sensed imagery remains one of the difficult problems facing remote sensing science, and many researchers have devoted considerable effort to this area. Thematic information extraction methods fall into two categories, visual interpretation and computer interpretation, and the trend of development is toward intelligent and modular approaches. This paper develops an intelligent extraction method based on feature space transformation for deciduous forest thematic information in the Changping district of Beijing. China-Brazil Earth Resources Satellite images acquired in 2005 are used to extract the deciduous forest coverage area by the feature space transformation method and a linear spectral unmixing method, and the remote sensing result agrees well with the woodland resource census data published by the Chinese forestry bureau in 2004.
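The linear spectral unmixing step can be illustrated with a per-pixel non-negative least-squares fit against a small set of endmember spectra, as sketched below; the endmember selection and the satellite band set are assumptions outside this snippet.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers):
    """Non-negative linear spectral unmixing for one pixel (sketch).

    pixel: reflectance vector of length n_bands; endmembers: matrix of shape
    (n_bands, n_classes) whose columns are pure-class spectra (e.g. deciduous
    forest, other vegetation, built-up). Returns abundance fractions that are
    normalized to sum to approximately one.
    """
    abund, _ = nnls(endmembers, pixel)
    return abund / (abund.sum() + 1e-12)
```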
Ohyama, Kunio; Akaike, Takenori; Hirobe, Chieko; Yamakawa, Toshio
2003-01-01
A crude ethanol extract (Vitex extract) was prepared from dried ripened Vitex agnus-castus fruits grown in Israel. Cytotoxicity of the extract against human uterine cervical canal fibroblast (HCF), human embryo fibroblast (HE-21), breast carcinoma (MCF-7), cervical carcinoma (SKG-3a), ovarian carcinoma (SKOV-3), gastric signet ring carcinoma (KATO-III), colon carcinoma (COLO 201), and small cell lung carcinoma (Lu-134-A-H) cells was examined. After culture for 24 h (logarithmic growth phase) or 72 h (stationary growth phase), the cells were treated with various concentrations of Vitex extract. In both growth phases, the higher the growth activity of the cells, the greater the cytotoxic activity of the Vitex extract. The cytotoxic activity against stationary growth-phase cells was less than that against logarithmic growth-phase cells. DNA fragmentation of Vitex extract-treated cells was seen in SKOV-3, KATO-III, COLO 201, and Lu-134-A-H cells. The DNA fragmentation in Vitex extract-treated KATO-III cells was inhibited by the presence of the antioxidative reagent pyrrolidine dithiocarbamate or N-acetyl-L-cysteine (NAC). Western blotting analysis showed that in Vitex extract-treated KATO-III cells, the presence of NAC also inhibited the expression of heme oxygenase-1 and the active forms of caspases-3, -8 and -9. It is concluded that the cytotoxic activity of Vitex extract may be attributed to its effect on cell growth, that cell death occurs through apoptosis, and that this apoptotic cell death may be attributed to increased intracellular oxidation caused by Vitex extract treatment.
Extraction of steroidal glucosiduronic acids from aqueous solutions by anionic liquid ion-exchangers
Mattox, Vernon R.; Litwiller, Robert D.; Goodrich, June E.
1972-01-01
A pilot study on the extraction of three steroidal glucosiduronic acids from water into organic solutions of liquid ion-exchangers is reported. A single extraction of a 0.5 mM aqueous solution of either 11-deoxycorticosterone 21-glucosiduronic acid or cortisone 21-glucosiduronic acid with 0.1 M tetraheptylammonium chloride in chloroform took more than 99% of the conjugate into the organic phase; under the same conditions, the very polar conjugate, β-cortol 3-glucosiduronic acid, was extracted to the extent of 43%. The presence of a small amount of chloride, acetate, or sulphate ion in the aqueous phase inhibited extraction, but making the aqueous phase 4.0 M with ammonium sulphate promoted extraction strongly. An increase in the concentration of ion-exchanger in the organic phase also promoted extraction. The amount of cortisone 21-glucosiduronic acid extracted by tetraheptylammonium chloride over the pH range of 3.9 to 10.7 was essentially constant. Chloroform solutions of a tertiary, a secondary, or a primary amine hydrochloride also will extract cortisone 21-glucosiduronic acid from water. The various liquid ion-exchangers will extract steroidal glucosiduronic acid methyl esters from water into chloroform, although less completely than the corresponding free acids. The extraction of the glucosiduronic acids from water by tetraheptylammonium chloride occurs by an ion-exchange process; extraction of the esters does not involve ion exchange. PMID:5075264
NASA Astrophysics Data System (ADS)
Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben
2015-08-01
Multisensory data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding process. This paper mainly focuses on the automatic detection of typical welding defect for Al alloy in gas tungsten arc welding (GTAW) by means of analzing arc spectrum, sound and voltage signal. Based on the developed algorithms in time and frequency domain, 41 feature parameters were successively extracted from these signals to characterize the welding process and seam quality. Then, the proposed feature selection approach, i.e., hybrid fisher-based filter and wrapper was successfully utilized to evaluate the sensitivity of each feature and reduce the feature dimensions. Finally, the optimal feature subset with 19 features was selected to obtain the highest accuracy, i.e., 94.72% using established classification model. This study provides a guideline for feature extraction, selection and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect detection system in arc welding.