Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-01-01
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596
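As a rough illustration of the wavelet-threshold pre-processing step described above, here is a minimal Python sketch; it uses Donoho's universal soft threshold as a stand-in for the paper's improved threshold rule, which the abstract does not specify.

```python
# Minimal wavelet soft-thresholding denoise sketch (assumed: db4 wavelet,
# 4 decomposition levels, universal threshold -- not the authors' exact rule).
import numpy as np
import pywt

def wavelet_denoise(ecg, wavelet="db4", level=4):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    # Noise estimate from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(ecg)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(ecg)]
```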
A standardised protocol for texture feature analysis of endoscopic images in gynaecological cancer.
Neofytou, Marios S; Tanos, Vasilis; Pattichis, Marios S; Pattichis, Constantinos S; Kyriacou, Efthyvoulos C; Koutsouris, Dimitris D
2007-11-29
In the development of tissue classification methods, classifiers rely on significant differences between texture features extracted from normal and abnormal regions. Yet, significant differences can arise due to variations in the image acquisition method. For endoscopic imaging of the endometrium, we propose a standardized image acquisition protocol to eliminate significant statistical differences due to variations in: (i) the distance from the tissue (panoramic vs close up), (ii) differences in viewing angles and (iii) color correction. We investigate texture feature variability for a variety of targets encountered in clinical endoscopy. All images were captured at clinically optimum illumination and focus using 720 x 576 pixels and 24-bit color for: (i) a variety of testing targets from a color palette with a known color distribution, (ii) different viewing angles, and (iii) two different distances from a calf endometrium and from a chicken cavity. Also, human images from the endometrium were captured and analysed. For texture feature analysis, three different sets were considered: (i) Statistical Features (SF), (ii) Spatial Gray Level Dependence Matrices (SGLDM), and (iii) Gray Level Difference Statistics (GLDS). All images were gamma corrected and the extracted texture feature values were compared against the texture feature values extracted from the uncorrected images. Statistical tests were applied to compare images from different viewing conditions so as to determine any significant differences. For the proposed acquisition procedure, results indicate that there is no significant difference in texture features between the panoramic and close up views and between angles. For a calibrated target image, gamma correction provided an acquired image that was a significantly better approximation to the original target image. In turn, this implies that the texture features extracted from the corrected images provided better approximations to the original images. Within the proposed protocol, for human ROIs, we found a large number of texture features that showed significant differences between normal and abnormal endometrium. This study provides a standardized protocol for avoiding significant texture feature differences that may arise due to variability in the acquisition procedure or the lack of color correction. After applying the protocol, we found that significant differences in texture features will only be due to the fact that the features were extracted from different types of tissue (normal vs abnormal).
Feature extraction for document text using Latent Dirichlet Allocation
NASA Astrophysics Data System (ADS)
Prihatini, P. M.; Suryawan, I. K.; Mandia, IN
2018-01-01
Feature extraction is one of the stages in an information retrieval system, used to extract the unique feature values of a text document. Feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, in this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing the Precision, Recall and F-Measure values of Latent Dirichlet Allocation with those of Term Frequency Inverse Document Frequency KMeans, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency Inverse Document Frequency KMeans method. This shows that the Latent Dirichlet Allocation method is able to extract features from and cluster Indonesian text better than the Term Frequency Inverse Document Frequency KMeans method.
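For readers unfamiliar with LDA-based text features, a minimal sketch follows; scikit-learn's variational LDA stands in for whatever sampler the paper implemented, and the tiny placeholder corpus is not the paper's data.

```python
# Document-topic proportions from LDA used as text feature vectors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["contoh dokumen teks", "dokumen teks yang lain", "teks bahasa indonesia"]
counts = CountVectorizer().fit_transform(docs)       # bag-of-words counts
lda = LatentDirichletAllocation(n_components=5, random_state=0)
features = lda.fit_transform(counts)                 # one topic-distribution row per document
```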
Genetic algorithm for the optimization of features and neural networks in ECG signals classification
NASA Astrophysics Data System (ADS)
Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu
2017-01-01
Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
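To make the WPD feature step concrete, a minimal sketch follows; the db4 wavelet, decomposition level 3, and the mean/standard-deviation/energy statistics are assumptions, since the abstract does not list the exact choices.

```python
# Statistical features from wavelet packet decomposition of one heartbeat.
import numpy as np
import pywt

def wpd_features(beat, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=beat, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = np.asarray(node.data)
        feats += [c.mean(), c.std(), np.sum(c ** 2)]  # per-subband statistics
    return np.array(feats)
```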
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus
2011-11-01
A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces. It is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. The utilized sensing technique does not require a physical or chemical visibility enhancement of the fingerprint residue, so the original trace remains unaltered for further investigations. No dedicated feature extraction and verification techniques have yet been applied to such data. Hence, we see the need for appropriate algorithms suitable to support forensic investigations.
Deep feature extraction and combination for synthetic aperture radar target classification
NASA Astrophysics Data System (ADS)
Amrani, Moussa; Jiang, Feng
2017-10-01
Feature extraction has always been a difficult problem in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). It is very important to select discriminative features to train a classifier, which is a prerequisite. Inspired by the great success of convolutional neural network (CNN), we address the problem of SAR target classification by proposing a feature extraction method, which takes advantage of exploiting the extracted deep features from CNNs on SAR images to introduce more powerful discriminative features and robust representation ability for them. First, the pretrained VGG-S net is fine-tuned on moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after a simple preprocessing is performed, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused by using a traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, K-nearest neighbors algorithm based on LogDet divergence-based metric learning triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.
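A minimal sketch of the fixed-feature-extractor idea follows; torchvision's VGG-16 stands in for the pretrained VGG-S net, the two feature sources are duplicated for brevity, and the paper's discriminant correlation analysis and LogDet metric-learning steps are omitted in favor of plain concatenation and k-NN.

```python
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier

# Pretrained VGG-16 as a stand-in for VGG-S, frozen as a feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
extractor = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())

with torch.no_grad():
    x = torch.rand(4, 3, 224, 224)        # placeholder for preprocessed SAR chips
    feats_a = extractor(x)                # deep features from one source
    feats_b = extractor(x)                # second source (same net here for brevity)
    fused = torch.cat([feats_a, feats_b], dim=1)  # simple concatenation fusion

# Plain k-NN baseline; the LogDet metric learning is not reproduced here.
knn = KNeighborsClassifier(n_neighbors=3).fit(fused.numpy(), [0, 1, 0, 1])
```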
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Gogoi, Usha Rani; Das, Kakali; Ghosh, Anjan Kumar; Bhattacharjee, Debotosh; Majumdar, Gautam
2016-05-01
The non-invasive, painless, radiation-free and cost-effective infrared breast thermography (IBT) makes a significant contribution to improving the survival rate of breast cancer patients by detecting the disease early. This paper presents a set of standard breast thermogram acquisition protocols to improve the potential and accuracy of infrared breast thermograms in early breast cancer detection. By maintaining all these protocols, an infrared breast thermogram acquisition setup has been established at the Regional Cancer Centre (RCC) of Government Medical College (AGMC), Tripura, India. Acquisition of a breast thermogram is followed by its interpretation to identify the presence of any abnormality. However, due to the presence of complex vascular patterns, accurate interpretation of a breast thermogram is a very challenging task. The bilateral symmetry of the thermal patterns in each breast thermogram is quantitatively computed by statistical feature analysis. A series of statistical features are extracted from a set of 20 thermograms of both healthy and unhealthy subjects. Finally, the extracted features are analyzed for breast abnormality detection. The key contributions of this paper are: (a) the design of a standard protocol suite for accurate acquisition of breast thermograms, (b) the creation of a new breast thermogram dataset by maintaining the protocol suite, and (c) statistical analysis of the thermograms for abnormality detection. By doing so, the proposed work can minimize the rate of false findings in breast thermograms and thus increase their utility in early breast cancer detection.
Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki
2015-01-01
This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645
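The phase correlation matching in step (iv) can be illustrated with a short FFT-based sketch; this is generic phase correlation, not the authors' exact implementation.

```python
# Estimate the shift between the left and right feature images from the peak
# of the inverse FFT of the normalized cross-power spectrum.
import numpy as np

def phase_correlation(left, right):
    F1, F2 = np.fft.fft2(left), np.fft.fft2(right)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices above the midpoint to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```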
Stability of deep features across CT scanners and field of view using a physical phantom
NASA Astrophysics Data System (ADS)
Paul, Rahul; Shafiq-ul-Hassan, Muhammad; Moros, Eduardo G.; Gillies, Robert J.; Hall, Lawrence O.; Goldgof, Dmitry B.
2018-02-01
Radiomics is the process of analyzing radiological images by extracting quantitative features for monitoring and diagnosis of various cancers. Analyzing images acquired from different medical centers is confounded by many choices in acquisition and reconstruction parameters and by differences among device manufacturers. Consequently, scanning the same patient or phantom using various acquisition/reconstruction parameters as well as different scanners may result in different feature values. To further evaluate this issue, in this study, CT images from a physical radiomic phantom were used. Recent studies showed that some quantitative features were dependent on voxel size and that this dependency could be reduced or removed by an appropriate normalization factor. Deep features extracted from a convolutional neural network may also provide additional features for image analysis. Using a transfer learning approach, we obtained deep features from three convolutional neural networks pre-trained on color camera images. An examination of the dependency of deep features on image pixel size was then performed. We found that some deep features were pixel size dependent, and to remove this dependency we proposed two effective normalization approaches. To analyze the effects of normalization, a threshold was used based on the calculated standard deviation and the average distance from a best-fit horizontal line among the features' underlying pixel sizes before and after normalization. The inter- and intra-scanner dependency of deep features was also evaluated.
NASA Astrophysics Data System (ADS)
Frikha, Mayssa; Fendri, Emna; Hammami, Mohamed
2017-09-01
Using semantic attributes such as gender, clothes, and accessories to describe people's appearance is an appealing modeling method for video surveillance applications. We proposed a midlevel appearance signature based on extracting a list of nameable semantic attributes describing the body in uncontrolled acquisition conditions. Conventional approaches extract the same set of low-level features to learn the semantic classifiers uniformly. Their critical limitation is the inability to capture the dominant visual characteristics for each trait separately. The proposed approach consists of extracting low-level features in an attribute-adaptive way by automatically selecting the most relevant features for each attribute separately. Furthermore, relying on a small training-dataset would easily lead to poor performance due to the large intraclass and interclass variations. We annotated large scale people images collected from different person reidentification benchmarks covering a large attribute sample and reflecting the challenges of uncontrolled acquisition conditions. These annotations were gathered into an appearance semantic attribute dataset that contains 3590 images annotated with 14 attributes. Various experiments prove that carefully designed features for learning the visual characteristics for an attribute provide an improvement of the correct classification accuracy and a reduction of both spatial and temporal complexities against state-of-the-art approaches.
Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator
NASA Astrophysics Data System (ADS)
Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong
2011-04-01
In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely, voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient property of the electrical signals and further evaluate the bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals such as the fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are, respectively, separated into three phases, namely, envelope rising, stable, and damping phases, to extract the tiny waveform changes. The different waveform features are extracted from each phase of these subband envelopes. The principal components analysis (PCA) method is used for the feature selection in order to remove the redundant information and reduce the dimension of the original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify the complex bond fault pattern. By analyzing experimental data with the proposed feature extraction method and neural network, the results demonstrate the advantages of the proposed feature extraction method and the constructed artificial neural network in detecting and identifying bond quality.
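The first two steps of the feature extraction method can be sketched as follows; the filter order, bandwidth, and harmonic center frequency are assumed values, not the paper's.

```python
# One subband path: band-pass filter around a harmonic of the drive frequency,
# then take the Hilbert envelope (the rising/stable/damping split would follow).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def subband_envelope(sig, fs, f_center, half_bw=2e3):
    b, a = butter(4, [(f_center - half_bw) / (fs / 2),
                      (f_center + half_bw) / (fs / 2)], btype="band")
    sub = filtfilt(b, a, sig)          # zero-phase band-pass filtering
    return np.abs(hilbert(sub))        # instantaneous amplitude envelope
```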
Effects of preprocessing Landsat MSS data on derived features
NASA Technical Reports Server (NTRS)
Parris, T. M.; Cicone, R. C.
1983-01-01
Important to the use of multitemporal Landsat MSS data for earth resources monitoring, such as agricultural inventories, is the ability to minimize the effects of varying atmospheric and satellite viewing conditions while extracting physically meaningful features from the data. In general, approaches to the preprocessing problem have been derived from either physical or statistical models. This paper compares three proposed algorithms: XSTAR haze correction, Color Normalization, and Multiple Acquisition Mean Level Adjustment. These techniques represent physical, statistical, and hybrid physical-statistical models, respectively. The comparisons are made in the context of three feature extraction techniques: the Tasseled Cap, the Cate Color Cube, and Normalized Difference.
Generalized Feature Extraction for Wrist Pulse Analysis: From 1-D Time Series to 2-D Matrix.
Dimin Wang; Zhang, David; Guangming Lu
2017-07-01
Traditional Chinese pulse diagnosis, known as an empirical science, depends on subjective experience, and inconsistent diagnostic results may be obtained by different practitioners. A scientific way of studying the pulse is to analyze objectified wrist pulse waveforms. In recent years, many pulse acquisition platforms have been developed with advances in sensor and computer technology, and pulse diagnosis using pattern recognition theory is attracting increasing attention. Although many papers on pulse feature extraction have been published, they handle the pulse signals as simple 1-D time series and ignore the information within the class. This paper presents a generalized method of pulse feature extraction, extending the feature dimension from a 1-D time series to a 2-D matrix. The conventional wrist pulse features correspond to a particular case of the generalized models. The proposed method is validated through pattern classification on actual pulse records. Both quantitative and qualitative results relative to the 1-D pulse features are given through diabetes diagnosis. The experimental results show that the generalized 2-D matrix feature is effective in extracting both periodic and nonperiodic information and is practical for wrist pulse analysis.
NASA Astrophysics Data System (ADS)
Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael
2018-02-01
Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained via understanding and overcoming the limitations in current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules, such as (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on the entropy textural features of lung nodules. CT images of 19 lung nodules identified from our lung cancer screening program were identified by a CAD tool and contours provided. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction resulted in improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Within different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. Results indicate the need for harmonization of feature calculations and identification of optimum parameters and algorithms in a radiomics study.
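A minimal sketch of one GLCM-based entropy feature and a simple reproducibility measure follows; the (max-min)/mean ratio is an assumption, since the abstract does not define the robustness index.

```python
# GLCM entropy of a 2-D uint8 image, plus a simple variation ratio across
# feature values measured under different acquisition settings.
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy(img8, distance=1, angle=0.0):
    glcm = graycomatrix(img8, [distance], [angle], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

def variation_ratio(values):
    values = np.asarray(values, dtype=float)
    return (values.max() - values.min()) / values.mean()
```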
Heart Sound Biometric System Based on Marginal Spectrum Analysis
Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin
2013-01-01
This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515
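A rough sketch of a Hilbert marginal spectrum follows, assuming the PyEMD package for the empirical mode decomposition step; the paper's exact parameterization is not given in the abstract.

```python
# Marginal spectrum: EMD the signal into IMFs, take instantaneous amplitude
# and frequency of each IMF, and accumulate amplitude over frequency bins.
import numpy as np
from PyEMD import EMD            # assumed dependency (pip package "EMD-signal")
from scipy.signal import hilbert

def marginal_spectrum(sig, fs, n_bins=256):
    imfs = EMD()(sig)
    edges = np.linspace(0, fs / 2, n_bins + 1)
    spectrum = np.zeros(n_bins)
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)[:-1]
        freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        idx = np.clip(np.digitize(freq, edges) - 1, 0, n_bins - 1)
        np.add.at(spectrum, idx, amp)  # sum amplitude into frequency bins
    return edges[:-1], spectrum
```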
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
Reliability of vascular geometry factors derived from clinical MRA
NASA Astrophysics Data System (ADS)
Bijari, Payam B.; Antiga, Luca; Steinman, David A.
2009-02-01
Recent work from our group has demonstrated that the amount of disturbed flow at the carotid bifurcation, believed to be a local risk factor for carotid atherosclerosis, can be predicted from luminal geometric factors. The next step along the way to a large-scale retrospective or prospective imaging study of such local risk factors for atherosclerosis is to investigate whether these geometric features are reproducible and accurate from routine 3D contrast-enhanced magnetic resonance angiography (CEMRA) using a fast and practical method of extraction. Motivated by this fact, we examined the reproducibility of multiple geometric features that are believed important in atherosclerosis risk assessment. We reconstructed three-dimensional carotid bifurcations from 15 clinical study participants who had previously undergone baseline and repeat CEMRA acquisitions. Certain geometric factors were extracted and compared between the baseline and the repeat scan. As the spatial resolution of the CEMRA data was noticeably coarse and anisotropic, we also investigated whether this might affect the measurement of the same geometric risk factors by simulating the CEMRA acquisition for 15 normal carotid bifurcations previously acquired at high resolution. Our results show that the extracted geometric factors are reproducible and faithful, with intra-subject uncertainties well below inter-subject variabilities. More importantly, these geometric risk factors can be extracted consistently and quickly for potential use as disturbed flow predictors.
Feature extraction for change analysis in SAR time series
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2015-10-01
In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day over a defined period. Among remote sensing sensors, radar is especially well suited for applications requiring regularity, since it is independent of most weather and atmospheric influences. Furthermore, regarding image acquisitions, the time of day plays no role due to the independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high resolution radar images capable of supporting the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of detected changes in the time series. This categorization is motivated by the fact that it is a poor statement only to describe where and when a specific area has changed; at least as important is the statement about what has caused the change. The focus is set on the analysis of so-called high activity areas (HAA), representing areas changing at least four times over the investigated period. As a first step for categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistical and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water and unclassified. A specific HAA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments. In combination, surrounding GIS information is also included to verify the CovAmCoh-based context assignment. In this paper, the focus is set on the features extracted for a later change categorization procedure.
Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando
2009-01-01
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. At a first stage a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes or properties useful for matching. In the second step the features are matched based on the application of four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134
Talking to Students: Metadiscourse in Introductory Coursebooks.
ERIC Educational Resources Information Center
Hyland, Ken
1999-01-01
Explores role of college textbooks in students' acquisition of special disciplinary literacy, focusing on use of metadiscourse as manifestation of writer's linguistic and rhetorical presence in a text. Features are compared from 21 textbook extracts in microbiology, marketing, and applied linguistics with similar corpus of research articles,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Y; Wang, J; Wang, C
Purpose: To investigate the sensitivity of classic texture features to variations of MRI acquisition parameters. Methods: This study was performed on the American College of Radiology (ACR) MRI Accreditation Program Phantom. MR imaging was acquired on a GE 750 3T scanner with an XRM gradient, employing T1-weighted images (TR/TE=500/20ms) with the following parameters as the reference standard: number of signal averages (NEX) = 1, matrix size = 256×256, flip angle = 90°, slice thickness = 5mm. The effect of the acquisition parameters on texture features with and without non-uniformity correction was investigated, while all the other parameters were kept at the reference standard. Protocol parameters were set as follows: (a) NEX = 0.5, 2 and 4; (b) phase encoding steps = 128, 160 and 192; (c) matrix size = 128×128, 192×192 and 512×512. 32 classic texture features were generated using the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM) from each image data set. The normalized range ((maximum-minimum)/mean) was calculated to determine variation among the scans with different protocol parameters. Results: For different NEX, 31 out of 32 texture features' ranges are within 10%. For different phase encoding steps, 31 out of 32 texture features' ranges are within 10%. For different acquisition matrix sizes without non-uniformity correction, 14 out of 32 texture features' ranges are within 10%; for different acquisition matrix sizes with non-uniformity correction, 16 out of 32 texture features' ranges are within 10%. Conclusion: Initial results indicated that the texture features whose range is within 10% are less sensitive to variations in T1-weighted MRI acquisition parameters. This might suggest that certain texture features might be more reliable to be used as potential biomarkers in MR quantitative image analysis.
NASA Astrophysics Data System (ADS)
Whitney, Heather M.; Drukker, Karen; Edwards, Alexandra; Papaioannou, John; Giger, Maryellen L.
2018-02-01
Radiomics features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As clinical institutions transition from 1.5 T to 3.0 T magnetic resonance imaging (MRI), it is helpful to identify robust features across these field strengths. In this study, dynamic contrast-enhanced MR images were acquired retrospectively under IRB/HIPAA compliance, yielding 738 cases: 241 and 124 benign lesions imaged at 1.5 T and 3.0 T and 231 and 142 luminal A cancers imaged at 1.5 T and 3.0 T, respectively. Lesions were segmented using a fuzzy C-means method. Extracted radiomic values for each group of lesions by cancer status and field strength of acquisition were compared using a Kolmogorov-Smirnov test for the null hypothesis that two groups being compared came from the same distribution, with p-values being corrected for multiple comparisons by the Holm-Bonferroni method. Two shape features, one texture feature, and three enhancement variance kinetics features were found to be potentially robust. All potentially robust features had areas under the receiver operating characteristic curve (AUC) statistically greater than 0.5 in the task of distinguishing between lesion types (range of means 0.57-0.78). The significant difference in voxel size between field strength of acquisition limits the ability to affirm more features as robust or not robust according to field strength alone, and inhomogeneities in static field strength and radiofrequency field could also have affected the assessment of kinetic curve features as robust or not. Vendor-specific image scaling could have also been a factor. These findings will contribute to the development of radiomic signatures that use features identified as robust across field strength.
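The distribution comparison described above can be sketched as follows (two-sample Kolmogorov-Smirnov test with Holm-Bonferroni correction); the dictionary-of-arrays data layout is an assumption for illustration.

```python
# Screen features for robustness across field strengths: a feature is
# "potentially robust" if the same-distribution null is NOT rejected
# after Holm-Bonferroni correction across all features tested.
from scipy.stats import ks_2samp
from statsmodels.stats.multitest import multipletests

def robust_features(feats_15t, feats_30t, alpha=0.05):
    names = list(feats_15t)
    pvals = [ks_2samp(feats_15t[n], feats_30t[n]).pvalue for n in names]
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="holm")
    return [n for n, r in zip(names, reject) if not r]
```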
Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen
2017-01-01
The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and is a reflection of neuromuscular activity. Therefore, we can control limb auxiliary equipment by utilizing sEMG classification in order to help physically disabled patients operate the mouse. The objective was to develop a new method to extract sEMG generated by finger motion and to apply novel features to classify the sEMG. A window-based data acquisition method was presented to extract signal samples from sEMG electrodes. Afterwards, a two-dimensional matrix image based feature extraction method, which differs from the classical methods based on the time domain or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were separately acquired. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively. In particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method, which can effectively extract the sEMG samples produced by fingers. In addition, unlike the classical methods, the new method enables feature extraction by enlarging the sample signals' energy appropriately. The classical machine learning classifiers all performed well using these features.
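A minimal sketch of the window-to-feature-map idea follows; the window length, map shape, and RBF-SVM settings are illustrative assumptions, not the paper's values.

```python
# Slice the raw sEMG stream into fixed-length windows and reshape each
# window into a 2-D feature map for a classifier.
import numpy as np
from sklearn.svm import SVC

def windows_to_maps(semg, win=1024, shape=(32, 32)):
    n = len(semg) // win
    return np.stack([semg[i * win:(i + 1) * win].reshape(shape)
                     for i in range(n)])

maps = windows_to_maps(np.random.randn(8 * 1024))       # placeholder signal
clf = SVC(kernel="rbf").fit(maps.reshape(len(maps), -1), [0, 1] * 4)
```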
Intelligent services for discovery of complex geospatial features from remote sensing imagery
NASA Astrophysics Data System (ADS)
Yue, Peng; Di, Liping; Wei, Yaxing; Han, Weiguo
2013-09-01
Remote sensing imagery has been commonly used by intelligence analysts to discover geospatial features, including complex ones. The overwhelming volume of routine image acquisition requires automated methods or systems for feature discovery instead of manual image interpretation. The methods of extraction of elementary ground features such as buildings and roads from remote sensing imagery have been studied extensively. The discovery of complex geospatial features, however, is still rather understudied. A complex feature, such as a Weapon of Mass Destruction (WMD) proliferation facility, is spatially composed of elementary features (e.g., buildings for hosting fuel concentration machines, cooling towers, transportation roads, and fences). Such spatial semantics, together with thematic semantics of feature types, can be used to discover complex geospatial features. This paper proposes a workflow-based approach for discovery of complex geospatial features that uses geospatial semantics and services. The elementary features extracted from imagery are archived in distributed Web Feature Services (WFSs) and discoverable from a catalogue service. Using spatial semantics among elementary features and thematic semantics among feature types, workflow-based service chains can be constructed to locate semantically-related complex features in imagery. The workflows are reusable and can provide on-demand discovery of complex features in a distributed environment.
Creation of a virtual cutaneous tissue bank
NASA Astrophysics Data System (ADS)
LaFramboise, William A.; Shah, Sujal; Hoy, R. W.; Letbetter, D.; Petrosko, P.; Vennare, R.; Johnson, Peter C.
2000-04-01
Cellular and non-cellular constituents of skin contain fundamental morphometric features and structural patterns that correlate with tissue function. High-resolution digital image acquisition is performed using an automated system and proprietary software to assemble adjacent images and create a contiguous, lossless, digital representation of individual microscope slide specimens. Serial extraction, evaluation and statistical analysis of cutaneous features are performed utilizing an automated analysis system to derive normal cutaneous parameters comprising essential structural skin components. Automated digital cutaneous analysis allows for fast extraction of microanatomic data with accuracy approximating manual measurement. The process provides rapid assessment of features both within individual specimens and across sample populations. The images, component data, and statistical analysis comprise a bioinformatics database to serve as an architectural blueprint for skin tissue engineering and as a diagnostic standard of comparison for pathologic specimens.
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
Zhang, Yifan; Gao, Xunzhang; Peng, Xuan; Ye, Jiaqi; Li, Xiang
2018-05-16
High Resolution Range Profile (HRRP) recognition has attracted great interest in the field of Radar Automatic Target Recognition (RATR). However, traditional HRRP recognition methods fail to model high-dimensional sequential data efficiently and have poor anti-noise ability. To deal with these problems, a novel stochastic neural network model named Attention-based Recurrent Temporal Restricted Boltzmann Machine (ARTRBM) is proposed in this paper. The RTRBM is utilized to extract discriminative features and the attention mechanism is adopted to select the major features. The RTRBM models high-dimensional HRRP sequences efficiently because it can extract the information of temporal and spatial correlation between adjacent HRRPs. The attention mechanism, used in sequential data recognition tasks including machine translation and relation classification, makes the model pay more attention to the major features for recognition. Therefore, the combination of the RTRBM and the attention mechanism makes our model effective at extracting more internally related features and choosing the important parts of the extracted features. Additionally, the model performs well with noise-corrupted HRRP data. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that our proposed model outperforms other traditional methods, which indicates that ARTRBM extracts, selects, and utilizes the correlation information between adjacent HRRPs effectively and is suitable for high-dimensional or noise-corrupted data.
Improving the performance of univariate control charts for abnormal detection and classification
NASA Astrophysics Data System (ADS)
Yiakopoulos, Christos; Koutsoudaki, Maria; Gryllias, Konstantinos; Antoniadis, Ioannis
2017-03-01
Bearing failures in rotating machinery can cause machine breakdown and economic loss if no effective actions are taken in time. Therefore, it is of prime importance to detect faults accurately, especially at an early stage, to prevent subsequent damage and reduce costly downtime. Machinery fault diagnosis follows a roadmap of data acquisition, feature extraction and diagnostic decision making, in which mechanical vibration fault feature extraction is the foundation and the key to obtaining an accurate diagnostic result. A challenge in this area is the selection of the most sensitive features for various types of fault, especially when the characteristics of failures are difficult to extract. Thus, a plethora of complex data-driven fault diagnosis methods are fed by prominent features, which are extracted and reduced through traditional or modern algorithms. Since most of the available datasets are captured during normal operating conditions, in the last decade a number of novelty detection methods, able to work when only normal data are available, have been developed. In this study, a hybrid method combining univariate control charts and a feature extraction scheme is introduced, focusing on abnormal change detection and classification under the assumption that measurements under normal operating conditions of the machinery are available. The feature extraction method integrates morphological operators and Morlet wavelets. The effectiveness of the proposed methodology is validated on two different experimental cases with bearing faults, demonstrating that the proposed approach can improve the fault detection and classification performance of conventional control charts.
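A minimal Shewhart-style sketch of the control chart component follows; the mean ± 3σ limits estimated from healthy data only reflect the novelty-detection setting, while the paper's morphological/Morlet feature extraction is not reproduced here.

```python
# Univariate control chart on one extracted feature: estimate limits from
# healthy-condition data, then flag points that fall outside them.
import numpy as np

def control_limits(healthy_feature):
    mu, sigma = np.mean(healthy_feature), np.std(healthy_feature)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(feature_stream, lcl, ucl):
    feature_stream = np.asarray(feature_stream)
    return (feature_stream < lcl) | (feature_stream > ucl)
```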
3D palmprint data fast acquisition and recognition
NASA Astrophysics Data System (ADS)
Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua
2014-11-01
This paper presents a fast 3D (three-dimensional) palmprint capturing system and develops an efficient 3D palmprint feature extraction and recognition method. In order to rapidly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to realize synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. A periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out with Gabor filters, competition rules and the mean curvature. Experimental results on capturing 3D palmprints show that the proposed acquisition method can quickly obtain the 3D shape information of a palmprint. Initial experiments on recognition show the proposed method is efficient when using 3D palmprint data.
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
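Otsu's threshold can stand in for the bimodal distribution partitioning step in a short sketch; the program's actual ROI merging and expansion logic is omitted.

```python
# Threshold a grayscale frame at the valley of its bimodal intensity
# distribution, label connected regions, and take region centroids as
# candidate target acquisition points.
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def target_centroids(gray):
    mask = gray > threshold_otsu(gray)
    return [r.centroid for r in regionprops(label(mask))]
```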
Digital PCM bit synchronizer and detector
NASA Astrophysics Data System (ADS)
Moghazy, A. E.; Maral, G.; Blanchard, A.
1980-08-01
A theoretical analysis of a digital self-bit synchronizer and detector is presented and supported by the implementation of an experimental model that utilizes standard TTL logic circuits. The synchronizer is based on the generation of spectral line components by nonlinear filtering of the received bit stream and extraction of the line by a digital phase-locked loop (DPLL). The extracted reference signal drives a digital matched filter (DMF) data detector. This realization features a short acquisition time and an all-digital structure.
The research and application of multi-biometric acquisition embedded system
NASA Astrophysics Data System (ADS)
Deng, Shichao; Liu, Tiegen; Guo, Jingjing; Li, Xiuyan
2009-11-01
Identification technology based on multiple biometrics can greatly improve applicability, reliability and resistance to falsification. This paper presents a multi-biometric system based on an embedded platform, which includes three capture daughter boards used to obtain different biometrics: one each for fingerprint, iris and vein of the back of the hand. An FPGA (Field Programmable Gate Array) is designed as the coprocessor, which configures the three daughter boards on request and provides the data path between the DSP (digital signal processor) and the daughter boards. The DSP is the master processor and its functions include controlling the biometric information acquisition, extracting features as required, and comparing the results with the local database or a data server through network communication. The advantages of this system are that it can acquire three different biometrics in real time and extract complex features flexibly from the raw data of different biometrics according to different purposes and algorithms, while the network interface on the core board offers a solution for large data scales. Because this embedded system has high stability, reliability and flexibility and fits different data scales, it can satisfy the demands of multi-biometric recognition.
Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network
NASA Astrophysics Data System (ADS)
Nasution, T. H.; Andayani, U.
2017-03-01
Roasted coffee beans have characteristic appearances at each roast level; however, some people cannot recognize the roast level. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of collecting the image data through image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The decimal-scaled feature values become the inputs for classification in the backpropagation neural network, which we use to recognize the coffee bean roast levels. The results showed that the proposed method is able to identify the coffee bean roast level with an accuracy of 97.5%.
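The decimal scaling normalization step mentioned above is simple enough to show directly; this is the textbook formulation, not necessarily the paper's exact code.

```python
# Decimal scaling: divide by the smallest power of ten that brings every
# feature magnitude below 1.
import numpy as np

def decimal_scale(features):
    features = np.asarray(features, dtype=float)
    m = np.max(np.abs(features))
    if m == 0:
        return features
    j = int(np.floor(np.log10(m))) + 1   # smallest j with m / 10**j < 1
    return features / (10 ** j)
```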
[Prosody, speech input and language acquisition].
Jungheim, M; Miller, S; Kühn, D; Ptok, M
2014-04-01
In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported, stating that the linguistically reduced CDS could hinder first language acquisition.
Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne
2012-01-01
We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time and cost prohibitive for mapping large spatial extents. The SPOT imagery produced map products that were far more accurate than Landsat and did so at a far lower cost than QuickBird imagery. Consideration of degree of map accuracy required, costs associated with image acquisition, software, operator and computation time, and tradeoffs in the form of spatial extent versus resolution should all be considered when evaluating which combination of imagery and information-extraction method might best serve any given land use mapping project. When resources permit, attaining imagery that supports the highest classification and measurement accuracy possible is recommended.
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording of human motion is mostly done for sports, health, animation film, criminology, and robotics applications. This study combines background subtraction with a backpropagation neural network in order to identify similar hand movements. The acquisition process used an 8 MP camera recording in MP4 format for 48 seconds at 30 frames/s; 1444 frames were extracted from the video for the hand motion identification process. The image processing phases performed are segmentation, feature extraction, and identification. Segmentation uses background subtraction, and the extracted features are basically used to distinguish one object from another. Feature extraction is performed using motion-based morphological analysis of the 7 invariant moments, producing four different motion classes: no object, hands down, hands to the side, and hands up. The identification process recognizes the hand movement using seven inputs. Testing and training with a variety of parameters showed that the architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values through the system implementation into the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process was done to obtain the best results.
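To make the processing chain above concrete, the following Python sketch pairs background subtraction with Hu's seven invariant moments using OpenCV; the file names, threshold, and log-scaling are illustrative assumptions, not the authors' exact implementation.

import cv2
import numpy as np

# Background subtraction: keep pixels that differ strongly from a static background.
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
diff = cv2.absdiff(frame, background)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Hu's seven invariant moments of the segmented silhouette give a
# translation-, scale- and rotation-invariant feature vector.
hu = cv2.HuMoments(cv2.moments(mask)).flatten()

# Log-scale the moments, whose raw values span many orders of magnitude;
# the seven values can then feed a backpropagation network.
features = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(features)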
Automatic Speech Acquisition and Recognition for Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Ye, Sherry
2015-01-01
NASA has a widely recognized but unmet need for novel human-machine interface technologies that can facilitate communication during astronaut extravehicular activities (EVAs), when loud noises and strong reverberations inside spacesuits make communication challenging. WeVoice, Inc., has developed a multichannel signal-processing method for speech acquisition in noisy and reverberant environments that enables automatic speech recognition (ASR) technology inside spacesuits. The technology reduces noise by exploiting differences between the statistical nature of signals (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, ASR accuracy can be improved to the level at which crewmembers will find the speech interface useful. System components and features include beam forming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, and ASR decoding. Arithmetic complexity models were developed and will help designers of real-time ASR systems select proper tasks when confronted with constraints in computational resources. In Phase I of the project, WeVoice validated the technology. The company further refined the technology in Phase II and developed a prototype for testing and use by suited astronauts.
A DFT-Based Method of Feature Extraction for Palmprint Recognition
NASA Astrophysics Data System (ADS)
Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru
Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
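As a rough sketch of the frequency-domain feature extraction described here, the following Python fragment samples a fixed grid of points from the 2-D Fourier magnitude spectrum and applies the Karhunen-Loeve Transform; the sampling grid, image sizes, and component count are illustrative assumptions, not the GA-optimized point set of the paper.

import numpy as np

def dft_features(img, points):
    # Log-magnitude of the centered 2-D Fourier spectrum, sampled at fixed points.
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    return np.array([spectrum[r, c] for r, c in points])

rng = np.random.default_rng(0)
palms = rng.random((100, 128, 128))        # stand-ins for palmprint images
points = [(64 + dr, 64 + dc) for dr in range(-8, 9, 2) for dc in range(-8, 9, 2)]
X = np.stack([dft_features(p, points) for p in palms])

# Karhunen-Loeve Transform: project onto the leading eigenvectors of the
# feature covariance matrix to reduce dimensionality.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
features_klt = Xc @ eigvecs[:, ::-1][:, :20]   # top-20 components (illustrative)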
ERIC Educational Resources Information Center
Lin, Huifen
2015-01-01
This meta-analysis reports the results of a systematic synthesis of primary studies on the effectiveness of computer-mediated communication (CMC) in second language acquisition (SLA) for the period 2000-2012. By extracting information on 21 features from each primary study, this meta-analysis intends to summarize the CMC research literature for…
NASA Astrophysics Data System (ADS)
Alshehhi, Rasha; Marpu, Prashanth Reddy
2017-04-01
Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
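A minimal sketch of the Gabor-based feature-enhancement step named in point 1 above, assuming scikit-image; the filter frequency, orientations, and input image are illustrative stand-ins.

import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
img = rng.random((128, 128))               # stand-in for a high-resolution scene

# A small bank of Gabor filters accentuates elongated, road-like structures.
responses = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    real, _ = gabor(img, frequency=0.2, theta=theta)
    responses.append(real)

# The maximum response over orientations gives an orientation-invariant road cue.
road_cue = np.max(np.stack(responses), axis=0)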
Online 3D Ear Recognition by Combining Global and Local Features.
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.
Online 3D Ear Recognition by Combining Global and Local Features
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%. PMID:27935955
Detection of blur artifacts in histopathological whole-slide images of endomyocardial biopsies.
Hang Wu; Phan, John H; Bhatia, Ajay K; Cundiff, Caitlin A; Shehata, Bahig M; Wang, May D
2015-01-01
Histopathological whole-slide images (WSIs) have emerged as an objective and quantitative means for image-based disease diagnosis. However, WSIs may contain acquisition artifacts that affect downstream image feature extraction and quantitative disease diagnosis. We develop a method for detecting blur artifacts in WSIs using distributions of local blur metrics. As features, these distributions enable accurate classification of WSI regions as sharp or blurry. We evaluate our method using over 1000 portions of an endomyocardial biopsy (EMB) WSI. Results indicate that local blur metrics accurately detect blurry image regions.
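One plausible local blur metric of the kind referred to above is the variance of the Laplacian computed over tiles; the following sketch assumes scipy, and the tile size and metric choice are assumptions since the abstract does not specify them.

import numpy as np
from scipy import ndimage

def local_blur_map(img, tile=64):
    # Sharp regions have strong second derivatives, so high Laplacian variance.
    lap = ndimage.laplace(img.astype(float))
    h, w = img.shape
    return np.array([[lap[r:r + tile, c:c + tile].var()
                      for c in range(0, w - tile + 1, tile)]
                     for r in range(0, h - tile + 1, tile)])

rng = np.random.default_rng(0)
wsi_region = rng.random((512, 512))        # stand-in for a WSI region
print(local_blur_map(wsi_region).round(3))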
VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter
NASA Astrophysics Data System (ADS)
Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration
2012-02-01
A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable for other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA enables the construction of an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggering, the timing performance, and event correlations.
Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan
2013-02-01
The aim of this paper is to address the development of computer-assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection, and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker-controlled watershed transformation, and subsequently a total of ninety-six features describing the shape-size and texture of erythrocytes are extracted with respect to parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating six classes. Here a feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while the SVM provides its highest accuracy, i.e., 83.5%, with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared for malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.
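A minimal sketch of the feature selection-cum-classification idea (F-statistic ranking feeding an SVM), assuming scikit-learn; the data are random stand-ins for the ninety-six erythrocyte features, and k = 9 mirrors the SVM result quoted above.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 96))     # stand-in: 96 shape-size/texture features per cell
y = rng.integers(0, 6, 200)   # stand-in: six malaria classes

# Rank features by F-statistic, keep the 9 most discriminating, classify with an SVM.
model = make_pipeline(SelectKBest(f_classif, k=9), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())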
Batalle, Dafnis; Muñoz-Moreno, Emma; Figueras, Francesc; Bargallo, Nuria; Eixarch, Elisenda; Gratacos, Eduard
2013-12-01
Obtaining individual biomarkers for the prediction of altered neurological outcome is a challenge of modern medicine and neuroscience. Connectomics based on magnetic resonance imaging (MRI) stands as a good candidate to exhaustively extract information from MRI by integrating the information obtained in a few network features that can be used as individual biomarkers of neurological outcome. However, this approach typically requires the use of diffusion and/or functional MRI to extract individual brain networks, which require long acquisition times and present an extreme sensitivity to motion artifacts, critical problems when scanning fetuses and infants. Extraction of individual networks based on morphological similarity from gray matter is a new approach that benefits from the power of graph theory analysis to describe gray matter morphology as a large-scale morphological network from a typical clinical anatomic acquisition such as T1-weighted MRI. In the present paper we propose a methodology to normalize these large-scale morphological networks to a brain network with standardized size based on a parcellation scheme. The proposed methodology was applied to reconstruct individual brain networks of 63 one-year-old infants, 41 infants with intrauterine growth restriction (IUGR) and 22 controls, showing altered network features in the IUGR group and their association with neurodevelopmental outcome at two years of age, assessed with the Bayley Scales of Infant and Toddler Development, third edition, by means of ordinal regression analysis of the network features. Although it must be more widely assessed, this methodology stands as a good candidate for the development of biomarkers of altered neurodevelopment in the pediatric population. © 2013 Elsevier Inc. All rights reserved.
Reproducibility of radiomics for deciphering tumor phenotype with imaging
NASA Astrophysics Data System (ADS)
Zhao, Binsheng; Tan, Yongqiang; Tsai, Wei-Yann; Qi, Jing; Xie, Chuanmiao; Lu, Lin; Schwartz, Lawrence H.
2016-03-01
Radiomics (radiogenomics) characterizes tumor phenotypes based on quantitative image features derived from routine radiologic imaging to improve cancer diagnosis, prognosis, prediction and response to therapy. Although radiomic features must be reproducible to qualify as biomarkers for clinical care, little is known about how routine imaging acquisition techniques/parameters affect reproducibility. To begin to fill this knowledge gap, we assessed the reproducibility of a comprehensive, commonly-used set of radiomic features using a unique, same-day repeat computed tomography data set from lung cancer patients. Each scan was reconstructed at 6 imaging settings, varying slice thicknesses (1.25 mm, 2.5 mm and 5 mm) and reconstruction algorithms (sharp, smooth). Reproducibility was assessed using the repeat scans reconstructed at identical imaging setting (6 settings in total). In separate analyses, we explored differences in radiomic features due to different imaging parameters by assessing the agreement of these radiomic features extracted from the repeat scans reconstructed at the same slice thickness but different algorithms (3 settings in total). Our data suggest that radiomic features are reproducible over a wide range of imaging settings. However, smooth and sharp reconstruction algorithms should not be used interchangeably. These findings will raise awareness of the importance of properly setting imaging acquisition parameters in radiomics/radiogenomics research.
Chen, Li; Mossa-Basha, Mahmud; Balu, Niranjan; Canton, Gador; Sun, Jie; Pimentel, Kristi; Hatsukami, Thomas S; Hwang, Jenq-Neng; Yuan, Chun
2018-06-01
To develop a quantitative intracranial artery measurement technique to extract comprehensive artery features from time-of-flight MR angiography (MRA). By semiautomatically tracing arteries based on an open-curve active contour model in a graphical user interface, 12 basic morphometric features and 16 basic intensity features for each artery were identified. Arteries were then classified as one of 24 types using prediction from a probability model. Based on the anatomical structures, features were integrated within 34 vascular groups for regional features of vascular trees. Eight 3D MRA acquisitions with intracranial atherosclerosis were assessed to validate this technique. Arterial tracings were validated by an experienced neuroradiologist who checked agreement at bifurcation and stenosis locations. This technique achieved 94% sensitivity and an 85% positive predictive value (PPV) for bifurcations, and 85% sensitivity and PPV for stenosis. Up to 1,456 features, such as length, volume, and averaged signal intensity for each artery as well as for each vascular group, in each of the MRA images, could be extracted to comprehensively reflect the characteristics, distribution, and connectivity of arteries. Length of the M1 segment of the middle cerebral artery extracted by this technique was compared with reviewer-measured results, and the intraclass correlation coefficient was 0.97. A semiautomated quantitative method to trace, label, and measure intracranial arteries from 3D MRA was developed and validated. This technique can be used to facilitate quantitative intracranial vascular research, such as studying cerebrovascular adaptation to aging and disease conditions. Magn Reson Med 79:3229-3238, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Caggiano, Alessandra
2018-03-09
Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interfaces, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.
2018-01-01
Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interfaces, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values. PMID:29522443
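The PCA step described in the two records above can be sketched as follows with scikit-learn; the feature matrix, wear values, and network size are illustrative stand-ins, not the authors' data or architecture.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((60, 30))    # stand-in: d = 30 features from force/AE/vibration signals
vb_max = rng.random(60)     # stand-in: measured flank wear per cutting pass

# Project the d-dimensional sensorial features onto k = 2 principal component scores.
scores = PCA(n_components=2).fit_transform(X)

# Feed the scores to a small neural network to predict tool flank wear.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(scores, vb_max)
print(net.predict(scores[:5]))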
NASA Astrophysics Data System (ADS)
Zheng, Yuese; Solomon, Justin; Choudhury, Kingshuk; Marin, Daniele; Samei, Ehsan
2017-03-01
Texture analysis for lung lesions is sensitive to changing imaging conditions, but these effects are not well understood, in part due to a lack of ground-truth phantoms with realistic textures. The purpose of this study was to explore the accuracy and variability of texture features across imaging conditions by comparing imaged texture features to voxel-based 3D-printed textured lesions for which the true values are known. The seven features of interest were based on the Grey Level Co-occurrence Matrix (GLCM). The lesion phantoms were designed with three shapes (spherical, lobulated, and spiculated), two textures (homogeneous and heterogeneous), and two sizes (diameter < 1.5 cm and 1.5 cm < diameter < 3 cm), resulting in 24 lesions (with a second replica of each). The lesions were inserted into an anthropomorphic thorax phantom (Multipurpose Chest Phantom N1, Kyoto Kagaku) and imaged using a commercial CT system (GE Revolution) at three CTDI levels (0.67, 1.42, and 5.80 mGy), three reconstruction algorithms (FBP, IR-2, IR-4), three reconstruction kernel types (standard, soft, edge), and two slice thicknesses (0.6 mm and 5 mm). Another repeat scan was performed. Texture features from these images were extracted and compared to the ground-truth feature values by percent relative error. The variability across imaging conditions was calculated as the standard deviation across a given imaging condition for all heterogeneous lesions. The results indicated that the acquisition method has a significant influence on the accuracy and variability of extracted features and, as such, feature quantities are highly susceptible to imaging parameter choices. The most influential parameters were slice thickness and reconstruction kernel. A thin slice thickness and an edge reconstruction kernel overall produced more accurate and more repeatable results. Some features (e.g., Contrast) were more accurately quantified under conditions that render higher spatial frequencies (e.g., thinner slice thickness and sharp kernels), while others (e.g., Homogeneity) showed more accurate quantification under conditions that render smoother images (e.g., higher dose and smoother kernels). Care should be exercised in relating texture features between cases with varied acquisition protocols, with the need for cross-calibration depending on the feature of interest.
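For reference, GLCM features of the kind evaluated above can be computed as in the following sketch, assuming a recent scikit-image (where the functions are spelled graycomatrix/graycoprops); the ROI, distances, and angles are illustrative stand-ins.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in lesion ROI

# Grey Level Co-occurrence Matrix at distance 1 for two orientations.
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# Scalar texture features derived from the GLCM.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())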
Diagnosis of combined faults in Rotary Machinery by Non-Naive Bayesian approach
NASA Astrophysics Data System (ADS)
Asr, Mahsa Yazdanian; Ettefagh, Mir Mohammad; Hassannejad, Reza; Razavi, Seyed Naser
2017-02-01
When combined faults happen in different parts of rotating machines, their features are profoundly dependent. Experts are completely familiar with the characteristics of individual faults, and enough data are available for single faults, but the problem arises when faults are combined and the separation of characteristics becomes complex. Therefore, experts cannot state exact information about the symptoms of a combined fault and its quality. In this paper, a novel method is proposed to overcome this drawback. The core idea of the method is to declare a combined fault without using combined-fault features as the training data set; only individual fault features are applied in the training step. For this purpose, after data acquisition and resampling of the obtained vibration signals, Empirical Mode Decomposition (EMD) is utilized to decompose the multi-component signals into Intrinsic Mode Functions (IMFs). Using the correlation coefficient, proper IMFs for feature extraction are selected. In the feature extraction step, the Shannon energy entropy of the IMFs is extracted as well as statistical features. It is obvious that most of the extracted features are strongly dependent. To consider this matter, a Non-Naive Bayesian Classifier (NNBC) is appointed, which relaxes the fundamental assumption of Naive Bayes, i.e., the independence among features. To demonstrate the superiority of NNBC, counterpart methods, including the normal Naive Bayesian classifier, the kernel Naive Bayesian classifier, and back-propagation neural networks, were applied and the classification results compared. Experimental vibration signals, collected from an automobile gearbox, were used to verify the effectiveness of the proposed method. During the classification process, only the features related individually to the healthy state, bearing failure, and gear failures were assigned for training the classifier, while combined fault features (combined gear and bearing failures) were examined as test data. The achieved probabilities for the test data show that the combined fault can be identified with a high success rate.
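The decomposition-and-feature chain can be sketched as follows, assuming the third-party PyEMD package; the synthetic signal, correlation threshold, and entropy definition are illustrative assumptions rather than the paper's exact settings.

import numpy as np
from PyEMD import EMD

t = np.linspace(0, 1, 2048)
signal = (np.sin(80 * 2 * np.pi * t) + 0.5 * np.sin(220 * 2 * np.pi * t)
          + 0.1 * np.random.default_rng(0).standard_normal(t.size))

# Empirical Mode Decomposition into Intrinsic Mode Functions.
imfs = EMD()(signal)

# Keep IMFs that correlate strongly with the raw signal.
keep = [imf for imf in imfs if abs(np.corrcoef(imf, signal)[0, 1]) > 0.3]

def shannon_energy_entropy(x):
    e = x ** 2 / np.sum(x ** 2)   # normalized energy distribution
    e = e[e > 0]
    return -np.sum(e * np.log(e))

features = [shannon_energy_entropy(imf) for imf in keep]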
Tropical Timber Identification using Backpropagation Neural Network
NASA Astrophysics Data System (ADS)
Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.
2017-01-01
Each and every type of wood has different characteristics. Identifying the type of wood properly is important, especially for industries that need to know the type of timber specifically. However, identification requires expertise, and only a limited number of experts is available. In addition, manual identification, even by experts, is rather inefficient because it requires a lot of time and carries the possibility of human error. To overcome these problems, a digital-image-based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
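Decimal scaling normalization, mentioned in the final stage above, divides each feature by the smallest power of ten that brings its magnitude below 1; a short sketch with stand-in GLCM feature values:

import numpy as np

def decimal_scaling(X):
    X = np.asarray(X, dtype=float)
    # j is the smallest integer with max(|x|) / 10^j < 1, per feature column.
    j = np.ceil(np.log10(np.max(np.abs(X), axis=0) + 1e-12))
    return X / (10.0 ** j)

glcm_features = np.array([[152.3, 0.87], [98.1, 0.91], [201.7, 0.79]])  # stand-ins
print(decimal_scaling(glcm_features))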
Pavement crack detection combining non-negative feature with fast LoG in complex scene
NASA Astrophysics Data System (ADS)
Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu
2015-12-01
Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Due to these unfavorable factors, existing crack detection methods find it difficult to distinguish cracks from the background correctly. How to extract crack information effectively is the key problem for road crack detection systems. To solve this problem, a novel method for pavement crack detection based on combining a non-negative feature with a fast LoG is proposed. The two key novelties and benefits of this new approach are that 1) image pixel gray-value compensation is used to acquire a uniform image, and 2) the non-negative feature is combined with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
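As a sketch of the LoG part of the approach, scipy's gaussian_laplace provides a Laplacian-of-Gaussian response; the image, scale, and threshold below are illustrative, and neither the paper's fast-LoG implementation nor its non-negative feature is reproduced here.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
pavement = rng.random((256, 256))          # stand-in pavement image

# Laplacian of Gaussian at scale sigma highlights thin, dark crack structures.
log = ndimage.gaussian_laplace(pavement, sigma=2.0)

# Strong responses (three standard deviations above the mean) mark candidate pixels.
crack_mask = log > log.mean() + 3 * log.std()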
NASA Astrophysics Data System (ADS)
Zhang, T.; Lei, B.; Hu, Y.; Liu, K.; Gan, Y.
2018-04-01
Optical remote sensing images have been widely used in feature interpretation and geo-information extraction. All fundamental applications of optical remote sensing are greatly influenced by cloud coverage. Generally, the availability of cloudless images depends on the meteorological conditions of a given area. In this study, the cloud total amount (CTA) products of the Fengyun (FY) satellite were introduced to explore the meteorological changes over a year across China. The cloud information in the CTA products was first validated using ZY-3 satellite images. CTA products from 2006 to 2017 were used to obtain relatively reliable results. The window period for cloudless image acquisition in different areas of China was then determined. This research provides a feasible way to determine the cloudless image acquisition window using meteorological observations.
Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Celaya-Padilla, José M; Galván-Tejada, Jorge I; Treviño, Victor; Tamez-Peña, Jose
2014-10-01
Early diagnoses of Alzheimer's disease (AD) would confer many benefits. Several biomarkers have been proposed to achieve such a task, where features extracted from magnetic resonance imaging (MRI) have played an important role. However, studies have focused exclusively on morphological characteristics. This study aims to determine whether features relating to the signal and texture of the image could predict mild cognitive impairment (MCI) to AD progression. Clinical, biological, and positron emission tomography information and MRI images of 62 subjects from the AD neuroimaging initiative were used in this study, extracting 4150 features from each MRI. Within this multimodal database, a feature selection algorithm was used to obtain an accurate and small logistic regression model, generated by a methodology that yielded a mean blind test accuracy of 0.79. This model included six features, five of them obtained from the MRI images, and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index. The groups were statistically different (p-value = 2.04e−11). These results demonstrated that MRI features related to both signal and texture add MCI to AD predictive power, and supported the ongoing notion that multimodal biomarkers outperform single-modality ones.
Reuzé, Sylvain; Orlhac, Fanny; Chargari, Cyrus; Nioche, Christophe; Limkin, Elaine; Riet, François; Escande, Alexandre; Haie-Meder, Christine; Dercle, Laurent; Gouy, Sébastien; Buvat, Irène; Deutsch, Eric; Robert, Charlotte
2017-06-27
To identify an imaging signature predicting local recurrence for locally advanced cervical cancer (LACC) treated by chemoradiation and brachytherapy from baseline 18F-FDG PET images, and to evaluate the possibility of gathering images from two different PET scanners in a radiomic study. 118 patients were included retrospectively. Two groups (G1, G2) were defined according to the PET scanner used for image acquisition. Eleven radiomic features were extracted from delineated cervical tumors to evaluate: (i) the predictive value of features for local recurrence of LACC, (ii) their reproducibility as a function of the scanner within a hepatic reference volume, (iii) the impact of voxel size on feature values. Eight features were statistically significant predictors of local recurrence in G1 (p < 0.05). The multivariate signature trained in G2 was validated in G1 (AUC=0.76, p<0.001) and identified local recurrence more accurately than SUVmax (p=0.022). Four features were significantly different between G1 and G2 in the liver. Spatial resampling was not sufficient to explain the stratification effect. This study showed that radiomic features could predict local recurrence of LACC better than SUVmax. Further investigation is needed before applying a model designed using data from one PET scanner to another.
Near-infrared image formation and processing for the extraction of hand veins
NASA Astrophysics Data System (ADS)
Bouzida, Nabila; Hakim Bendada, Abdel; Maldague, Xavier P.
2010-10-01
The main objective of this work is to extract the hand vein network using a non-invasive technique in the near-infrared region (NIR). The visualization of the veins is based on a relevant feature of the blood in relation to certain wavelengths of the electromagnetic spectrum. In the present paper, we first introduce image formation in the NIR spectral band. Then, the acquisition system is presented, as well as the method used for image processing to extract the vein signature. Extraction of this pattern on the finger, on the wrist, and on the dorsal hand is achieved after exposing the hand to optical stimulation by reflection or transmission of light. We present meaningful results of the extracted vein pattern, demonstrating the utility of the method for clinical applications such as the diagnosis of vein disease and of primary varicose veins, as well as for applications in vein biometrics.
Chen, Jia-Mei; Li, Yan; Xu, Jun; Gong, Lei; Wang, Lin-Wei; Liu, Wen-Lou; Liu, Juan
2017-03-01
With the advance of digital pathology, image analysis has begun to show its advantages in information analysis of hematoxylin and eosin histopathology images. Generally, histological features in hematoxylin and eosin images are measured to evaluate tumor grade and prognosis for breast cancer. This review summarized recent works in image analysis of hematoxylin and eosin histopathology images for breast cancer prognosis. First, prognostic factors for breast cancer based on hematoxylin and eosin histopathology images were summarized. Then, usual procedures of image analysis for breast cancer prognosis were systematically reviewed, including image acquisition, image preprocessing, image detection and segmentation, and feature extraction. Finally, the prognostic value of image features and image feature-based prognostic models was evaluated. Moreover, we discussed the issues of current analysis, and some directions for future research.
NASA Astrophysics Data System (ADS)
Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-10-01
Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work, high financial requirements, and the influence of weather conditions and topographical cover, which can be overcome by means of integrated airborne LiDAR and very high resolution digital image datasets. This study presented a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on spectral properties of digital images using shadow index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and LiDAR point cloud datasets. The performance of the developed algorithms shows promising results as an automated and cost-effective approach to estimating and delineating 3D information about urban trees. The research also proved that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
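For reference, the NDVI cue used in the vegetation-detection step above is a one-line computation; the band arrays and threshold below are illustrative stand-ins.

import numpy as np

rng = np.random.default_rng(0)
nir = rng.random((100, 100))    # stand-in near-infrared band
red = rng.random((100, 100))    # stand-in red band

# NDVI = (NIR - red) / (NIR + red); high values indicate vegetation.
ndvi = (nir - red) / (nir + red + 1e-12)
vegetation_mask = ndvi > 0.3    # illustrative threshold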
Development of a diffraction imaging flow cytometer
Jacobs, Kenneth M.; Lu, Jun Q.
2013-01-01
Diffraction images record angle-resolved distribution of scattered light from a particle excited by coherent light and can correlate highly with the 3D morphology of a particle. We present a jet-in-fluid design of flow chamber for acquisition of clear diffraction images in a laminar flow. Diffraction images of polystyrene spheres of different diameters were acquired and found to correlate highly with the calculated ones based on the Mie theory. Fast Fourier transform analysis indicated that the measured images can be used to extract sphere diameter values. These results demonstrate the significant potentials of high-throughput diffraction imaging flow cytometry for extracting 3D morphological features of cells. PMID:19794790
Lingua, Andrea; Marenchino, Davide; Nex, Francesco
2009-01-01
In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
Multi-view line-scan inspection system using planar mirrors
NASA Astrophysics Data System (ADS)
Holländer, Bransilav; Štolc, Svorad; Huber-Mörk, Reinhold
2013-04-01
We demonstrate the design, setup, and results for a line-scan stereo image acquisition system using a single area-scan sensor, a single lens, and two planar mirrors attached to the acquisition device. The acquired object moves relative to the acquisition device and is observed under three different angles at the same time. Depending on the specific configuration it is possible to observe the object under a straight view (i.e., looking along the optical axis) and two skewed views. The relative motion between the object and the acquisition device automatically fulfills the epipolar constraint in stereo vision. The choice of lines to be extracted from the CMOS sensor depends on various factors such as the number, position and size of the mirrors, the optical and sensor configuration, or other application-specific parameters like the desired depth resolution. The acquisition setup presented in this paper is suitable for the inspection of printed matter, small parts, or security features such as optically variable devices and holograms. The image processing pipeline applied to the extracted sensor lines is explained in detail. The effective depth resolution achieved by the presented system, assembled from only off-the-shelf components, is approximately equal to the spatial resolution and can be smoothly controlled by changing the positions and angles of the mirrors. The actual performance of the device is demonstrated on a 3D-printed ground-truth object as well as two real-world examples: (i) the EUR-100 banknote, a high-quality printed matter, and (ii) the hologram on the EUR-50 banknote, an optically variable device.
Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Celaya-Padilla, José M.; Galván-Tejada, Jorge I.; Treviño, Victor; Tamez-Peña, Jose
2014-01-01
Abstract. Early diagnoses of Alzheimer’s disease (AD) would confer many benefits. Several biomarkers have been proposed to achieve such a task, where features extracted from magnetic resonance imaging (MRI) have played an important role. However, studies have focused exclusively on morphological characteristics. This study aims to determine whether features relating to the signal and texture of the image could predict mild cognitive impairment (MCI) to AD progression. Clinical, biological, and positron emission tomography information and MRI images of 62 subjects from the AD neuroimaging initiative were used in this study, extracting 4150 features from each MRI. Within this multimodal database, a feature selection algorithm was used to obtain an accurate and small logistic regression model, generated by a methodology that yielded a mean blind test accuracy of 0.79. This model included six features, five of them obtained from the MRI images, and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index. The groups were statistically different (p-value=2.04e−11). These results demonstrated that MRI features related to both signal and texture add MCI to AD predictive power, and supported the ongoing notion that multimodal biomarkers outperform single-modality ones. PMID:26158047
Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro
NASA Astrophysics Data System (ADS)
Fernandez, Sim Joseph; Milano, Alan
2016-07-01
Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project utilizes remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imageries. A free mapping software named Google Earth Pro (GEP) is used to load these satellite imageries as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time, and budget. Accuracy-wise, geotagging by GEP depends on either the satellite imageries or the half-meter-resolution orthophotograph images obtained during LiDAR acquisition, and not on the three-meter-accuracy GPS. The attributed building features are overlain on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imageries may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imageries.
NASA Astrophysics Data System (ADS)
Liu, Jie; Hu, Youmin; Wang, Yan; Wu, Bo; Fan, Jikai; Hu, Zhongxu
2018-05-01
The diagnosis of complicated fault severity problems in rotating machinery systems is an important issue affecting the productivity and quality of manufacturing processes and industrial applications. However, it usually suffers from several deficiencies. (1) A considerable degree of prior knowledge and expertise is required not only to extract and select specific features from raw sensor signals, but also to choose a suitable fusion of sensor information. (2) Traditional artificial neural networks with shallow architectures are usually adopted, and they have a limited ability to learn complex and variable operating conditions. In multi-sensor-based diagnosis applications in particular, massive high-dimensional and high-volume raw sensor signals need to be processed. In this paper, an integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes. First, traditional statistics and energy-spectrum features are extracted from multiple sensors with multiple channels and combined. Then, a fused feature vector is constructed from all of the acquisition channels. Further, deep feature learning with stacked auto-encoders is used to obtain the deep features. Finally, the traditional softmax model is applied to identify the fault severity. The effectiveness of the proposed IMSFDFL approach is primarily verified on a one-stage gearbox experimental platform that uses several accelerometers under different operating conditions. This approach can identify fault severity more effectively than traditional approaches.
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao
2016-07-15
We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is featured for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
Ground settlement monitoring based on temporarily coherent points between two SAR acquisitions
Zhang, L.; Ding, X.; Lu, Z.
2011-01-01
An InSAR analysis approach for identifying and extracting the temporarily coherent points (TCP) that exist between two SAR acquisitions and for determining motions of the TCP is presented for applications such as ground settlement monitoring. TCP are identified based on the spatial characteristics of the range and azimuth offsets of coherent radar scatterers. A method for coregistering TCP based on the offsets of TCP is given to reduce the coregistration errors at TCP. An improved phase unwrapping method based on the minimum cost flow (MCF) algorithm and local Delaunay triangulation is also proposed for sparse TCP data. The proposed algorithms are validated using a test site in Hong Kong. The test results show that the algorithms work satisfactorily for various ground features.
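The sparse-network construction underlying the proposed phase unwrapping can be illustrated with scipy as follows; the TCP coordinates are synthetic stand-ins, and only the local Delaunay triangulation (not the MCF solver) is shown.

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
tcp_xy = rng.random((50, 2)) * 1000.0   # stand-in TCP positions in metres

# A Delaunay triangulation connects neighbouring TCP; each unique edge (arc)
# would carry a wrapped phase difference for the MCF unwrapping step.
tri = Delaunay(tcp_xy)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))
print(len(edges), "arcs between", len(tcp_xy), "TCP")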
Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix
NASA Astrophysics Data System (ADS)
Lange, Holger
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.
Brain computer interfaces, a review.
Nicolas-Alonso, Luis Fernando; Gomez-Gil, Jaime
2012-01-01
A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematical algorithms used in the feature extraction and classification steps, which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.
Design and implementation of a contactless multiple hand feature acquisition system
NASA Astrophysics Data System (ADS)
Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David
2012-06-01
In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner; that is, users need not touch any part of the device during capture. Palmprint is imaged under visible illumination while palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. The capture is controlled by computer and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted-sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as the state of the art, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is competent for contactless multimodal hand-based biometrics.
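Weighted-sum score fusion of the kind described above can be sketched as follows; the per-modality scores, weights, and acceptance threshold are illustrative assumptions, not the system's trained values.

import numpy as np

def min_max(s):
    # Normalize match scores to a common [0, 1] range before fusion.
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

scores = {                       # stand-in match scores for three attempts
    "palmprint":   np.array([0.91, 0.40, 0.35]),
    "palm_vein":   np.array([0.80, 0.55, 0.30]),
    "dorsal_vein": np.array([0.85, 0.45, 0.25]),
}
weights = {"palmprint": 0.5, "palm_vein": 0.3, "dorsal_vein": 0.2}

fused = sum(w * min_max(scores[m]) for m, w in weights.items())
decision = fused > 0.6           # illustrative acceptance threshold
print(fused, decision)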
Classification and data acquisition with incomplete data
NASA Astrophysics Data System (ADS)
Williams, David P.
In remote-sensing applications, incomplete data can result when only a subset of sensors (e.g., radar, infrared, acoustic) are deployed at certain regions. The limitations of single sensor systems have spurred interest in employing multiple sensor modalities simultaneously. For example, in land mine detection tasks, different sensor modalities are better-suited to capture different aspects of the underlying physics of the mines. Synthetic aperture radar sensors may be better at detecting surface mines, while infrared sensors may be better at detecting buried mines. By employing multiple sensor modalities to address the detection task, the strengths of the disparate sensors can be exploited in a synergistic manner to improve performance beyond that which would be achievable with either single sensor alone. When multi-sensor approaches are employed, however, incomplete data can be manifested. If each sensor is located on a separate platform (e.g., aircraft), each sensor may interrogate---and hence collect data over---only partially overlapping areas of land. As a result, some data points may be characterized by data (i.e., features) from only a subset of the possible sensors employed in the task. Equivalently, this scenario implies that some data points will be missing features. Increasing focus in the future on using---and fusing data from---multiple sensors will make such incomplete-data problems commonplace. In many applications involving incomplete data, it is possible to acquire the missing data at a cost. In multi-sensor remote-sensing applications, data is acquired by deploying sensors to data points. Acquiring data is usually an expensive, time-consuming task, a fact that necessitates an intelligent data acquisition process. Incomplete data is not limited to remote-sensing applications, but rather, can arise in virtually any data set. In this dissertation, we address the general problem of classification when faced with incomplete data. We also address the closely related problem of active data acquisition, which develops a strategy to acquire missing features and labels that will most benefit the classification task. We first address the general problem of classification with incomplete data, maintaining the view that all data (i.e., information) is valuable. We employ a logistic regression framework within which we formulate a supervised classification algorithm for incomplete data. This principled, yet flexible, framework permits several interesting extensions that allow all available data to be utilized. One extension incorporates labeling error, which permits the usage of potentially imperfectly labeled data in learning a classifier. A second major extension converts the proposed algorithm to a semi-supervised approach by utilizing unlabeled data via graph-based regularization. Finally, the classification algorithm is extended to the case in which (image) data---from which features are extracted---are available from multiple resolutions. Taken together, this family of incomplete-data classification algorithms exploits all available data in a principled manner by avoiding explicit imputation. Instead, missing data is integrated out analytically with the aid of an estimated conditional density function (conditioned on the observed features). This feat is accomplished by invoking only mild assumptions. We also address the problem of active data acquisition by determining which missing data should be acquired to most improve performance.
Specifically, we examine this data acquisition task when the data to be acquired can be either labels or features. The proposed approach is based on a criterion that accounts for the expected benefit of the acquisition. This approach, which is applicable for any general missing data problem, exploits the incomplete-data classification framework introduced in the first part of this dissertation. This data acquisition approach allows for the acquisition of both labels and features. Moreover, several types of feature acquisition are permitted, including the acquisition of individual or multiple features for individual or multiple data points, which may be either labeled or unlabeled. Furthermore, if different types of data acquisition are feasible for a given application, the algorithm will automatically determine the most beneficial type of data to acquire. Experimental results on both benchmark machine learning data sets and real (i.e., measured) remote-sensing data demonstrate the advantages of the proposed incomplete-data classification and active data acquisition algorithms.
NASA Astrophysics Data System (ADS)
Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul
2015-09-01
The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were -0.6 ± 2.3° and -1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.
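Point-based rigid registration of the extracted features, as used above to recover rotation between cine-MRI frames, can be sketched with the Kabsch algorithm; the point sets below are synthetic, and this is an illustration of the registration step rather than the authors' code.

import numpy as np

def kabsch_angle(P, Q):
    """In-plane rotation angle (degrees) best aligning P to Q (N x 2 arrays)."""
    P = P - P.mean(axis=0)                        # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)             # SVD of the cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

rng = np.random.default_rng(0)
pts = rng.random((30, 2)) * 50                    # stand-in feature points (mm)
theta = np.radians(-1.5)                          # simulate a -1.5 degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = pts @ R.T + rng.normal(0, 0.05, pts.shape)
print(kabsch_angle(pts, rotated))                 # approximately -1.5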
Vavatzanidis, Niki Katerina; Mürbe, Dirk; Friederici, Angela; Hahne, Anja
2015-12-01
One main incentive for supplying hearing-impaired children with a cochlear implant is the prospect of oral language acquisition. Only scarce knowledge exists, however, of what congenitally deaf children actually perceive when receiving their first auditory input, and specifically what speech-relevant features they are able to extract from the new modality. We therefore presented congenitally deaf infants and young children implanted before the age of 4 years with an oddball paradigm of long and short vowel variants of the syllable /ba/. We measured the EEG at regular intervals to study their discriminative ability, starting with the first activation of the implant up to 8 months later. We were thus able to time-track the emerging ability to differentiate one of the most basic linguistic features that bears semantic differentiation and helps in word segmentation, namely, vowel length. Results show that as early as 2 months after the first auditory input, but not directly after implant activation, these early implanted children differentiate between long and short syllables. Surprisingly, after only 4 months of hearing experience, the ERPs reached the same properties as those of the normal hearing control group, demonstrating the plasticity of the brain with respect to the new modality. We thus show that a simple but linguistically highly relevant feature such as vowel length reaches age-appropriate electrophysiological levels as early as 4 months after the first acoustic stimulation, providing an important basis for further language acquisition.
Non-destructive forensic latent fingerprint acquisition with chromatic white light sensors
NASA Astrophysics Data System (ADS)
Leich, Marcus; Kiltz, Stefan; Dittmann, Jana; Vielhauer, Claus
2011-02-01
Non-destructive latent fingerprint acquisition is an emerging field of research, which, unlike traditional methods, makes latent fingerprints available for additional verification or further analysis like tests for substance abuse or age estimation. In this paper a series of tests is performed to investigate the overall suitability of a high resolution off-the-shelf chromatic white light sensor for the contact-less and non-destructive latent fingerprint acquisition. Our paper focuses on scanning previously determined regions with exemplary acquisition parameter settings. 3D height field and reflection data of five different latent fingerprints on six different types of surfaces (HDD platter, brushed metal, painted car body (metallic and non-metallic finish), blued metal, veneered plywood) are experimentally studied. Pre-processing is performed by removing low-frequency gradients. The quality of the results is assessed subjectively; no automated feature extraction is performed. Additionally, the degradation of the fingerprint during the acquisition period is observed. While the quality of the acquired data is highly dependent on surface structure, the sensor is capable of detecting the fingerprint on all sample surfaces. On blued metal the residual material is detected; however, the ridge line structure dissolves within minutes after fingerprint placement.
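The pre-processing described above, removing low-frequency gradients from the acquired height field, can be approximated by background subtraction with a heavy Gaussian blur. A minimal sketch follows; the sigma value is illustrative and would need tuning to the sensor's lateral resolution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_low_frequency_gradient(height_field, sigma=50):
    """High-pass a sensor height field by subtracting a heavily blurred copy,
    suppressing large-scale surface curvature while keeping ridge-scale detail."""
    background = gaussian_filter(height_field.astype(float), sigma=sigma)
    return height_field - background
```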
Detrended fluctuation analysis for major depressive disorder.
Mumtaz, Wajid; Malik, Aamir Saeed; Ali, Syed Saad Azhar; Yasin, Mohd Azhar Mohd; Amin, Hafeezullah
2015-01-01
The clinical utility of electroencephalography (EEG)-based diagnostic studies is less clear for major depressive disorder (MDD). In this paper, a novel machine learning (ML) scheme is presented to discriminate MDD patients from healthy controls. The proposed method inherently involves feature extraction, selection, classification and validation. The EEG data acquisition involved eyes closed (EC) and eyes open (EO) conditions. At the feature extraction stage, detrended fluctuation analysis (DFA) was performed on the EEG data to obtain scaling exponents; the DFA assesses the presence or absence of long-range temporal correlations (LRTC) in the recorded EEG data. The scaling exponents were used as input features to the proposed system. At the feature selection stage, 3 different techniques were used for comparison purposes. A logistic regression (LR) classifier was employed, and the method was validated by 10-fold cross-validation. We also observed the effect of 3 different reference montages on the computed features. The results show that the DFA performed better on LE data than on IR and AR data, whereas under Wilcoxon ranking the AR montage performed better than LE and IR. Based on the results, it was concluded that DFA provides useful information to discriminate MDD patients and, with further validation, could be employed in clinics for the diagnosis of MDD.
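A compact implementation of the DFA scaling exponent, the feature this study feeds to its classifier, is sketched below. It assumes a reasonably long 1-D signal and log-spaced window sizes; the paper's exact windowing choices are not specified here.

```python
import numpy as np

def dfa_exponent(x, n_scales=20):
    """Detrended fluctuation analysis: slope of log F(n) vs log n, where F(n)
    is the RMS of linearly detrended profile windows of size n. Assumes a
    signal of at least a few thousand samples."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                      # integrated profile
    scales = np.unique(np.logspace(1, np.log10(len(y) // 4), n_scales).astype(int))
    F = []
    for n in scales:
        segs = len(y) // n
        z = y[:segs * n].reshape(segs, n)
        t = np.arange(n)
        coef = np.polyfit(t, z.T, 1)                 # least-squares line per window
        trend = np.outer(coef[0], t) + coef[1][:, None]
        F.append(np.sqrt(np.mean((z - trend) ** 2))) # RMS residual at this scale
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```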
ASPRS Digital Imagery Guideline Image Gallery Discussion
NASA Technical Reports Server (NTRS)
Ryan, Robert
2002-01-01
The objectives of the image gallery are to 1) give users and providers a simple means of identifying appropriate imagery for a given application/feature extraction; and 2) define imagery sufficiently to be described in engineering and acquisition terms. This viewgraph presentation includes a discussion of edge response and aliasing for image processing, and a series of images illustrating the effects of signal to noise ratio (SNR) on images. Another series of images illustrates how images are affected by varying the ground sample distances (GSD).
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and then the location of each point in three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
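Once features are registered between the two images, ranging reduces to classical parallel-camera triangulation, Z = fB/d. A minimal sketch, with illustrative numbers in the usage comment:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Parallel-camera triangulation: depth Z = f * B / d, with focal length
    in pixels, baseline in metres, and disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("matched features must have positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g. a feature offset by 42 px between cameras 0.3 m apart with f = 700 px:
# stereo_depth(42, 700, 0.3) -> 5.0 m
```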
Implementation of a Smart Phone for Motion Analysis.
Yodpijit, Nantakrit; Songwongamarit, Chalida; Tavichaiyuth, Nicha
2015-01-01
In today's information-rich environment, one of the most popular devices is the smartphone. Research has shown significant growth in the use of smartphones and apps all over the world. The accelerometer within a smartphone is a motion sensor that can be used to detect human movements. Compared to other major vital signs, gait characteristics represent general health status and can be determined using smartphones. The objective of the current study is to design and develop an alternative technology that can potentially predict health status and reduce healthcare costs. This study uses a smartphone as a wireless accelerometer for quantifying human motion characteristics through four steps of system design and development (data acquisition operation, feature extraction algorithm, classifier design, and decision making strategy). Findings indicate that it is possible to extract features from a smartphone's accelerometer using a peak detection algorithm. Gait characteristics obtained from the peak detection algorithm include stride time, stance time, swing time and cadence. Applications and limitations of this study are also discussed.
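The peak-detection step can be sketched with standard signal-processing tools. The following assumes an accelerometer magnitude trace and treats each detected peak as one step; the distance and prominence thresholds are illustrative, not those of the study.

```python
import numpy as np
from scipy.signal import find_peaks

def stride_stats(accel_mag, fs):
    """Stride time (s) and cadence (steps/min) from an accelerometer magnitude
    trace via simple peak detection; one peak is taken as one step."""
    peaks, _ = find_peaks(accel_mag, distance=int(0.4 * fs),
                          prominence=np.std(accel_mag))
    step_times = np.diff(peaks) / fs
    stride_time = 2 * step_times.mean()      # a stride spans two steps
    cadence = 60.0 / step_times.mean()
    return stride_time, cadence
```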
Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.
Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu
2015-01-01
Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.
Comparison of six electromyography acquisition setups on hand movement classification tasks.
Pizzolato, Stefano; Tagliapietra, Luca; Cognolato, Matteo; Reggiani, Monica; Müller, Henning; Atzori, Manfredo
2017-01-01
Hand prostheses controlled by surface electromyography are promising due to the non-invasive approach and the control capabilities offered by machine learning. Nevertheless, dexterous prostheses are still scarcely spread due to control difficulties, low robustness and often prohibitive costs. Several sEMG acquisition setups are now available, ranging in terms of costs between a few hundred and several thousand dollars. The objective of this paper is the relative comparison of six acquisition setups on an identical hand movement classification task, in order to help the researchers to choose the proper acquisition setup for their requirements. The acquisition setups are based on four different sEMG electrodes (including Otto Bock, Delsys Trigno, Cometa Wave + Dormo ECG and two Thalmic Myo armbands) and they were used to record more than 50 hand movements from intact subjects with a standardized acquisition protocol. The relative performance of the six sEMG acquisition setups is compared on 41 identical hand movements with a standardized feature extraction and data analysis pipeline aimed at performing hand movement classification. Comparable classification results are obtained with three acquisition setups including the Delsys Trigno, the Cometa Wave and the affordable setup composed of two Myo armbands. The results suggest that practical sEMG tests can be performed even when costs are relevant (e.g. in small laboratories, developing countries or use by children). All the presented datasets can be used for offline tests and their quality can easily be compared as the data sets are publicly available.
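The standardized feature extraction pipeline mentioned above typically windows the multi-channel sEMG and computes time-domain descriptors per window. A minimal sketch, assuming a (samples, channels) array; the window lengths and the feature set (RMS, MAV, zero crossings, waveform length) are common choices rather than the paper's exact configuration.

```python
import numpy as np

def emg_features(window):
    """Common time-domain sEMG descriptors for one analysis window."""
    rms = np.sqrt(np.mean(window ** 2))
    mav = np.mean(np.abs(window))
    zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]))
    wl = np.sum(np.abs(np.diff(window)))
    return np.array([rms, mav, zc, wl])

def sliding_features(emg, fs, win_s=0.2, step_s=0.05):
    """Stack per-channel features over overlapping windows -> (n_windows, 4*n_ch)."""
    win, step = int(win_s * fs), int(step_s * fs)
    rows = [np.concatenate([emg_features(emg[s:s + win, c])
                            for c in range(emg.shape[1])])
            for s in range(0, emg.shape[0] - win + 1, step)]
    return np.vstack(rows)
```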
Marcos, Ma Shiela Angeli; David, Laura; Peñaflor, Eileen; Ticzon, Victor; Soriano, Maricor
2008-10-01
We introduce an automated benthic counting system for rapid reef assessment that applies computer vision to subsurface underwater reef video. Video acquisition was executed by lowering a submersible bullet-type camera from a motor boat while moving across the reef area. A GPS and echo sounder were linked to the video recorder to record bathymetry and location points. Analysis of living and non-living components was implemented through image color and texture feature extraction from the reef video frames and classification via Linear Discriminant Analysis. Compared to common rapid reef assessment protocols, our system can perform fine-scale data acquisition and processing in one day. Reef video was acquired in Ngedarrak Reef, Koror, Republic of Palau. Overall success performance ranges from 60% to 77% for depths of 1 to 3 m. The development of an automated rapid reef classification system is most promising for reef studies that need fast and frequent data acquisition of the percent cover of living and nonliving components.
NASA Astrophysics Data System (ADS)
Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish
2018-02-01
The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant in the context of MRI data, where signal intensity values lack tissue-specific, quantitative meaning and are dependent on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting; specifically, for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct for acquisition-related artifacts such as bias field, differing voxel resolutions, and intensity drift (non-standardness). 406 3D voxel-wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region, using 2 different measures of reproducibility: Multivariate Coefficient of Variation and Instability Score. Our results demonstrated that Haralick features were the most reproducible across all 4 sites. By comparison, Laws features were among the least reproducible between sites, as well as performing highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but only for certain parameter combinations. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.
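One of the two reproducibility measures, the multivariate coefficient of variation, has several definitions in the literature; the sketch below uses the Van Valen form (square root of the covariance trace over the norm of the mean vector) and may differ from the paper's exact formulation.

```python
import numpy as np

def multivariate_cv(X):
    """Van Valen multivariate coefficient of variation for one site/region.
    X: (n_samples, n_features) radiomic feature matrix."""
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    return np.sqrt(np.trace(Sigma)) / np.linalg.norm(mu)
```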
NASA Astrophysics Data System (ADS)
Asiedu, Mercy Nyamewaa; Simhal, Anish; Lam, Christopher T.; Mueller, Jenna; Chaudhary, Usamah; Schmitt, John W.; Sapiro, Guillermo; Ramanujam, Nimmi
2018-02-01
The World Health Organization recommends visual inspection with acetic acid (VIA) and/or Lugol's Iodine (VILI) for cervical cancer screening in low-resource settings. Human interpretation of diagnostic indicators for visual inspection is qualitative, subjective, and has high inter-observer discordance, which could lead both to adverse outcomes for the patient and to unnecessary follow-ups. In this work, we present a simple method for automatic feature extraction and classification of Lugol's Iodine cervigrams acquired with a low-cost, miniature, digital colposcope. Algorithms to preprocess expert physician-labelled cervigrams and to extract simple but powerful color-based features are introduced. The features are used to train a support vector machine model to classify cervigrams based on expert physician labels. The selected framework achieved a sensitivity, specificity, and accuracy of 89.2%, 66.7% and 80.6% against the majority diagnosis of the expert physicians in discriminating cervical intraepithelial neoplasia (CIN+) relative to normal tissues. The proposed classifier also achieved an area under the curve of 84 when trained with the majority diagnosis of the expert physicians. The results suggest that utilizing simple color-based features may enable unbiased automation of VILI cervigrams, opening the door to a full system of low-cost data acquisition complemented with automatic interpretation.
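The color-based feature extraction and SVM classification can be sketched with scikit-learn. The feature choice (per-channel mean and standard deviation) and the RBF kernel are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_features(rgb_roi):
    """Per-channel mean and standard deviation of an RGB region of interest."""
    px = rgb_roi.reshape(-1, 3).astype(float)
    return np.concatenate([px.mean(0), px.std(0)])

# X: stacked color_features per cervigram; y: expert labels (0 normal, 1 CIN+)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
# clf.fit(X_train, y_train); clf.predict(X_test)
```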
On the feasibility of interoperable schemes in hand biometrics.
Morales, Aythami; González, Ester; Ferrer, Miguel A
2012-01-01
Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.
Kroll, Torsten; Schmidt, David; Schwanitz, Georg; Ahmad, Mubashir; Hamann, Jana; Schlosser, Corinne; Lin, Yu-Chieh; Böhm, Konrad J; Tuckermann, Jan; Ploubidou, Aspasia
2016-07-01
High-content analysis (HCA) converts raw light microscopy images to quantitative data through the automated extraction, multiparametric analysis, and classification of the relevant information content. Combined with automated high-throughput image acquisition, HCA applied to the screening of chemicals or RNAi-reagents is termed high-content screening (HCS). Its power in quantifying cell phenotypes makes HCA applicable also to routine microscopy. However, developing effective HCA and bioinformatic analysis pipelines for acquisition of biologically meaningful data in HCS is challenging. Here, the step-by-step development of an HCA assay protocol and an HCS bioinformatics analysis pipeline are described. The protocol's power is demonstrated by application to focal adhesion (FA) detection, quantitative analysis of multiple FA features, and functional annotation of signaling pathways regulating FA size, using primary data of a published RNAi screen. The assay and the underlying strategy are aimed at researchers performing microscopy-based quantitative analysis of subcellular features, on a small scale or in large HCS experiments. © 2016 by John Wiley & Sons, Inc.
Texture analysis with statistical methods for wheat ear extraction
NASA Astrophysics Data System (ADS)
Bakhouche, M.; Cointault, F.; Gouton, P.
2007-01-01
In the agronomic domain, the simplification of crop counting, necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to conceive a mobile robot for natural image acquisition directly in the field, Arvalis first proposed that we detect, by image processing, the number of wheat ears in images before counting them, which yields the first component of the yield. In this paper we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image. The K-means algorithm is implemented before the choice of a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
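The combination of first-order statistical features and K-means pixel/patch classification described above can be sketched as follows; the patch size, the three-moment feature set, and the cluster count are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def first_order_patch_features(gray, patch=16):
    """Mean, variance, and third central moment per non-overlapping patch
    of a 2-D grayscale image -> (n_patches, 3) feature matrix."""
    h, w = (d - d % patch for d in gray.shape)
    g = gray[:h, :w].reshape(h // patch, patch, w // patch, patch)
    g = g.transpose(0, 2, 1, 3).reshape(-1, patch * patch).astype(float)
    m = g.mean(1)
    return np.column_stack([m, g.var(1), ((g - m[:, None]) ** 3).mean(1)])

# feats = first_order_patch_features(image)
# labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
# thresholding on the cluster means then highlights the ear class
```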
Graph-based sensor fusion for classification of transient acoustic signals.
Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal
2015-03-01
Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munition (i.e., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally our proposed algorithm is less sensitive to insufficiency in training samples compared to competing approaches.
Brain Computer Interfaces, a Review
Nicolas-Alonso, Luis Fernando; Gomez-Gil, Jaime
2012-01-01
A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or ‘locked in’ by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices. PMID:22438708
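The feature extraction and classification steps of a typical BCI pipeline can be made concrete with a short sketch: Welch band power per channel in the alpha and beta bands fed to a linear discriminant. This is one common minimal instantiation, not a reference implementation endorsed by the review.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower_features(eeg, fs, bands=((8, 12), (12, 30))):
    """Mean Welch power per channel in each band. eeg: (samples, channels)."""
    f, pxx = welch(eeg, fs=fs, axis=0, nperseg=int(2 * fs))
    return np.concatenate([pxx[(f >= lo) & (f < hi)].mean(axis=0)
                           for lo, hi in bands])

# X = np.vstack([bandpower_features(trial, fs) for trial in trials])
# LinearDiscriminantAnalysis().fit(X, y)   # y: intended command per trial
```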
NASA Astrophysics Data System (ADS)
Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.
2014-09-01
Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical identification methods. Furthermore, these patterns can be used for venipuncture in health fields to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate tissue and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis is composed of several steps, the first of which is the enhancement of the acquired images, implemented by spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of vein patterns. The above process is focused on recognizing people through images of their palm-dorsal vein distributions obtained under near infrared light. This work compares two different feature extraction techniques: moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used for the analysis of the performance of the algorithms: the first belongs to the Hong Kong Polytechnic University and the second is our own database.
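The adaptive thresholding and morphology stage is close to textbook image processing; a sketch with scikit-image follows. The block size and structuring-element radii are illustrative and would need tuning to the 880 nm images.

```python
from skimage.filters import threshold_local
from skimage.morphology import binary_opening, binary_closing, disk

def vein_mask(nir_image, block=35, offset=5):
    """Binary vein pattern from a NIR hand image: local (adaptive) thresholding
    picks out the dark absorbing veins, morphology cleans the mask."""
    t = threshold_local(nir_image, block_size=block, offset=offset)
    veins = nir_image < t                    # veins absorb the 880 nm light
    veins = binary_opening(veins, disk(1))   # drop isolated speckle
    return binary_closing(veins, disk(2))    # bridge small gaps along veins
```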
Collaborative knowledge acquisition for the design of context-aware alert systems.
Joffe, Erel; Havakuk, Ofer; Herskovic, Jorge R; Patel, Vimla L; Bernstam, Elmer Victor
2012-01-01
To present a framework for combining implicit knowledge acquisition from multiple experts with machine learning and to evaluate this framework in the context of anemia alerts. Five internal medicine residents reviewed 18 anemia alerts, while 'talking aloud'. They identified features that were reviewed by two or more physicians to determine appropriate alert level, etiology and treatment recommendation. Based on these features, data were extracted from 100 randomly-selected anemia cases for a training set and an additional 82 cases for a test set. Two staff internists assigned an alert level, etiology and treatment recommendation before and after reviewing the entire electronic medical record. The training set of 118 cases (100 plus 18) and the test set of 82 cases were explored using RIDOR and JRip algorithms. The feature set was sufficient to assess 93% of anemia cases (intraclass correlation for alert level before and after review of the records by internists 1 and 2 were 0.92 and 0.95, respectively). High-precision classifiers were constructed to identify low-level alerts (precision p=0.87, recall R=0.4), iron deficiency (p=1.0, R=0.73), and anemia associated with kidney disease (p=0.87, R=0.77). It was possible to identify low-level alerts and several conditions commonly associated with chronic anemia. This approach may reduce the number of clinically unimportant alerts. The study was limited to anemia alerts. Furthermore, clinicians were aware of the study hypotheses potentially biasing their evaluation. Implicit knowledge acquisition, collaborative filtering and machine learning were combined automatically to induce clinically meaningful and precise decision rules.
Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter
2017-02-01
Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies to conduct quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.
Development of Vision Based Multiview Gait Recognition System with MMUGait Database
Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee
2014-01-01
This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, which were captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step size of the walking subject are determined. The extracted features are smoothed by a Gaussian filter to eliminate the effect of outliers, normalized with linear scaling, and subjected to feature selection prior to the classification process. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed a correct classification rate above 90% for all the databases. The proposed approach is found to outperform other approaches on SOTON Small DB in most cases. PMID:25143972
High Quality Acquisition of Surface Electromyography - Conditioning Circuit Design
NASA Astrophysics Data System (ADS)
Shobaki, Mohammed M.; Malik, Noreha Abdul; Khan, Sheroz; Nurashikin, Anis; Haider, Samnan; Larbani, Sofiane; Arshad, Atika; Tasnim, Rumana
2013-12-01
The acquisition of Surface Electromyography (SEMG) signals is used for many applications, including the diagnosis of neuromuscular diseases and prosthesis control. The diagnostic quality of the SEMG signal is highly dependent on the conditioning circuit of the SEMG acquisition system. This paper presents the design of an SEMG conditioning circuit that collects a high-quality signal with high SNR and is immune to environmental noise. The conditioning circuit consists of four stages. The first two stages are an instrumentation amplifier with a gain of around 250 and a 4th-order band-pass filter covering the 20-500 Hz frequency range. The third stage is an amplifier with gain adjustable from 1000 to 50000 via a variable resistance. In the final stage, the signal is level-shifted to meet the input requirements of the data acquisition device or ADC. Acquisition of accurate signals allows them to be analyzed to extract the characteristic features required for medical and clinical applications. According to the experimental results, the SNR of the collected signal is 52.4 dB, which is higher than that of a commercial system; the power spectral density (PSD) graph is also presented and shows that the filter has eliminated noise below 20 Hz.
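Although the paper describes an analog conditioning chain, its filtering stage has a direct digital counterpart useful for offline work: a 4th-order Butterworth band-pass over 20-500 Hz. A sketch, assuming a sampling rate above 1 kHz:

```python
from scipy.signal import butter, sosfiltfilt

def semg_bandpass(signal, fs, low=20.0, high=500.0, order=4):
    """Digital analogue of the analog chain: a 4th-order Butterworth band-pass
    keeping the 20-500 Hz sEMG band. Requires fs > 2 * high (i.e. > 1 kHz)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)   # zero-phase filtering for offline use
```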
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
van der Kloet, Frans M; Hendriks, Margriet; Hankemeier, Thomas; Reijmers, Theo
2013-11-01
Because of its high sensitivity and specificity, hyphenated mass spectrometry has become the predominant method to detect and quantify metabolites present in bio-samples relevant to all sorts of life science studies. In contrast to targeted methods that are dedicated to specific features, global profiling acquisition methods allow new, unspecific metabolites to be analyzed. The challenge with these so-called untargeted methods is the proper and automated extraction and integration of features that could be of relevance. We propose a new algorithm that enables untargeted integration of samples measured with high-resolution liquid chromatography-mass spectrometry (LC-MS). In contrast to other approaches, limited user interaction is needed, allowing less experienced users to integrate their data as well. The large number of single features found within a sample is combined into a smaller list of compound-related, grouped feature-sets representative of that sample. These feature-sets allow for easier interpretation and identification and, as important, easier matching across samples. We show that the automatically obtained integration results for a set of known target metabolites match those generated with vendor software, but that at least 10 times more feature-sets are extracted as well. We demonstrate our approach using high-resolution LC-MS data acquired for 128 samples on a lipidomics platform. The data were also processed in a targeted manner (with a combination of automatic and manual integration) using vendor software for a set of 174 targets. As our untargeted extraction procedure is run per sample and per mass trace, its implementation is scalable. Because of the generic approach, we envision that this data extraction method will be used in targeted as well as untargeted analysis of many different kinds of TOF-MS data, even CE- and GC-MS data or MRM. The Matlab package is available for download on request and efforts are directed toward a user-friendly Windows executable. Copyright © 2013 Elsevier B.V. All rights reserved.
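The grouping of single features into compound-related feature-sets can be illustrated with a greedy sketch that merges features co-eluting within a retention-time tolerance whose intensity profiles across samples correlate strongly. This mirrors the idea, not the paper's Matlab implementation; all thresholds are placeholders.

```python
import numpy as np

def group_features(rt, intensities, rt_tol=0.05, min_corr=0.9):
    """Greedy grouping of LC-MS features: features co-eluting within rt_tol
    minutes whose across-sample intensity profiles correlate above min_corr
    join the same feature-set. rt: (n_features,), intensities: (n_features,
    n_samples). Returns a list of index lists."""
    order = np.argsort(rt)
    groups, current = [], [order[0]]
    for i in order[1:]:
        ref = current[-1]
        r = np.corrcoef(intensities[i], intensities[ref])[0, 1]
        if rt[i] - rt[ref] <= rt_tol and r >= min_corr:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups
```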
Acquisitions and Collection Development Automation: Future Directions.
ERIC Educational Resources Information Center
Aveney, Brian; Heinemann, Luba
1983-01-01
This paper explores features of automated acquisitions systems now implemented and discusses features that might be implemented in the next few years. Distributed acquisitions, ordering, receiving, reconciliation, management reporting, and collection development (selection of current materials, retrospective purchasing, and comparative…
Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P
2014-01-01
Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open-source software. Source code for data analysis is written in R. The equations to calculate image descriptors have also been provided.
Luo, Junhai; Fu, Liang
2017-06-09
With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS) collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and retain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy while reducing computational complexity.
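The offline KPCA-plus-clustering stage maps directly onto standard library calls. A sketch with simulated RSS data; the component count, kernel width, and AP count are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import AffinityPropagation

# rss: (n_fingerprints, n_aps) offline RSS survey in dBm (simulated here)
rss = np.random.default_rng(0).uniform(-90, -30, size=(200, 12))

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.01)
embedded = kpca.fit_transform(rss)          # nonlinear feature extraction

apc = AffinityPropagation(random_state=0).fit(embedded)
print("clusters found:", len(apc.cluster_centers_indices_))
# online phase: map a new RSS vector with kpca.transform, find its cluster,
# then run ML estimation only against the fingerprints in that cluster
```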
Bray, Mark-Anthony; Singh, Shantanu; Han, Han; Davis, Chadwick T.; Borgeson, Blake; Hartland, Cathy; Kost-Alimova, Maria; Gustafsdottir, Sigrun M.; Gibson, Christopher C.; Carpenter, Anne E.
2016-01-01
In morphological profiling, quantitative data are extracted from microscopy images of cells to identify biologically relevant similarities and differences among samples based on these profiles. This protocol describes the design and execution of experiments using Cell Painting, a morphological profiling assay multiplexing six fluorescent dyes imaged in five channels, to reveal eight broadly relevant cellular components or organelles. Cells are plated in multi-well plates, perturbed with the treatments to be tested, stained, fixed, and imaged on a high-throughput microscope. Then, automated image analysis software identifies individual cells and measures ~1,500 morphological features (various measures of size, shape, texture, intensity, etc.) to produce a rich profile suitable for detecting subtle phenotypes. Profiles of cell populations treated with different experimental perturbations can be compared to suit many goals, such as identifying the phenotypic impact of chemical or genetic perturbations, grouping compounds and/or genes into functional pathways, and identifying signatures of disease. Cell culture and image acquisition takes two weeks; feature extraction and data analysis take an additional 1-2 weeks. PMID:27560178
Yang, Yuan-Zheng; Chang, Yu; Hu, Yuan-Man; Liu, Miao; Li, Yue-Hui
2011-06-01
Timely and accurate acquisition of the spatial distribution pattern of wetlands is significant for the dynamic monitoring, conservation, and sustainable utilization of wetlands. China launched the small environment and disaster monitoring remote sensing satellite constellation HJ-1A/1B for monitoring terrestrial resources, which provides a new remote sensing data source for retrieving wetland types. Taking the Liaohe Delta as a case study, this paper compared the accuracy of the wetland classification map and the area of each wetland type retrieved from HJ CCD data and Landsat TM5 data, and validated and explored the applicability and potential of HJ CCD data in the dynamic monitoring of wetland resources. The results showed that HJ CCD data can completely replace Landsat TM5 data in feature extraction and remote sensing classification. For real-time monitoring, HJ CCD data, with a 2-day data acquisition cycle, are preferable to Landsat TM5 data (16-day data acquisition cycle).
Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models
Maji, Suvrajit; Bruchez, Marcel P.
2012-01-01
Localization-based super-resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough Transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
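Hough-transform fitting of circular structures (e.g., clathrin vesicles) to sparse localization data can be sketched with scikit-image; the radius range and peak count below are illustrative.

```python
import numpy as np
from skimage.transform import hough_circle, hough_circle_peaks

def fit_vesicles(localization_mask, radii=np.arange(20, 61, 5)):
    """Fit circle models to a binary mask of molecular positions and return
    the best (cx, cy, radius) candidates from the Hough accumulator."""
    accum = hough_circle(localization_mask, radii)
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=5)
    return list(zip(cx, cy, r))
```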
Sensor Integration in a Low Cost Land Mobile Mapping System
Madeira, Sergio; Gonçalves, José A.; Bastos, Luísa
2012-01-01
Mobile mapping is a multidisciplinary technique which requires several pieces of dedicated equipment, calibration procedures that must be as rigorous as possible, time synchronization of all acquired data, and software for data processing and extraction of additional information. To decrease the cost and complexity of Mobile Mapping Systems (MMS), the use of less expensive sensors and the simplification of procedures for calibration and data acquisition are mandatory features. This article refers to the use of MMS technology, focusing on the main aspects that need to be addressed to guarantee proper data acquisition and describing the way those aspects were handled in a terrestrial MMS developed at the University of Porto. In this case the main aim was to implement a low-cost system while maintaining good quality standards for the acquired georeferenced information. The results discussed here show that this goal has been achieved. PMID:22736985
NASA Technical Reports Server (NTRS)
1982-01-01
A project to develop an effective mobility aid for blind pedestrians which acquires consecutive images of the scenes before a moving pedestrian, which locates and identifies the pedestrian's path and potential obstacles in the path, which presents path and obstacle information to the pedestrian, and which operates in real-time is discussed. The mobility aid has three principal components: an image acquisition system, an image interpretation system, and an information presentation system. The image acquisition system consists of a miniature, solid-state TV camera which transforms the scene before the blind pedestrian into an image which can be received by the image interpretation system. The image interpretation system is implemented on a microprocessor which has been programmed to execute real-time feature extraction and scene analysis algorithms for locating and identifying the pedestrian's path and potential obstacles. Identity and location information is presented to the pedestrian by means of tactile coding and machine-generated speech.
NASA Astrophysics Data System (ADS)
Jegadeeshwaran, R.; Sugumaran, V.
2015-02-01
Hydraulic brakes in automobiles are important components for the safety of passengers; therefore, the brakes are a good subject for condition monitoring. The condition of the brake components can be monitored by using their vibration characteristics. On-line condition monitoring using a machine learning approach is proposed in this paper as a possible solution to such problems. The vibration signals for both good and faulty conditions of the brakes were acquired from a hydraulic brake test setup with the help of a piezoelectric transducer and a data acquisition system. Descriptive statistical features were extracted from the acquired vibration signals and feature selection was carried out using the C4.5 decision tree algorithm. There is no specific method to find the right number of features required for classification for a given problem, hence an extensive study is needed to find the optimum number of features. The effect of the number of features was also studied, using the decision tree as well as Support Vector Machines (SVM). The selected features were classified using the C-SVM and Nu-SVM with different kernel functions. The results are discussed and the conclusion of the study is presented.
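The two-stage scheme, tree-based feature selection followed by SVM classification, can be sketched as below. scikit-learn's tree is CART with an entropy criterion rather than C4.5 proper, so treat this as an analogous stand-in; n_keep is a placeholder.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_then_classify(X, y, n_keep=6):
    """Rank descriptive statistical features with a decision tree (a stand-in
    for C4.5), keep the top n_keep, then score an RBF C-SVM by 10-fold CV."""
    tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
    keep = np.argsort(tree.feature_importances_)[::-1][:n_keep]
    score = cross_val_score(SVC(kernel="rbf"), X[:, keep], y, cv=10).mean()
    return keep, score
```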
NASA Astrophysics Data System (ADS)
Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.
2015-02-01
Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento, for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from an airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.
NASA Astrophysics Data System (ADS)
Tang, Tien T.; Zawaski, Janice A.; Francis, Kathleen N.; Qutub, Amina A.; Gaber, M. Waleed
2018-02-01
Accurate diagnosis of tumor type is vital for effective treatment planning. Diagnosis relies heavily on tumor biopsies and other clinical factors. However, biopsies do not fully capture the tumor's heterogeneity due to sampling bias and are only performed if the tumor is accessible. An alternative approach is to use features derived from routine diagnostic imaging such as magnetic resonance (MR) imaging. In this study we aim to establish the use of quantitative image features to classify brain tumors and extend the use of MR images beyond tumor detection and localization. To control for interscanner, acquisition and reconstruction protocol variations, the established workflow was performed in a preclinical model. Using glioma (U87 and GL261) and medulloblastoma (Daoy) models, T1-weighted post-contrast scans were acquired at different time points post-implant. The central, middle, and peripheral tumor regions were analyzed using in-house software to extract 32 different image features consisting of first- and second-order features. The extracted features were used to construct a decision tree, which could predict tumor type with 10-fold cross-validation. Results from the final classification model demonstrated that the middle tumor region had the highest overall accuracy at 79%, while the AUC accuracy was over 90% for GL261 and U87 tumors. Our analysis further identified image features that were unique to certain tumor regions, although GL261 tumors were more homogeneous, with no significant differences between the central and peripheral tumor regions. In conclusion, our study shows that texture features derived from MR scans can be used to classify tumor type with high success rates. Furthermore, the algorithm we have developed can be implemented with any imaging dataset and may be applicable to multiple tumor types to determine diagnosis.
NASA Astrophysics Data System (ADS)
Ressel, Rudolf; Singha, Suman; Lehner, Susanne
2016-08-01
Arctic sea ice monitoring has attracted increasing attention over the last few decades. Besides the scientific interest in sea ice, the operational aspect of ice charting is becoming more important due to growing navigational possibilities in an increasingly ice-free Arctic. For this purpose, satellite-borne SAR imagery has become an invaluable tool. In the past, mostly single-polarimetric datasets were investigated with supervised or unsupervised classification schemes for sea ice investigation. Despite proven sea ice classification achievements on single-polarimetric data, a fully automatic, general-purpose classifier for single-pol data has not been established due to the large variation of sea ice manifestations and the impact of incidence angle. Recently, through the advent of polarimetric SAR sensors, polarimetric features have moved into the focus of ice classification research. The higher information content of four polarimetric channels promises to offer greater insight into sea ice scattering mechanisms and to overcome some of the shortcomings of single-polarimetric classifiers. Two spatially and temporally coincident pairs of fully polarimetric acquisitions from the TerraSAR-X/TanDEM-X and RADARSAT-2 satellites are investigated. The proposed supervised classification algorithm consists of two steps: the first step comprises feature extraction, the results of which are ingested into a neural network classifier in the second step. Based on the common coherency and covariance matrices, we extract a number of features and analyze their relevance and redundancy by means of mutual information for the purpose of sea ice classification. Coherency-matrix-based features which require an eigendecomposition are found to be either of low relevance or redundant to other covariance-matrix-based features. Among the most useful features for classification are matrix-invariant-based features (Geometric Intensity, Scattering Diversity, Surface Scattering Fraction).
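The relevance analysis by mutual information between candidate polarimetric features and ice-type labels can be sketched in a few lines; rank_features is an illustrative helper, not the authors' code.

```python
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, names):
    """Rank candidate polarimetric features by mutual information with the
    class label. X: (n_pixels, n_features), y: ice-type training labels."""
    mi = mutual_info_classif(X, y, random_state=0)
    return sorted(zip(names, mi), key=lambda pair: -pair[1])
```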
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin
2017-10-01
A multifeature soft-probability cascading scheme to solve the problem of land use and land cover (LULC) classification using high-spatial-resolution images to map rural residential areas in China is proposed. The proposed method is used to build midlevel LULC features. Local features are frequently considered as low-level feature descriptors in a midlevel feature learning method. However, spectral and textural features, which are very effective low-level features, are neglected. The acquisition of the dictionary of sparse coding is unsupervised, and this phenomenon reduces the discriminative power of the midlevel feature. Thus, we propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model to utilize the different effective low-level features and improve the discriminability of midlevel feature descriptors. First, three kinds of typical low-level features, namely, dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of two parts: unary potential and pairwise potential, is employed to construct an LULC classification map. Experimental results show that the proposed classification scheme can achieve impressive performance when the total accuracy reached about 87%.
Aprea, Eugenio; Gika, Helen; Carlin, Silvia; Theodoridis, Georgios; Vrhovsek, Urska; Mattivi, Fulvio
2011-07-15
A headspace SPME GC-TOF-MS method was developed for the acquisition of metabolite profiles of apple volatiles. As a first step, an experimental design was applied to determine the most appropriate conditions for the extraction of apple volatile compounds by SPME. The selected SPME method was then applied to the profiling of four different apple varieties by GC-EI-TOF-MS. Full-scan GC-MS data were processed by MarkerLynx software for peak picking, normalisation, alignment and feature extraction. Advanced chemometric/statistical techniques (PCA and PLS-DA) were used to explore the data and extract useful information. Characteristic markers of each variety were subsequently identified using the NIST library, thus providing useful information for variety classification. The developed HS-SPME sampling method is fully automated and proved useful in obtaining the fingerprint of the volatile content of the fruit. The described analytical protocol can aid in further studies of the apple metabolome. Copyright © 2011 Elsevier B.V. All rights reserved.
Electroencephalography (EEG) Based Control in Assistive Mobile Robots: A Review
NASA Astrophysics Data System (ADS)
Krishnan, N. Murali; Mariappan, Muralindran; Muthukaruppan, Karthigayan; Hijazi, Mohd Hanafi Ahmad; Kitt, Wong Wei
2016-03-01
Recently, EEG-based control of assistive robots has been gradually increasing in the biomedical field, with the aim of giving a quality, stress-free life to disabled and elderly people. This study reviews the deployment of EEG-based control in assistive robots, especially for those in need, such as the neurologically disabled. The main objective of this paper is to describe the methods used for (i) EEG data acquisition and signal preprocessing, (ii) feature extraction and (iii) signal classification. Besides that, this study presents the specific research challenges in the design of these control systems and future research directions.
NASA Astrophysics Data System (ADS)
Rees, S. J.; Jones, Bryan F.
1992-11-01
Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features which maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition: repairing the data would then assume a priori knowledge of the solution. The technique described in this paper offers a set theoretical measure for partial occupancy defined in terms of the set of minimum additions to permit full occupancy and the set of locations of occupancy if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.
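The set-theoretic measure lends itself to a compact illustration. The following is a minimal sketch of the idea under the stated definitions: occupancy is the matched fraction of the model feature set, and the minimum additions are the model features absent from the defective observation. The function and threshold logic are illustrative, not the paper's exact formulation.

```python
# Minimal sketch (not the paper's formulation) of a partial-occupancy
# measure: given an observed feature set and a model feature set, the
# minimum additions are the model features missing from the observation,
# and occupancy is the matched fraction.
def partial_occupancy(observed: set, model: set):
    matched = observed & model          # locations of occupancy
    additions = model - observed        # minimum additions for full occupancy
    occupancy = len(matched) / len(model) if model else 0.0
    return occupancy, additions

observed = {"edge_a", "corner_b", "arc_c"}            # defective representation
model = {"edge_a", "corner_b", "arc_c", "hole_d"}     # reference shape model

occ, add = partial_occupancy(observed, model)
print(f"occupancy = {occ:.2f}, minimum additions = {add}")
# A claimed recognition could require occ above a threshold, with
# len(add) quantifying the uncertainty of the partial match.
```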
Brain Dynamics Sustaining Rapid Rule Extraction from Speech
ERIC Educational Resources Information Center
de Diego-Balaguer, Ruth; Fuentemilla, Lluis; Rodriguez-Fornells, Antoni
2011-01-01
Language acquisition is a complex process that requires the synergic involvement of different cognitive functions, which include extracting and storing the words of the language and their embedded rules for progressive acquisition of grammatical information. As has been shown in other fields that study learning processes, synchronization…
Prediction of survival with multi-scale radiomic analysis in glioblastoma patients.
Chaddad, Ahmad; Sabri, Siham; Niazi, Tamim; Abdulkarim, Bassam
2018-06-19
We propose multiscale texture features based on a Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments used features derived from 40 GBM patients with T1-weighted imaging (T1-WI) and fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of the features to OS and PFS was investigated using univariate (i.e., Spearman's rank correlation coefficient, log-rank test and Kaplan-Meier estimator) and multivariate (i.e., random forest classifier) analyses. Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05), with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed AUC values of 85.37% and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict the PFS and OS. The achievable efficiency suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after a thorough validation involving more patients. Graphical abstract: Scheme of the proposed model for characterizing the heterogeneity of GBM regions and predicting the overall survival and progression-free survival of GBM patients. (1) Acquisition of pretreatment MRI images; (2) affine registration of each T1-WI image with its corresponding FLAIR image, and GBM subtype (phenotype) labelling; (3) extraction of nine texture features from the three texture scales (fine, medium, and coarse) derived from each of the GBM regions; (4) comparison of heterogeneity between GBM regions by ANOVA test; survival analysis using univariate methods (Spearman rank correlation between features and survival (i.e., PFS and OS) for each of the GBM regions; Kaplan-Meier estimator and log-rank test to predict the PFS and OS of patient groups formed by the median of each feature) and a multivariate method (random forest model) for predicting the PFS and OS of patient groups formed by the median of PFS and OS.
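A multiscale LoG texture extraction of the kind described can be sketched with standard tools; the following illustration (not the authors' code) filters a region of interest at fine/medium/coarse scales with scipy's Gaussian-Laplace operator and computes simple first-order statistics. The sigma values, the chosen statistics, and the synthetic data are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_texture_features(image, mask, sigmas=(1.0, 2.5, 4.0)):
    """Extract simple first-order statistics from LoG-filtered MRI at
    fine/medium/coarse scales, restricted to a region of interest.
    Sigma values and statistics are illustrative choices."""
    features = {}
    for sigma in sigmas:
        filtered = gaussian_laplace(image.astype(float), sigma=sigma)
        roi = filtered[mask > 0]
        features[f"log_s{sigma}_mean"] = roi.mean()
        features[f"log_s{sigma}_std"] = roi.std()
        features[f"log_s{sigma}_skew"] = (
            ((roi - roi.mean()) ** 3).mean() / (roi.std() ** 3 + 1e-12)
        )
    return features

# Example on synthetic data standing in for a registered T1-WI/FLAIR volume.
rng = np.random.default_rng(1)
volume = rng.normal(size=(64, 64, 32))
tumor_mask = np.zeros_like(volume)
tumor_mask[20:40, 20:40, 10:20] = 1   # stand-in for a manual segmentation
print(log_texture_features(volume, tumor_mask))
```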
SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Zhang, G
2014-06-15
Purpose: Quantitative imaging is a fast-evolving discipline in which a large number of features are extracted from images, i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors, e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected. Methods: Three levels of Gaussian noise were added to the PET images of 8 lung cancer patients acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of the 10 phases were used. A total of 62 features: 14 shape, 19 intensity (1stO), 18 GLCM texture (2ndO; from grey-level co-occurrence matrices) and 11 RLM texture (2ndO; from run-length matrices) features were extracted from segmented tumors. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for the RLM, and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable, while RLM features were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than those from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
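The noise-sensitivity experiment can be mimicked in a few lines: add Gaussian noise at several levels and measure the percent change of GLCM texture features. The sketch below is an illustration on a synthetic 2D patch (the study used 3D tumors with 13 directions); the noise levels and the skimage-based feature set are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img):
    # 2D illustration: 256-level GLCM at distance 1, four directions.
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean() for p in
            ("contrast", "homogeneity", "energy", "correlation")}

rng = np.random.default_rng(2)
tumor = rng.normal(128, 20, size=(48, 48)).clip(0, 255).astype(np.uint8)
baseline = glcm_features(tumor)

# Three illustrative Gaussian noise levels; report percent change per feature.
for sigma in (5, 10, 20):
    noisy = (tumor + rng.normal(0, sigma, tumor.shape)).clip(0, 255).astype(np.uint8)
    noisy_feats = glcm_features(noisy)
    change = {k: 100 * (noisy_feats[k] - baseline[k]) / abs(baseline[k])
              for k in baseline}
    print(f"sigma={sigma}:", {k: f"{v:+.1f}%" for k, v in change.items()})
```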
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
Feature detection on 3D images of dental imprints
NASA Astrophysics Data System (ADS)
Mokhtari, Marielle; Laurendeau, Denis
1994-09-01
A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
O'Connor, Timothy; Rawat, Siddharth; Markman, Adam; Javidi, Bahram
2018-03-01
We propose a compact imaging system that integrates an augmented reality head mounted device with digital holographic microscopy for automated cell identification and visualization. A shearing interferometer is used to produce holograms of biological cells, which are recorded using customized smart glasses containing an external camera. After image acquisition, segmentation is performed to isolate regions of interest containing biological cells in the field-of-view, followed by digital reconstruction of the cells, which is used to generate a three-dimensional (3D) pseudocolor optical path length profile. Morphological features are extracted from the cell's optical path length map, including mean optical path length, coefficient of variation, optical volume, projected area, projected area to optical volume ratio, cell skewness, and cell kurtosis. Classification is performed using the random forest classifier, support vector machines, and K-nearest neighbor, and the results are compared. Finally, the augmented reality device displays the cell's pseudocolor 3D rendering of its optical path length profile, extracted features, and the identified cell's type or class. The proposed system could allow a healthcare worker to quickly visualize cells using augmented reality smart glasses and extract the relevant information for rapid diagnosis. To the best of our knowledge, this is the first report on the integration of digital holographic microscopy with augmented reality devices for automated cell identification and visualization.
ERIC Educational Resources Information Center
Dominguez, Laura; Hicks, Glyn; Song, Hee-Jeong
2012-01-01
This study offers a Minimalist analysis of the L2 acquisition of binding properties whereby cross-linguistic differences arise from the interaction of anaphoric feature specifications and operations of the computational system (Reuland 2001, 2011; Hicks 2009). This analysis attributes difficulties in the L2 acquisition of locality and orientation…
FEX: A Knowledge-Based System For Planimetric Feature Extraction
NASA Astrophysics Data System (ADS)
Zelek, John S.
1988-10-01
Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.
Inference of neuronal network spike dynamics and topology from calcium imaging data
Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof
2013-01-01
Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties. PMID:24399936
Using deep learning for detecting gender in adult chest radiographs
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2018-03-01
In this paper, we present a method for automatically identifying the gender of an imaged person using their frontal chest x-ray images. Our work is motivated by the need to determine missing gender information in some datasets. The proposed method employs convolutional neural network (CNN) based deep learning and transfer learning to overcome the challenge of developing handcrafted features with limited data. Specifically, the method consists of four main steps: pre-processing, CNN feature extraction, feature selection, and classification. The method is tested on a combined dataset obtained from several sources with varying acquisition quality, with different pre-processing steps applied for each. For feature extraction, we tested and compared four CNN architectures, viz., AlexNet, VggNet, GoogLeNet, and ResNet. We applied a feature selection technique, since the feature length is larger than the number of images. Two popular classifiers, SVM and Random Forest, are used and compared. We evaluated the classification performance by cross-validation and used seven performance measures. The best performer is the VggNet-16 feature extractor with the SVM classifier, with an accuracy of 86.6% and an ROC area of 0.932 for 5-fold cross-validation. We also discuss several misclassified cases and describe future work for performance improvement.
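A transfer-learning pipeline of the reported shape, pretrained VGG-16 features feeding feature selection and an SVM, can be sketched as follows. This is an illustration, not the authors' code; the layer cut, the preprocessing, and the commented training calls (including the `chest_xray_images` name) are assumptions, and grayscale x-rays would first be replicated to three channels.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif

# Pretrained VGG-16 as a fixed feature extractor (up to a 4096-dim fc layer).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier = vgg.classifier[:5]   # drop the final classification layers
vgg.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def extract_features(pil_images):
    """Return a (n_images, 4096) feature matrix for RGB PIL images."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    with torch.no_grad():
        return vgg(batch).numpy()

# Hypothetical usage, assuming images and binary gender labels are loaded:
# X = extract_features(chest_xray_images)
# selector = SelectKBest(f_classif, k=200).fit(X, y)   # feature selection
# clf = SVC(kernel="linear").fit(selector.transform(X), y)
```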
Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina
2016-12-01
Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including four retinal structures localisation, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for the macula region localisation in order to detect the maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and it presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.L.
A novel method for performing real-time acquisition and processing of Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.
An integrated telemedicine platform for the assessment of affective physiological states
Katsis, Christos D; Ganiatsas, George; Fotiadis, Dimitrios I
2006-01-01
AUBADE is an integrated platform built for the affective assessment of individuals. The system performs evaluation of the emotional state by classifying vectors of features extracted from: facial electromyogram, respiration, electrodermal activity and electrocardiogram. The AUBADE system consists of: (a) a multisensorial wearable, (b) a data acquisition and wireless communication module, (c) a feature extraction module, (d) a 3D facial animation module used for the projection of the obtained data through a generic 3D face model, whereby the end-user is able to view the facial expression of the subject in real time, (e) an intelligent emotion recognition module, and (f) the AUBADE databases where the acquired signals along with the subject's animation videos are saved. The system is designed to be applied to human subjects operating under extreme stress conditions, in particular car racing drivers, and also to patients suffering from neurological and psychological disorders. AUBADE's classification accuracy into five predefined emotional classes (high stress, low stress, disappointment, euphoria and neutral face) is 86.0%. The pilot system applications and components are being tested and evaluated on Maserati's car racing drivers. PMID:16879757
Classifying magnetic resonance image modalities with convolutional neural networks
NASA Astrophysics Data System (ADS)
Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis
2018-02-01
Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)- based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre vs postcontrast T1, (3) identify pre vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective methods to reduce dimensionality in data mining, given the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering the inter-relationships between features, whereas set-based feature extraction evaluates features based on their role in a feature set by taking into account the dependency between features. Just as with learning methods, feature extraction has a problem with its generalization ability, that is, robustness; however, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested against gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277
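One simple way to realize ranking-based fusion is to combine the rankings produced by different selection criteria. The sketch below fuses an ANOVA F-test ranking with a mutual-information ranking by mean rank; the specific pair of algorithms and the mean-rank fusion rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, f_classif

def fused_gene_ranking(X, y):
    """Fuse two ranking-based selectors by average rank.
    The fusion rule (mean rank) is an illustrative choice."""
    f_scores, _ = f_classif(X, y)                 # ANOVA F-test scores
    mi_scores = mutual_info_classif(X, y, random_state=0)
    # Rank genes under each criterion (0 = best), then average the ranks.
    f_rank = np.argsort(np.argsort(-f_scores))
    mi_rank = np.argsort(np.argsort(-mi_scores))
    fused = (f_rank + mi_rank) / 2.0
    return np.argsort(fused)                      # gene indices, best first

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 500))                    # 60 samples, 500 genes
y = rng.integers(0, 2, size=60)                   # two-class labels
top20 = fused_gene_ranking(X, y)[:20]
print("top fused genes:", top20)
```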
Identification and classification of upper limb motions using PCA.
Veer, Karan; Vig, Renu
2018-03-28
This paper describes the utility of principal component analysis (PCA) in classifying upper limb signals. PCA is a powerful tool for analyzing data of high dimension. Here, two different input strategies were explored. The first method uses upper arm dual-position-based myoelectric signal acquisition and the other solely uses PCA for classifying surface electromyogram (SEMG) signals. SEMG data from the biceps and the triceps brachii muscles and four independent muscle activities of the upper arm were measured in seven subjects (total dataset=56). The datasets used for the analysis are rotated by class-specific principal component matrices to decorrelate the measured data prior to feature extraction.
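The class-specific rotation step can be illustrated directly: for each class, project that class's centered samples onto its own principal axes so that the channels are decorrelated before feature extraction. A minimal sketch with synthetic SEMG-like data follows; the array shapes are assumptions.

```python
import numpy as np

def class_specific_pca_rotation(X, y):
    """Rotate each class's SEMG samples by that class's principal
    component matrix to decorrelate the measured data prior to
    feature extraction. A minimal sketch of the idea, not the
    authors' exact pipeline."""
    X_rot = np.empty_like(X, dtype=float)
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        # Right singular vectors of the centered class data form the
        # class-specific principal component (rotation) matrix.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        X_rot[y == c] = Xc @ Vt.T
    return X_rot

rng = np.random.default_rng(4)
X = rng.normal(size=(56, 8))      # 56 windows, 8 SEMG-derived channels
y = rng.integers(0, 4, size=56)   # four upper-arm muscle activities
X_decorrelated = class_specific_pca_rotation(X, y)
# The per-class covariance of X_decorrelated is (approximately) diagonal.
```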
Re-Assembling Formal Features in Second Language Acquisition: Beyond Minimalism
ERIC Educational Resources Information Center
Carroll, Susanne E.
2009-01-01
In this commentary, Lardiere's discussion of features is compared with the use of features in constraint-based theories, and it is argued that constraint-based theories might offer a more elegant account of second language acquisition (SLA). Further evidence is reported to question the accuracy of Chierchia's (1998) Nominal Mapping Parameter.…
The Acquisition of Korean Plural Marking by Native English Speakers
ERIC Educational Resources Information Center
Hwang, Sun Hee
2013-01-01
This study investigated the L2 acquisition of Korean plural marking by English-speaking learners within a feature-reassembly approach--a formal feature-based approach suggesting that native-like attainment of L2 morphosyntactic knowledge is determined by whether learners can reconfigure the formal features assembled in functional categories and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoon Sohn; Charles Farrar; Norman Hunter
2001-01-01
This report summarizes the analysis of fiber-optic strain gauge data obtained from a surface-effect fast patrol boat being studied by the staff at the Norwegian Defense Research Establishment (NDRE) in Norway and the Naval Research Laboratory (NRL) in Washington D.C. Data from two different structural conditions were provided to the staff at Los Alamos National Laboratory. The problem was then approached from a statistical pattern recognition paradigm. This paradigm can be described as a four-part process: (1) operational evaluation, (2) data acquisition & cleansing, (3) feature extraction and data reduction, and (4) statistical model development for feature discrimination. Given that the first two portions of this paradigm were mostly completed by the NDRE and NRL staff, this study focused on data normalization, feature extraction, and statistical modeling for feature discrimination. The feature extraction process began by looking at relatively simple statistics of the signals and progressed to using the residual errors from auto-regressive (AR) models fit to the measured data as the damage-sensitive features. Data normalization proved to be the most challenging portion of this investigation. A novel approach to data normalization, where the residual errors in the AR model are considered to be an unmeasured input and an auto-regressive model with exogenous inputs (ARX) is then fit to portions of the data exhibiting similar waveforms, was successfully applied to this problem. With this normalization procedure, a clear distinction between the two different structural conditions was obtained. A false-positive study was also run, and the procedure developed herein did not yield any false-positive indications of damage. Finally, the results must be qualified by the fact that this procedure has only been applied to very limited data samples. A more complete analysis of additional data taken under various operational and environmental conditions as well as other structural conditions is necessary before one can definitively state that the procedure is robust enough to be used in practice.
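The AR-residual feature can be reproduced in outline with a standard time-series library. The sketch below (illustrative data and model order, not the report's exact settings) fits an auto-regressive model and uses the residual spread as the damage-sensitive feature; the ARX normalization step is only indicated in a comment.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ar_residual_feature(signal, order=10):
    """Damage-sensitive feature: standard deviation of the residuals
    of an AR model fit to a strain time series. Order is illustrative."""
    res = AutoReg(signal, lags=order).fit()
    return np.std(res.resid)

rng = np.random.default_rng(5)
t = np.arange(2000)
baseline = np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)
damaged = np.sin(0.1 * t) + 0.3 * rng.normal(size=t.size)  # altered dynamics

print("baseline feature:", ar_residual_feature(baseline))
print("damaged  feature:", ar_residual_feature(damaged))
# In the study, the AR residuals were further treated as an unmeasured
# input to an ARX model fit to portions of data with similar waveforms,
# which normalized out operational variability.
```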
Confidence-Based Feature Acquisition
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James
2010-01-01
Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
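At deployment time the method reduces to a loop: classify with the free features, and while confidence is below the threshold, pay for another feature and re-classify. The following is a minimal sketch of that idea, not the published CFA-predict algorithm; the cheapest-first ordering and the callable interfaces are assumptions.

```python
def cfa_predict(free_features, classifier, costs, acquire, threshold=0.9):
    """Sketch of confidence-based acquisition at test time (an
    illustration of the idea, not the published CFA-predict).
    `classifier(features)` returns (label, confidence);
    `acquire(name)` pays the cost and returns the feature value;
    `costs` maps each acquirable feature name to its known FA cost."""
    features = dict(free_features)          # free features, zero cost
    missing = sorted(costs, key=costs.get)  # cheapest features first
    total_cost = 0.0
    label, conf = classifier(features)
    while conf < threshold and missing:
        name = missing.pop(0)
        features[name] = acquire(name)      # pay to acquire this feature
        total_cost += costs[name]
        label, conf = classifier(features)
    return label, conf, total_cost
```

The greedy, cheapest-first policy shown here is only one plausible ordering; the actual method balances acquisition cost against the expected gain in classification confidence.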
Unveiling the Biometric Potential of Finger-Based ECG Signals
Lourenço, André; Silva, Hugo; Fred, Ana
2011-01-01
The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup recurring to Ag/AgCl electrodes without gel as interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications. PMID:21837235
Vibration Sensor Monitoring of Nickel-Titanium Alloy Turning for Machinability Evaluation.
Segreto, Tiziana; Caggiano, Alessandra; Karam, Sara; Teti, Roberto
2017-12-12
Nickel-Titanium (Ni-Ti) alloys are very difficult-to-machine materials causing notable manufacturing problems due to their unique mechanical properties, including superelasticity, high ductility, and severe strain-hardening. In this framework, the aim of this paper is to assess the machinability of Ni-Ti alloys with reference to turning processes in order to realize a reliable and robust in-process identification of machinability conditions. An on-line sensor monitoring procedure based on the acquisition of vibration signals was implemented during the experimental turning tests. The detected vibration sensorial data were processed through an advanced signal processing method in time-frequency domain based on wavelet packet transform (WPT). The extracted sensorial features were used to construct WPT pattern feature vectors to send as input to suitably configured neural networks (NNs) for cognitive pattern recognition in order to evaluate the correlation between input sensorial information and output machinability conditions.
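A WPT pattern feature vector of the kind described can be sketched with pywt: decompose the vibration signal to a fixed depth and take normalized terminal-node energies as the input vector for the neural network. The wavelet choice, decomposition depth, and synthetic signal below are assumptions.

```python
import numpy as np
import pywt

def wpt_feature_vector(signal, wavelet="db4", level=3):
    """Build a WPT pattern feature vector from a vibration signal:
    the normalized energy of each terminal packet node. The wavelet
    and depth are illustrative choices."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()   # 2**level-dimensional vector

rng = np.random.default_rng(6)
vibration = (np.sin(2 * np.pi * 50 * np.linspace(0, 1, 4096))
             + 0.2 * rng.normal(size=4096))
fv = wpt_feature_vector(vibration)
print(fv.shape, fv.round(3))   # (8,) feature vector for the NN input
```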
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging or MRI) and functional information (Positron Emission Tomography or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities, and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion, taking advantage mainly of the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
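The HSL-based fusion idea can be illustrated by letting the MRI slice drive lightness (anatomy) while the PET slice drives hue and saturation (function). The sketch below uses Python's colorsys HLS conversion as a stand-in; the channel mapping is an assumption for illustration, not the authors' operator set.

```python
import numpy as np
import colorsys

def fuse_mri_pet_hsl(mri, pet):
    """Fuse registered MRI and PET slices in an HSL-style space:
    hue/saturation encode PET activity, lightness carries MRI anatomy.
    The exact channel mapping is an illustrative choice."""
    mri_n = (mri - mri.min()) / (mri.max() - mri.min() + 1e-12)  # lightness
    pet_n = (pet - pet.min()) / (pet.max() - pet.min() + 1e-12)
    hue = (1.0 - pet_n) * 0.66          # blue (cold) -> red (hot)
    sat = pet_n                          # saturate where PET is active
    hls_to_rgb = np.vectorize(colorsys.hls_to_rgb)
    r, g, b = hls_to_rgb(hue, mri_n, sat)
    return np.dstack([r, g, b])

rng = np.random.default_rng(7)
mri_slice = rng.random((128, 128))     # stand-in for a registered MRI slice
pet_slice = rng.random((128, 128))     # stand-in for the matching PET slice
fused = fuse_mri_pet_hsl(mri_slice, pet_slice)   # (128, 128, 3) RGB image
```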
Coupling Sensing Hardware with Data Interrogation Software for Structural Health Monitoring
Farrar, Charles R.; Allen, David W.; Park, Gyuhae; ...
2006-01-01
The process of implementing a damage detection strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM). The authors' approach is to address the SHM problem in the context of a statistical pattern recognition paradigm. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition and Cleansing, (3) Feature Extraction and Data Compression, and (4) Statistical Model Development for Feature Discrimination. These processes must be implemented through hardware or software and, in general, some combination of these two approaches will be used. This paper will discuss each portion of the SHM process with particular emphasis on the coupling of a general purpose data interrogation software package for structural health monitoring with a modular wireless sensing and processing platform. More specifically, this paper will address the need to take an integrated hardware/software approach to developing SHM solutions.
The Second Language Acquisition of Number and Gender in Swahili: A Feature Reassembly Approach
ERIC Educational Resources Information Center
Spinner, Patti
2013-01-01
Much of the recent discussion surrounding the second language acquisition of morphology has centered on the question of whether learners can acquire new formal features. Lardiere's (2008, 2009) Feature Reassembly approach offers a new direction for research in this area by emphasizing the challenges presented by crosslinguistic differences in the…
Feature Biases in Early Word Learning: Network Distinctiveness Predicts Age of Acquisition
ERIC Educational Resources Information Center
Engelthaler, Tomas; Hills, Thomas T.
2017-01-01
Do properties of a word's features influence the order of its acquisition in early word learning? Combining the principles of mutual exclusivity and shape bias, the present work takes a network analysis approach to understanding how feature distinctiveness predicts the order of early word learning. Distance networks were built from nouns with edge…
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Kim, Ye-seul; Choi, Sunghoon; Lee, Haenghwa; Jo, Byungdu; Choi, Seungyeon; Shin, Jungwook; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a recently developed medical imaging modality with several advantages for diagnosing lung disease. For example, CDT provides depth information at a relatively low radiation dose compared to computed tomography (CT). However, a major problem with CDT is the image artifacts associated with data incompleteness resulting from limited-angle data acquisition in the CDT geometry. For this reason, the sensitivity of lung disease detection has been limited compared to CT. In this study, to improve the sensitivity of lung disease detection in CDT, we developed a computer-aided diagnosis (CAD) system based on machine learning. To design the CAD system, we used 100 cropped images of lung nodules and 100 cropped images of normal lesions acquired with lung man phantoms and a prototype CDT system. We used machine learning techniques based on the support vector machine (SVM) and the Gabor filter. The Gabor filter was used to extract the characteristics of lung nodules, and we compared its feature extraction performance with various scale and orientation parameters, using 3, 4, and 5 scales and 4, 6, and 8 orientations. After feature extraction, an SVM was used to classify the lesion features. The linear, polynomial and Gaussian kernels of the SVM were compared to determine the best SVM conditions for CDT reconstruction images. The results showed that the CAD system with machine learning is capable of automatic lung lesion detection. Furthermore, detection performance was best when the Gabor filter with 5 scales and 8 orientations was used with the Gaussian-kernel SVM. In conclusion, our suggested CAD system improved the sensitivity of lung lesion detection in CDT, and the chosen Gabor filter and SVM conditions achieved higher detection performance for the developed CAD system for CDT.
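The best-performing configuration, a Gabor bank with 5 scales and 8 orientations feeding a Gaussian-kernel SVM, can be sketched as follows. The frequency ladder and the mean/std response statistics are assumptions for illustration, and the commented training lines (with the hypothetical `cropped_rois` and `labels` names) show the intended use.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_feature_vector(img, n_scales=5, n_orient=8):
    """Gabor filter-bank features: mean and std of the filter-response
    magnitude over 5 scales and 8 orientations (the best-performing
    setting reported above). The frequencies are illustrative."""
    feats = []
    for s in range(n_scales):
        freq = 0.05 * (2 ** s)            # assumed dyadic frequency ladder
        for o in range(n_orient):
            theta = o * np.pi / n_orient
            real, imag = gabor(img, frequency=freq, theta=theta)
            mag = np.hypot(real, imag)    # response magnitude
            feats += [mag.mean(), mag.std()]
    return np.array(feats)                # 80-dimensional feature vector

# Hypothetical usage with cropped nodule/normal ROIs and binary labels:
# X = np.stack([gabor_feature_vector(roi) for roi in cropped_rois])
# clf = SVC(kernel="rbf").fit(X, labels)  # Gaussian-kernel SVM
```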
Multi-Temporal Classification and Change Detection Using UAV Images
NASA Astrophysics Data System (ADS)
Makuti, S.; Nex, F.; Yang, M. Y.
2018-05-01
In this paper, different methodologies for the classification and change detection of UAV image blocks are explored. The UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate in repeated data collections over a changing area such as a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification and change detection. A set of state-of-the-art features has been used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification purposes, a Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm, while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: in terms of overall accuracy, post-classification reached up to 62.6%, while pre-classification change detection reached 46.5%. These results represent a first useful indication for future works and developments.
Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.
Jiang, Zhixing; Zhang, David; Lu, Guangming
2018-04-19
Radial artery pulse diagnosis has been playing an important role in traditional Chinese medicine (TCM). Owing to its non-invasiveness and convenience, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at patients' wrists to make diagnoses based on their non-objective personal experience. With research on pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample using a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with a fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, which indicates that our method can represent the original signal better. The classification performance of the proposed feature is superior to other features extracted from the waveform, such as the auto-regression model and the Gaussian mixture model. The coefficients of the optimized DFS function, which is used to fit the arterial pressure waveforms, achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
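The DFS fit itself is a linear least-squares problem: build a design matrix of sine/cosine columns and solve for the coefficients, which then form the feature vector. A minimal sketch follows; the number of harmonics and the synthetic pulse are assumptions.

```python
import numpy as np

def dfs_fit(waveform, n_harmonics=8):
    """Fit one averaged pulse period with a discrete Fourier series by
    least squares and return the coefficients as the feature vector.
    The number of harmonics is an illustrative choice."""
    n = waveform.size
    t = np.arange(n) / n
    cols = [np.ones(n)]                       # DC term
    for k in range(1, n_harmonics + 1):       # sine/cosine basis columns
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, waveform, rcond=None)
    fitted = A @ coeffs
    rmse = np.sqrt(np.mean((waveform - fitted) ** 2))
    return coeffs, rmse   # coeffs: (2*n_harmonics + 1,) feature vector

rng = np.random.default_rng(8)
pulse = (np.sin(2 * np.pi * np.linspace(0, 1, 200)) ** 2
         + 0.05 * rng.normal(size=200))       # stand-in averaged waveform
features, err = dfs_fit(pulse)
print(features.shape, f"fitting RMSE = {err:.4f}")
```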
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.
Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based prediction of an antidepressant's treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. Consequently, a feature matrix was constructed involving time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed based on a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan
2014-09-01
In this paper, feature extraction and pattern recognition for distributed optical fiber sensing signals have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction methods to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder and rain signals, etc.), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector. In the process of pattern recognition, a diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
A Feature-Based Contrastive Approach to the L2 Acquisition of Specificity
ERIC Educational Resources Information Center
Cho, Jacee; Slabakova, Roumyana
2017-01-01
This study examined the acquisition of the Russian indefinite determiners ("kakoj"-"to" "which"-"to" and "kakoj"-"nibud" "which"-"nibud'') encoding scopal specificity by English and Korean native speakers within the feature-based contrastive framework (Lardiere 2008, 2009).…
Challenges in Extracting Information From Large Hydrogeophysical-monitoring Datasets
NASA Astrophysics Data System (ADS)
Day-Lewis, F. D.; Slater, L. D.; Johnson, T.
2012-12-01
Over the last decade, new automated geophysical data-acquisition systems have enabled collection of increasingly large and information-rich geophysical datasets. Concurrent advances in field instrumentation, web services, and high-performance computing have made real-time processing, inversion, and visualization of large three-dimensional tomographic datasets practical. Geophysical-monitoring datasets have provided high-resolution insights into diverse hydrologic processes including groundwater/surface-water exchange, infiltration, solute transport, and bioremediation. Despite the high information content of such datasets, extraction of quantitative or diagnostic hydrologic information is challenging. Visual inspection and interpretation for specific hydrologic processes is difficult for datasets that are large, complex, and (or) affected by forcings (e.g., seasonal variations) unrelated to the target hydrologic process. New strategies are needed to identify salient features in spatially distributed time-series data and to relate temporal changes in geophysical properties to hydrologic processes of interest while effectively filtering unrelated changes. Here, we review recent work using time-series and digital-signal-processing approaches in hydrogeophysics. Examples include applications of cross-correlation, spectral, and time-frequency (e.g., wavelet and Stockwell transforms) approaches to (1) identify salient features in large geophysical time series; (2) examine correlation or coherence between geophysical and hydrologic signals, even in the presence of non-stationarity; and (3) condense large datasets while preserving information of interest. Examples demonstrate analysis of large time-lapse electrical tomography and fiber-optic temperature datasets to extract information about groundwater/surface-water exchange and contaminant transport.
Extracting cross sections and water levels of vegetated ditches from LiDAR point clouds
NASA Astrophysics Data System (ADS)
Roelens, Jennifer; Dondeyne, Stefaan; Van Orshoven, Jos; Diels, Jan
2016-12-01
The hydrologic response of a catchment is sensitive to the morphology of the drainage network. The dimensions of bigger channels are usually well known; however, geometrical data for man-made ditches are often missing, as they are numerous and small. Aerial LiDAR data offer the possibility to extract these small geometrical features. Analysing the three-dimensional point clouds directly maintains the highest degree of information. A longitudinal and a cross-sectional buffer were used to extract the cross-sectional profile points from the LiDAR point cloud. The profile was represented by spline functions fitted through the minimum envelope of the extracted points. The cross-sectional ditch profiles were classified for the presence of water and vegetation based on the normalized difference water index and the spatial characteristics of the points along the profile. The normalized difference water index was created using the RGB and intensity data coupled to the LiDAR points. The mean vertical deviation of 0.14 m found between the extracted and reference cross sections could mainly be attributed to the occurrence of water and partly to vegetation on the banks. In contrast to the cross-sectional area, the extracted width was not influenced by the environment (coefficient of determination R2 = 0.87). Water and vegetation influenced the extracted ditch characteristics, but the proposed method is still robust and therefore facilitates input data acquisition and improves the accuracy of spatially explicit hydrological models.
Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon
2015-01-01
The growing interest in and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from limitations on its operability in complex indoor environments due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for the registration of each scanned point cloud. Alternatively, the kinematic 3D laser scanning system proposed herein uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The superiority of the proposed constrained adjustment lies in its reduction of the uncertainties of the adjusted lines, leading to a successful data association process. In the present study, kinematic scanning with and without constrained adjustment was comparatively evaluated at two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, S; Jeraj, R; Galavis, P
Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. Fifty texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs. 3D (Wilcoxon, α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R < 0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ = 6%) than 3D (σ < 1%) extraction. Conclusion: Sensitivity and correlation of various texture features were shown to differ significantly between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.
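The sensitivity metric described (percent difference relative to the average value across reconstructions) can be computed roughly as below; the study's exact definition of Range_var may differ.

```python
import numpy as np

def reconstruction_sensitivity(feature_values):
    """Overall range of variation of one texture feature across
    reconstruction settings, as percent difference relative to the
    mean value (a rough stand-in for Range_var in the study above).

    feature_values: array of shape (n_reconstructions,) for one
    feature in one patient."""
    v = np.asarray(feature_values, dtype=float)
    pct = 100.0 * (v - v.mean()) / v.mean()
    return pct.max() - pct.min()

# Illustrative use: one feature under five reconstruction settings
print(reconstruction_sensitivity([10.2, 10.5, 9.8, 10.9, 10.1]))
```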
Target recognition based on convolutional neural network
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian
2017-11-01
An important part of object target recognition is feature extraction, which can be divided into hand-crafted feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its fully connected structure makes over-fitting highly likely. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained layer by layer as a convolutional neural network (CNN), which can extract features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.
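A minimal layer-by-layer CNN of the kind described can be sketched in PyTorch as follows; the architecture, input size and class count are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal hierarchical feature extractor: early convolutions learn
    low-level features, deeper ones learn more discriminative features,
    followed by a linear classifier head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                   # x: (batch, 1, 32, 32)
        f = self.features(x)
        return self.classifier(f.flatten(1))

logits = SmallCNN()(torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```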
Near ground level sensing for spatial analysis of vegetation
NASA Technical Reports Server (NTRS)
Sauer, Tom; Rasure, John; Gage, Charlie
1991-01-01
Measured changes in vegetation indicate the dynamics of ecological processes and can identify the impacts from disturbances. Traditional methods of vegetation analysis tend to be slow because they are labor intensive; as a result, these methods are often confined to small local area measurements. Scientists need new algorithms and instruments that will allow them to efficiently study environmental dynamics across a range of different spatial scales. A new methodology that addresses this problem is presented. This methodology includes the acquisition, processing, and presentation of near ground level image data and its corresponding spatial characteristics. The systematic approach taken encompasses a feature extraction process, a supervised and unsupervised classification process, and a region labeling process yielding spatial information.
Clancy, Neil T.; Stoyanov, Danail; James, David R. C.; Di Marco, Aimee; Sauvage, Vincent; Clark, James; Yang, Guang-Zhong; Elson, Daniel S.
2012-01-01
Sequential multispectral imaging is an acquisition technique that involves collecting images of a target at different wavelengths, to compile a spectrum for each pixel. In surgical applications it suffers from low illumination levels and motion artefacts. A three-channel rigid endoscope system has been developed that allows simultaneous recording of stereoscopic and multispectral images. Salient features on the tissue surface may be tracked during the acquisition in the stereo cameras and, using multiple camera triangulation techniques, this information used to align the multispectral images automatically even though the tissue or camera is moving. This paper describes a detailed validation of the set-up in a controlled experiment before presenting the first in vivo use of the device in a porcine minimally invasive surgical procedure. Multispectral images of the large bowel were acquired and used to extract the relative concentration of haemoglobin in the tissue despite motion due to breathing during the acquisition. Using the stereoscopic information it was also possible to overlay the multispectral information on the reconstructed 3D surface. This experiment demonstrates the ability of this system for measuring blood perfusion changes in the tissue during surgery and its potential use as a platform for other sequential imaging modalities. PMID:23082296
Vertical Feature Mask Feature Classification Flag Extraction
Atmospheric Science Data Center
2013-03-28
Vertical Feature Mask Feature Classification Flag Extraction This routine demonstrates extraction of the ... in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL) ...
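For readers not using IDL, the underlying bit-field extraction looks roughly like the Python sketch below. The bit layout and type codes follow the CALIPSO VFM documentation as best recalled here, so treat them as assumptions to be verified against the product catalogue for the data release in use.

```python
import numpy as np

# Feature type is assumed to occupy the three least-significant bits of
# each 16-bit VFM feature classification flag (bits 1-3 in the CALIPSO
# data products catalogue); verify against the documentation.
FEATURE_TYPE = {0: "invalid", 1: "clear air", 2: "cloud", 3: "aerosol",
                4: "stratospheric feature", 5: "surface",
                6: "subsurface", 7: "no signal"}

def feature_type(flags):
    """Extract the feature-type classification from VFM flag values."""
    return np.asarray(flags, dtype=np.uint16) & 0b111   # lowest 3 bits

flags = np.array([2, 10, 27], dtype=np.uint16)
print([FEATURE_TYPE[int(t)] for t in feature_type(flags)])
```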
Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng
2013-08-01
Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
A Fault Recognition System for Gearboxes of Wind Turbines
NASA Astrophysics Data System (ADS)
Yang, Zhiling; Huang, Haiyue; Yin, Zidong
2017-12-01
Costs of maintenance and loss of power generation caused by faults in wind turbine gearboxes are the main components of operating costs for a wind farm. Therefore, the technology of condition monitoring and fault recognition for wind turbine gearboxes is becoming a hot topic. A condition monitoring and fault recognition system (CMFRS) is presented in this paper for condition-based maintenance (CBM) of wind turbine gearboxes. Vibration signals from acceleration sensors at different locations on the gearbox and data from the supervisory control and data acquisition (SCADA) system are collected by the CMFRS. A feature extraction and optimization algorithm is then applied to these operational data. Furthermore, to recognize gearbox faults, the GSO-LSSVR algorithm is proposed, combining the least squares support vector regression machine (LSSVR) with the Glowworm Swarm Optimization (GSO) algorithm. Finally, the results show that the fault recognition system used in this paper achieves a high identification rate for three states of wind turbine gears; moreover, the combination of data features affects the identification rate, and the feature selection and optimization algorithm presented in this paper yields a good feature subset for fault recognition.
Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI
Farooq, Hamza; Xu, Junqian; Nam, Jung Who; Keefe, Daniel F.; Yacoub, Essa; Georgiou, Tryphon; Lenglet, Christophe
2016-01-01
Diffusion MRI (dMRI) reveals microstructural features of the brain white matter by quantifying the anisotropic diffusion of water molecules within axonal bundles. Yet, identifying features such as axonal orientation dispersion, density, diameter, etc., in complex white matter fiber configurations (e.g. crossings) has proved challenging. Besides optimized data acquisition and advanced biophysical models, computational procedures to fit such models to the data are critical. However, these procedures have been largely overlooked by the dMRI microstructure community and new, more versatile, approaches are needed to solve complex biophysical model fitting problems. Existing methods are limited to models assuming single fiber orientation, relevant to limited brain areas like the corpus callosum, or multiple orientations but without the ability to extract detailed microstructural features. Here, we introduce a new and versatile optimization technique (MIX), which enables microstructure imaging of crossing white matter fibers. We provide a MATLAB implementation of MIX, and demonstrate its applicability to general microstructure models in fiber crossings using synthetic as well as ex-vivo and in-vivo brain data. PMID:27982056
NASA Astrophysics Data System (ADS)
Tiwari, Pallavi; Danish, Shabbar; Madabhushi, Anant
2014-03-01
Laser interstitial thermal therapy (LITT) has recently emerged as a new treatment modality for cancer pain management that targets the cingulum (the pain center in the brain), and has shown promise over radio-frequency (RF) based ablation, which is reported to provide only temporary relief. One of the major advantages enjoyed by LITT is its compatibility with magnetic resonance imaging (MRI), allowing high-resolution in vivo imaging to be used in LITT procedures. Since laser ablation for pain management is currently exploratory and is only performed at a few centers worldwide, its short- and long-term effects on the cingulum are currently unknown. Traditionally, treatment effects are evaluated by monitoring changes in the volume of the ablation zone post-treatment. However, this is sub-optimal, since it involves evaluating a single global parameter (volume) to detect changes between pre- and post-treatment MRI. Additionally, qualitative observations of LITT-related changes on multi-parametric MRI (MP-MRI) do not specifically address differentiation between the appearance of treatment-related changes (edema, necrosis) and recurrence of the disease (pain recurrence). In this work, we explore the utility of computer-extracted texture descriptors on MP-MRI to capture early treatment-related changes on a per-voxel basis by extracting quantitative relationships that may allow for an in-depth understanding of tissue response to LITT on MRI, subtle changes that may not be appreciable in the original MR intensities. The second objective of this work is to investigate the efficacy of different MRI protocols in accurately capturing treatment-related changes within and outside the ablation zone post-LITT. A retrospective cohort of studies comprising pre- and 24-hour post-LITT 3 Tesla T1-weighted (T1w), T2w, T2-GRE, and T2-FLAIR acquisitions was considered. Our scheme involved (1) inter-protocol as well as inter-acquisition affine registration of pre- and post-LITT MRI, (2) quantitation of MRI parameters by correcting for intensity drift in order to examine tissue-specific response, and (3) quantification of MRI maps via texture and intensity features to evaluate changes in MR markers pre- and post-LITT. A total of 78 texture features, comprising non-steerable and steerable gradient and second-order statistical features, were extracted from pre- and post-LITT MP-MRI on a per-voxel basis. Quantitative, voxel-wise comparison of the changes in MRI texture features between pre- and post-LITT MRI indicates that (a) steerable and non-steerable gradient texture features were highly sensitive as well as specific in predicting subtle micro-architectural changes within and around the ablation zone pre- and post-LITT, (b) FLAIR was identified as the most sensitive MRI protocol in identifying early treatment changes, yielding a normalized percentage change of 360% within the ablation zone relative to its pre-LITT value, and (c) GRE was identified as the most sensitive MRI protocol in quantifying changes outside the ablation zone post-LITT. Our preliminary results thus indicate great potential for non-invasive computerized MRI features in determining localized micro-architectural focal treatment-related changes post-LITT.
Ibrahim, Wisam; Abadeh, Mohammad Saniee
2017-05-21
Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from amino-acid sequences to obtain better classifiers. In this paper, we propose six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) is implemented to reduce the number of extracted features. The extracted feature vectors are used with the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features extracted in the second stage are used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in the SCOP datasets. The experimental results show that the feature vectors extracted in the first stage improve the performance of DELM in extracting new useful features in the second stage.
NASA Astrophysics Data System (ADS)
Ibrahim, Elsy; Kim, Wonkook; Crawford, Melba; Monbaliu, Jaak
2017-02-01
Remote sensing has been successfully utilized to distinguish and quantify sediment properties in the intertidal environment. Classification approaches for imagery are popular and powerful yet can lead to site- and case-specific results. Such specificity creates challenges for temporal studies. Thus, this paper investigates the use of regression models to quantify sediment properties instead of classifying them. Two regression approaches, namely multiple regression (MR) and support vector regression (SVR), are used in this study for the retrieval of bio-physical variables of the intertidal surface sediment of the IJzermonding, a Belgian nature reserve. In the regression analysis, mud content, chlorophyll a concentration, organic matter content, and soil moisture are estimated using radiometric variables of two airborne sensors, namely the airborne hyperspectral sensor (AHS) and the airborne prism experiment (APEX), and using field hyperspectral acquisitions by an analytical spectral device (ASD). The performance of the two regression approaches is best for the estimation of moisture content. SVR attains the highest accuracy without feature reduction, while MR achieves good results when feature reduction is carried out. Sediment property maps are successfully obtained using the models and hyperspectral imagery, where SVR used with all bands achieves the best performance. The study also involves the extraction of weights identifying the contribution of each band of the images in the quantification of each sediment property when MR and principal component analysis are used.
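A rough sketch of the MR-versus-SVR comparison, with synthetic stand-ins for band reflectances and a single sediment property; the model settings are illustrative, not those tuned in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Illustrative stand-ins: rows are field samples, columns are band
# reflectances; y is one sediment property (e.g., moisture content).
rng = np.random.default_rng(0)
X = rng.random((120, 20))                      # 20 spectral bands
y = 3 * X[:, 4] - 2 * X[:, 11] + 0.1 * rng.standard_normal(120)

mr = LinearRegression()
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))

for name, model in [("MR", mr), ("SVR", svr)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```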
Prognostics and Health Management of Wind Turbines: Current Status and Future Opportunities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Shuangwen
Prognostics and health management is not a new concept. It has been used in relatively mature industries, such as aviation and electronics, to help improve operation and maintenance (O&M) practices. In the wind industry, prognostics and health management is relatively new. The level of both wind industry applications and research and development (R&D) has increased in recent years because of its potential for reducing the O&M cost of wind power, especially for turbines installed offshore. The majority of wind industry application efforts have focused on diagnosis based on various sensing and feature extraction techniques. For R&D, activities are being conducted in almost all areas of a typical prognostics and health management framework (i.e., sensing, data collection, feature extraction, diagnosis, prognosis, and maintenance scheduling). This presentation provides an overview of the current status of wind turbine prognostics and health management, focusing on drivetrain condition monitoring through vibration, oil debris, and oil condition analysis techniques. It also discusses turbine component health diagnosis through data mining and modeling based on supervisory control and data acquisition system data. Finally, it provides a brief survey of R&D activities for wind turbine prognostics and health management, along with future opportunities.
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Benz, Michaela; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2015-03-01
The morphological analysis of bone marrow smears is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually with a bright-field microscope. This is a time-consuming, partly subjective and tedious process. Furthermore, repeated examinations of a slide yield intra- and inter-observer variances. For this reason, automation of morphological bone marrow analysis is pursued. This analysis comprises several steps: image acquisition and smear detection, cell localization and segmentation, feature extraction and cell classification. The automated classification of bone marrow cells depends on the automated cell segmentation and on the choice of adequate features extracted from different parts of the cell. In this work we focus on the evaluation of support vector machines (SVMs) and random forests (RFs) for the differentiation of bone marrow cells into 16 different classes, including immature and abnormal cell classes. Data sets of different segmentation quality are used to test the two approaches. Automated solutions for the morphological analysis of bone marrow smears could use such a classifier to pre-classify bone marrow cells and thereby shorten the examination duration.
Computer vision and machine learning for robust phenotyping in genome-wide studies
Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R. V. Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K.
2017-01-01
Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for the genetic study of the abiotic stress iron deficiency chlorosis (IDC) in soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions were evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for a genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline by identifying a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems. PMID:28272456
Blumrosen, Gaddi; Luttwak, Ami
2013-01-01
Acquisition of patient kinematics in different environments plays an important role in the detection of risk situations such as fall detection in elderly patients, in the rehabilitation of patients with injuries, and in the design of treatment plans for patients with neurological diseases. Received Signal Strength Indicator (RSSI) measurements in a Body Area Network (BAN) capture the signal power on a radio link. The main aim of this paper is to demonstrate the potential of utilizing RSSI measurements in the assessment of human kinematic features, and to give methods to determine these features. RSSI measurements can be used for tracking the displacements of different body parts on scales of a few centimeters, for classifying motion and gait patterns instead of using inertial sensors, and as an additional reference to other sensors, in particular inertial sensors. Criteria and analytical methods for body part tracking, kinematic motion feature extraction, and a Kalman filter model for the aggregation of RSSI and inertial sensors were derived. The methods were verified by a set of experiments performed in an indoor environment. In the future, the use of RSSI measurements can help in the continuous assessment of various kinematic features of patients during their daily life activities and enhance medical diagnosis accuracy at lower costs. PMID:23979481
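A minimal sketch of the RSSI/inertial aggregation idea, implemented here as a one-dimensional Kalman filter in which IMU acceleration drives the prediction and an RSSI-derived position is the measurement; the state model and noise settings are assumptions, not the paper's filter.

```python
import numpy as np

def kalman_rssi_imu(accel, rssi_pos, dt=0.02, q=0.05, r=0.04):
    """1-D Kalman filter: inertial acceleration is the control input
    for the state prediction, and RSSI-derived displacement serves as
    the measurement, in the spirit of the aggregation described above."""
    F = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
    B = np.array([0.5 * dt**2, dt])          # control input (acceleration)
    H = np.array([[1.0, 0.0]])               # RSSI measures position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    out = []
    for a, z in zip(accel, rssi_pos):
        x = F @ x + B * a                    # predict with IMU
        P = F @ P @ F.T + Q
        y = z - H @ x                        # innovation from RSSI
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Illustrative fused track over 100 samples of noisy RSSI positions
track = kalman_rssi_imu(np.zeros(100), 0.05 + 0.02 * np.random.randn(100))
print(track[-1])
```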
Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.
Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin
2017-11-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance image (fMRI) acquisitions provide better insight into brain dynamics. Some artefacts due to simultaneous acquisition pose a threat to the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data from 11 participants. For the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. Real data, on the other hand, were recorded under two conditions, i.e. resting state (eyes-closed dataset) and task-influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely average artefact subtraction (AAS), optimal basis set (OBS) and combined independent component analysis and principal component analysis (ICA-PCA), statistical analyses of the results showed that the proposed method has better performance, and the differences were significant for all quantitative parameters except for the power spectrum and sample entropy. The proposed method does not require any reference signal, prior information or assumption to extract the BCG artefact. It will be very useful in circumstances where a reference signal is not available.
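A loose sketch of how EMD and PCA might be combined for BCG reduction is given below; it relies on the third-party PyEMD package (distributed as "EMD-signal") and illustrates the general idea only, not the authors' hybrid algorithm.

```python
import numpy as np
from PyEMD import EMD                      # third-party package (assumed available)
from sklearn.decomposition import PCA

def reduce_bcg(eeg, n_imfs=3, n_comp=2):
    """Loose EMD+PCA clean-up sketch: for each channel, sum the slowest
    IMFs as a BCG-dominated estimate, extract the dominant cross-channel
    components with PCA, and subtract their projection. An illustration
    of the idea, not the authors' algorithm."""
    emd = EMD()
    artefact = np.zeros_like(eeg)
    for ch in range(eeg.shape[0]):
        imfs = emd(eeg[ch])                # IMFs ordered fast to slow
        artefact[ch] = imfs[-n_imfs:].sum(axis=0)
    pca = PCA(n_components=n_comp)
    scores = pca.fit_transform(artefact.T)     # (samples, components)
    bcg = pca.inverse_transform(scores).T      # rank-limited BCG estimate
    return eeg - bcg

cleaned = reduce_bcg(np.random.randn(8, 2000))   # 8 channels, 2000 samples
print(cleaned.shape)
```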
Djiongo Kenfack, Cedrigue Boris; Monga, Olivier; Mpong, Serge Moto; Ndoundam, René
2018-03-01
Within the last decade, several approaches using quaternion numbers to handle and model multiband images in a holistic manner were introduced. The quaternion Fourier transform can be efficiently used to model texture in multidimensional data such as color images. For practical applications, multispectral satellite data appear as a primary source for measuring past trends and monitoring changes in forest carbon stocks. In this work, we propose a texture-color descriptor based on the quaternion Fourier transform to extract relevant information from multiband satellite images. We propose a new multiband image texture extraction model, called FOTO++, in order to address biomass estimation issues. The first stage consists in removing noise from the multispectral data while preserving the edges of canopies. Afterward, color texture descriptors are extracted using a discrete form of the quaternion Fourier transform, and finally the support vector regression method is used to deduce biomass estimates from texture indices. Our texture features are modeled using a vector composed of the radial spectrum derived from the amplitude of the quaternion Fourier transform. We conduct several experiments in order to study the sensitivity of our model to acquisition parameters. We also assess its performance both on synthetic images and on real multispectral images of Cameroonian forest. The results show that our model is more robust to acquisition parameters than the classical Fourier Texture Ordination model (FOTO). Our scheme is also more accurate for aboveground biomass estimation. We stress that a similar methodology could be implemented using quaternion wavelets. These results highlight the potential of the quaternion-based approach to study multispectral satellite images.
NASA Astrophysics Data System (ADS)
Ferraz, A.; Painter, T. H.; Saatchi, S.; Bormann, K. J.
2016-12-01
Fusion of multi-temporal Airborne Snow Observatory (ASO) lidar data for studies of mountainous vegetation ecosystems. The NASA Jet Propulsion Laboratory developed the Airborne Snow Observatory (ASO), a coupled scanning lidar system and imaging spectrometer, to quantify the spatial distribution of snow volume and dynamics over mountain watersheds (Painter et al., 2015). To do this, ASO flies weekly over mountainous areas during the snowfall and snowmelt seasons. Additional flights are conducted in snow-off conditions to calculate Digital Terrain Models (DTMs). In this study, we focus on the reliability of ASO lidar data for characterizing 3D forest vegetation structure. The density of a single point cloud acquisition is nearly 1 pt/m2, which is not optimal for properly characterizing vegetation. However, ASO covers a given study site up to 14 times a year, which enables computing a high-resolution point cloud by merging single acquisitions. In this study, we present a method to automatically register ASO multi-temporal lidar 3D point clouds. Although flight specifications do not change between acquisition dates, lidar datasets might have significant planimetric shifts due to inaccuracies in platform trajectory estimation introduced by the GPS system and drifts of the IMU. A large number of methodologies address the problem of 3D data registration (Gressin et al., 2013). Briefly, they look for common primitive features in both datasets, such as building corners, structures like electric poles, DTM breaklines, or deformations. However, they are not suited to our experiment. First, single-acquisition point clouds have low density, which makes the extraction of primitive features difficult. Second, the landscape changes significantly between flights due to snowfall and snowmelt. Therefore, we developed a method to automatically register point clouds using tree apexes as keypoints, because they are features that are expected to experience little change during the winter season. We applied the method to 14 lidar datasets (12 snow-on and 2 snow-off) acquired over the Tuolumne River Basin (California) in 2014. To assess the reliability of the merged point cloud, we analyze the quality of vegetation-related products such as canopy height models (CHM) and vertical vegetation profiles.
Iris recognition based on key image feature extraction.
Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y
2008-01-01
In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.
Experience improves feature extraction in Drosophila.
Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike
2007-05-09
Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species, from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature from multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies trained previously with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective mushroom bodies (MBs), one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.
Automatic extraction of property norm-like data from large text corpora.
Kelly, Colin; Devereux, Barry; Korhonen, Anna
2014-01-01
Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
Rapid Statistical Learning Supporting Word Extraction From Continuous Speech.
Batterink, Laura J
2017-07-01
The identification of words in continuous speech, known as speech segmentation, is a critical early step in language acquisition. This process is partially supported by statistical learning, the ability to extract patterns from the environment. Given that speech segmentation represents a potential bottleneck for language acquisition, patterns in speech may be extracted very rapidly, without extensive exposure. This hypothesis was examined by exposing participants to continuous speech streams composed of novel repeating nonsense words. Learning was measured on-line using a reaction time task. After merely one exposure to an embedded novel word, learners demonstrated significant learning effects, as revealed by faster responses to predictable than to unpredictable syllables. These results demonstrate that learners gained sensitivity to the statistical structure of unfamiliar speech on a very rapid timescale. This ability may play an essential role in early stages of language acquisition, allowing learners to rapidly identify word candidates and "break in" to an unfamiliar language.
Text feature extraction based on deep learning: a review.
Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan
2017-01-01
Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, but, aiming at new applications, deep learning can acquire new effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.
Grand-Brochier, Manuel; Vacavant, Antoine; Cerutti, Guillaume; Kurtz, Camille; Weber, Jonathan; Tougne, Laure
2015-05-01
In this paper, we propose a comparative study of various segmentation methods applied to the extraction of tree leaves from natural images. This study follows the design of a mobile application developed by Cerutti et al. (published in ReVeS Participation--Tree Species Classification Using Random Forests and Botanical Features. CLEF 2012), highlighting the impact of the choices made for the segmentation stage. All tests are based on a database of 232 images of tree leaves depicted on natural backgrounds from smartphone acquisitions. We also study the improvements, in terms of performance, gained by using preprocessing tools, such as interaction between the user and the application through an input stroke, as well as the use of color distance maps. The results presented in this paper show that the method developed by Cerutti et al. (denoted Guided Active Contour) obtains the best score for almost all observation criteria. Finally, we detail our online benchmark composed of 14 unsupervised methods and 6 supervised ones.
Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert
2010-10-01
Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition, the raw PET data were reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of each feature was calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray-level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high-gray-level run emphasis, gray-level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray-level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicate that these features presented large variations and therefore cannot be considered good candidates for tumor segmentation. PMID:20831489
Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information
NASA Astrophysics Data System (ADS)
Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.
2015-10-01
The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only the geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some terrestrial laser scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape in Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral threshold are identified as points of that specific feature class in the dataset. This terrain extraction process is implemented in purpose-built MATLAB code. Results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
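The preset-spectral-threshold step can be sketched as a simple boolean mask over coloured points; the RGB bounds below are invented for illustration, and the original implementation was in MATLAB.

```python
import numpy as np

def extract_class_points(points, rgb, lower, upper):
    """Keep coloured points whose RGB values fall inside a preset
    spectral threshold for one feature class (e.g., bare terrain).
    points: (N, 3) xyz; rgb: (N, 3) uint8; lower/upper: length-3 bounds."""
    mask = np.all((rgb >= lower) & (rgb <= upper), axis=1)
    return points[mask]

# Illustrative thresholds for a brownish "terrain" class (assumed values)
pts = np.random.rand(1000, 3) * 10
cols = np.random.randint(0, 256, (1000, 3))
terrain = extract_class_points(pts, cols, lower=(90, 60, 30), upper=(180, 140, 100))
print(len(terrain), "candidate terrain points")
```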
Implementation of a portable device for real-time ECG signal analysis.
Jeon, Taegyun; Kim, Byoungho; Jeon, Moongu; Lee, Byung-Geun
2014-12-10
Cardiac disease is one of the main causes of catastrophic mortality. Therefore, detecting the symptoms of cardiac disease as early as possible is important for increasing the patient's survival. In this study, a compact and effective architecture for detecting atrial fibrillation (AFib) and myocardial ischemia is proposed. We developed a portable device using this architecture, which allows real-time electrocardiogram (ECG) signal acquisition and analysis for cardiac diseases. A noisy ECG signal was preprocessed by an analog front-end consisting of analog filters and amplifiers before it was converted into digital data. The analog front-end was minimized to reduce the size of the device and power consumption by implementing some of its functions with digital filters realized in software. With the ECG data, we detected QRS complexes based on wavelet analysis and feature extraction for morphological shape and regularity using an ARM processor. A classifier for cardiac disease was constructed based on features extracted from a training dataset using support vector machines. The classifier then categorized the ECG data into normal beats, AFib, and myocardial ischemia. A portable ECG device was implemented, and successfully acquired and processed ECG signals. The performance of this device was also verified by comparing the processed ECG data with high-quality ECG data from a public cardiac database. Because of reduced computational complexity, the ARM processor was able to process up to a thousand samples per second, and this allowed real-time acquisition and diagnosis of heart disease. Experimental results for detection of heart disease showed that the device classified AFib and ischemia with a sensitivity of 95.1% and a specificity of 95.9%. Current home care and telemedicine systems have a separate device and diagnostic service system, which results in additional time and cost. Our proposed portable ECG device provides captured ECG data and suspected waveform to identify sporadic and chronic events of heart diseases. This device has been built and evaluated for high quality of signals, low computational complexity, and accurate detection.
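One of the analog front-end functions moved into software, band-pass filtering, might look like the sketch below; the cut-offs and sampling rate are typical ECG values, not the device's documented parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ecg_bandpass(sig, fs=360.0, low=0.5, high=40.0, order=4):
    """Zero-phase Butterworth band-pass of the kind that can replace
    parts of an analog front-end in software; cut-offs here are common
    ECG choices (baseline wander below ~0.5 Hz, noise above ~40 Hz)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

fs = 360.0
t = np.arange(0, 5, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)  # beat + mains hum
clean = ecg_bandpass(raw, fs)
print(clean.shape)
```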
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder
Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. Consequently, a feature matrix was constructed involving time-frequency decomposition of EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed based on a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients. PMID:28152063
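A compact sketch of the pipeline's shape (wavelet-energy features, ROC-ranked selection, and a cross-validated logistic regression) using synthetic data; the wavelet choice, feature definition and number of kept features are illustrative assumptions, not the study's settings.

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

def dwt_features(epoch, wavelet="db4", level=4):
    """Sub-band energies of DWT coefficients for one EEG epoch;
    db4/level-4 are common choices, not necessarily the study's."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def auc_scores(X, y):
    """Rank each feature by its single-feature ROC AUC (a stand-in
    for the paper's ROC-based selection criterion)."""
    return np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])

rng = np.random.default_rng(1)
X = np.vstack([dwt_features(rng.standard_normal(512)) for _ in range(64)])
y = rng.integers(0, 2, 64)                  # responder / non-responder labels

selector = SelectKBest(score_func=auc_scores, k=3)
X_sel = selector.fit_transform(X, y)
acc = cross_val_score(LogisticRegression(), X_sel, y, cv=10).mean()
print(f"10-fold CV accuracy: {acc:.2f}")
```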
Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification.
Wang, Shouyi; Bowen, Stephen R; Chaovalitwongse, W Art; Sandison, George A; Grabowski, Thomas J; Kinahan, Paul E
2014-02-21
The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUV(peak)) over lesions of interest. Relative differences in SUV(peak) between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUV(peak) values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
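The leave-one-subject-out validation described above can be sketched with scikit-learn as follows, using synthetic stand-ins for the six selected respiratory features and the SUVpeak change.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Stand-in data: 22 patients, 6 selected respiratory-pattern features,
# target = relative change in SUVpeak between static and gated PET.
rng = np.random.default_rng(7)
X = rng.standard_normal((22, 6))
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.2 * rng.standard_normal(22)

# Leave-one-subject-out: each patient is predicted by a model trained
# on the remaining 21, mirroring the validation described above.
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("per-patient prediction errors:", np.round(np.abs(pred - y), 2))
```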
NASA Astrophysics Data System (ADS)
Jafari, Mehrnoosh; Minaei, Saeid; Safaie, Naser; Torkamani-Azar, Farah
2016-05-01
Spatial and temporal changes in the surface temperature of infected and non-infected rose plant (Rosa hybrida cv. 'Angelina') leaves were visualized using digital infrared thermography. Infected areas exhibited a presymptomatic decrease in leaf temperature of up to 2.3 °C. In this study, two experiments were conducted: one in the greenhouse (semi-controlled ambient conditions) and the other in a growth chamber (controlled ambient conditions). The effects of drought stress and darkness on the thermal images were also studied. It was found that thermal histograms of the infected leaves closely follow a standard normal distribution: they have a skewness near zero, kurtosis under 3, standard deviation larger than 0.6, and a Maximum Temperature Difference (MTD) of more than 4. For each thermal histogram, central tendency, variability, and parameters of the best-fitted Standard Normal and Laplace distributions were estimated. To classify healthy and infected leaves, feature selection was conducted and the best extracted thermal features with the largest linguistic hedge values were chosen. Among the features independent of absolute temperature measurement, MTD, SD, skewness, R2l, kurtosis and bn were selected. A neuro-fuzzy classifier was then trained to recognize healthy leaves from infected ones. The k-means clustering method was utilized to obtain the initial parameters and the fuzzy "if-then" rules. Best estimation rates of 92.55% and 92.3% were achieved in training and testing the classifier with 8 clusters. Results showed that drought stress had an adverse effect on the classification of healthy leaves: more healthy leaves under drought stress were classified as infected, causing PPV and specificity values to decrease accordingly. Image acquisition in the dark had no significant effect on classification performance.
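The histogram descriptors named above (MTD, SD, skewness, kurtosis) can be computed as in this sketch; the non-Fisher kurtosis convention is assumed because the abstract compares kurtosis to 3.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def thermal_histogram_features(temps):
    """Histogram descriptors of one leaf's pixel temperatures, following
    the feature set above: MTD (max minus min), SD, skewness, kurtosis.
    Thresholds quoted in the abstract (e.g., MTD > 4) refer to these."""
    t = np.asarray(temps, dtype=float).ravel()
    return {
        "MTD": t.max() - t.min(),
        "SD": t.std(ddof=1),
        "skewness": skew(t),
        "kurtosis": kurtosis(t, fisher=False),   # 'under 3' on this scale
    }

# Illustrative infected-leaf surface: broad, roughly normal spread
leaf = 22.0 + 1.2 * np.random.randn(120, 160)
print(thermal_histogram_features(leaf))
```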
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
Resource-Bounded Information Acquisition and Learning
2012-05-01
Candidate features arrive one at a time, and the learner's task is to select a 'best so far' set of features from the streaming features.
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
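Only the SVM-based selection half of the loop is sketched here; the constrained subspace-learning step is specific to the paper and not reproduced. The data, feature counts, and the use of recursive feature elimination with a linear SVM are stand-in assumptions.

```python
# Hedged sketch: linear-SVM feature weighting with recursive feature
# elimination, scored by ROC AUC over bootstrap resamples (mirroring the
# paper's bootstrapping strategy at a much smaller scale).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=198, n_features=300, n_informative=20,
                           random_state=0)   # stand-in for regional features
rng = np.random.default_rng(0)
aucs = []
for _ in range(20):                           # bootstrap resampling
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)   # out-of-bag test set
    rfe = RFE(LinearSVC(dual=False), n_features_to_select=30, step=0.2)
    rfe.fit(X[idx], y[idx])
    score = rfe.estimator_.decision_function(rfe.transform(X[oob]))
    aucs.append(roc_auc_score(y[oob], score))
print(f"bootstrap AUC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```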
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing volume of WAMI collections and of feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one approach recently applied to such large-scale (big data) problems. In this paper, MapReduce in Hadoop is investigated for large-scale WAMI feature extraction tasks. Specifically, a large dataset of WAMI images is divided into several splits, each containing a small subset of the images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on its assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments on feature extraction with and without MapReduce are conducted to illustrate the effectiveness of the proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
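The split-and-distribute pattern described can be mimicked locally; a rough sketch, using Python's multiprocessing pool in place of Hadoop slave nodes and ORB keypoints as a stand-in feature extractor (the frame directory is hypothetical):

```python
# Local analogue of the described map/reduce pattern: split the image list,
# extract features for each split in parallel worker processes, then
# aggregate. A real deployment would run these as Hadoop MapReduce tasks
# with output written to HDFS.
import glob
from multiprocessing import Pool

import cv2

def extract_features(path):
    """Map step: ORB keypoint descriptors for one WAMI frame."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:                          # missing/unreadable frame
        return path, 0
    orb = cv2.ORB_create(nfeatures=500)
    _, desc = orb.detectAndCompute(img, None)
    return path, 0 if desc is None else len(desc)

if __name__ == "__main__":
    paths = sorted(glob.glob("wami_frames/*.png"))   # hypothetical dataset
    with Pool(processes=8) as pool:                  # one worker per "node"
        results = pool.map(extract_features, paths)  # distribute the splits
    # Reduce step: aggregate feature counts over the collected imagery
    total = sum(n for _, n in results)
    print(f"{total} descriptors extracted from {len(results)} frames")
```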
Acquiring a 2D rolled equivalent fingerprint image from a non-contact 3D finger scan
NASA Astrophysics Data System (ADS)
Fatehpuria, Abhishika; Lau, Daniel L.; Hassebrook, Laurence G.
2006-04-01
The use of fingerprints as a biometric is both the oldest mode of computer aided personal identification and the most relied-upon technology in use today. But current fingerprint scanning systems have some challenging and peculiar difficulties. Often skin conditions and imperfect acquisition circumstances cause the captured fingerprint image to be far from ideal. Also some of the acquisition techniques can be slow and cumbersome to use and may not provide the complete information required for reliable feature extraction and fingerprint matching. Most of the difficulties arise due to the contact of the fingerprint surface with the sensor platen. To attain a fast-capture, non-contact, fingerprint scanning technology, we are developing a scanning system that employs structured light illumination as a means for acquiring a 3-D scan of the finger with sufficiently high resolution to record ridge-level details. In this paper, we describe the postprocessing steps used for converting the acquired 3-D scan of the subject's finger into a 2-D rolled equivalent image.
A framework for feature extraction from hospital medical data with applications in risk prediction.
Tran, Truyen; Luo, Wei; Phung, Dinh; Gupta, Sunil; Rana, Santu; Kennedy, Richard Lee; Larkins, Ann; Venkatesh, Svetha
2014-12-30
Feature engineering is a time-consuming component of predictive modelling. We propose a versatile platform to automatically extract features for risk prediction, based on a pre-defined and extensible entity schema. The extraction is independent of disease type or risk prediction task. Hospital medical records were transformed into event sequences, to which filters were applied to extract feature sets capturing diversity in temporal scales and data types. The features were evaluated on a readmission prediction task against baseline feature sets generated from the Elixhauser comorbidities. The prediction model was logistic regression with elastic net regularization. Prediction horizons of 1, 2, 3, 6 and 12 months were considered for four diverse diseases: diabetes, COPD, mental disorders and pneumonia, with derivation and validation cohorts defined on non-overlapping data-collection periods. For unplanned readmissions, the auto-extracted feature set using socio-demographic information and medical records outperformed baselines derived from socio-demographic information and the Elixhauser comorbidities over all 20 settings (5 prediction horizons across 4 diseases). In particular, for 30-day prediction the AUCs were: COPD baseline 0.60 (95% CI: 0.57, 0.63) vs. auto-extracted 0.67 (0.64, 0.70); diabetes baseline 0.60 (0.58, 0.63) vs. auto-extracted 0.67 (0.64, 0.69); mental disorders baseline 0.57 (0.54, 0.60) vs. auto-extracted 0.69 (0.64, 0.70); pneumonia baseline 0.61 (0.59, 0.63) vs. auto-extracted 0.70 (0.67, 0.72). These results demonstrate the advantages of automatically extracting standard features from complex medical records in a disease- and task-agnostic manner. Auto-extracted features have good predictive power over multiple time horizons, and such feature sets have the potential to form the foundation of complex automated analytic tasks.
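The stated prediction model, logistic regression with elastic net regularization, maps directly onto scikit-learn; the synthetic, imbalanced data below merely stand in for the auto-extracted record features.

```python
# Minimal sketch of the stated model: elastic-net-regularized logistic
# regression for readmission risk. Data and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=200, weights=[0.85],
                           random_state=0)  # imbalanced, like readmissions
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=0.1, max_iter=5000)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.2f}")
```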
Comparative analysis of feature extraction methods in satellite imagery
NASA Astrophysics Data System (ADS)
Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad
2017-10-01
Feature extraction techniques are used extensively in satellite imagery and are attracting considerable attention for remote sensing applications. The appropriate state-of-the-art feature extraction method depends on the categories and structures of the objects to be detected. Based on the distinctive computations of each feature extraction method, different types of images are selected to evaluate the performance of the methods: binary robust invariant scalable keypoints (BRISK), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), features from accelerated segment test (FAST), histogram of oriented gradients, and local binary patterns. Total computational time is calculated to evaluate the speed of each feature extraction method. The extracted features are counted within shadow regions and preprocessed shadow regions to compare the behaviour of each method. We have studied the combination of SURF with FAST and with BRISK individually, and found very promising results with an increased number of features and lower computational time. Finally, feature matching is discussed for all methods.
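A minimal timing/feature-count comparison in OpenCV might look as follows; SURF is omitted because it ships only in the non-free contrib build, and the random tile is a placeholder for real satellite imagery.

```python
# Sketch of the speed/feature-count comparison for keypoint detectors.
import time

import numpy as np
import cv2

# Synthetic stand-in for a satellite tile; substitute a real image as needed
img = np.random.default_rng(0).integers(0, 255, (512, 512)).astype(np.uint8)
detectors = {
    "BRISK": cv2.BRISK_create(),
    "SIFT": cv2.SIFT_create(),
    "FAST": cv2.FastFeatureDetector_create(),
}
for name, det in detectors.items():
    t0 = time.perf_counter()
    kps = det.detect(img, None)
    dt = (time.perf_counter() - t0) * 1000
    print(f"{name:5s}: {len(kps):5d} keypoints in {dt:.1f} ms")
```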
NASA Technical Reports Server (NTRS)
Horton, F. E.
1970-01-01
The utility of remote sensing techniques to urban data acquisition problems in several distinct areas was identified. This endeavor included a comparison of remote sensing systems for urban data collection, the extraction of housing quality data from aerial photography, utilization of photographic sensors in urban transportation studies, urban change detection, space photography utilization, and an application of remote sensing techniques to the acquisition of data concerning intra-urban commercial centers. The systematic evaluation of variable extraction for urban modeling and planning at several different scales, and the model derivation for identifying and predicting economic growth and change within a regional system of cities are also studied.
Bille, E; Dauphin, B; Leto, J; Bougnoux, M-E; Beretti, J-L; Lotz, A; Suarez, S; Meyer, J; Join-Lambert, O; Descamps, P; Grall, N; Mory, F; Dubreuil, L; Berche, P; Nassif, X; Ferroni, A
2012-11-01
All organisms usually isolated in our laboratory are now routinely identified by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) using the Andromas software. The aim of this study was to describe the use of this strategy in a routine clinical microbiology laboratory. The microorganisms identified included bacteria, mycobacteria, yeasts and Aspergillus spp. isolated on solid media or extracted directly from blood cultures. MALDI-TOF MS was performed on 2665 bacteria isolated on solid media, corresponding to all bacteria isolated during this period except Escherichia coli grown on chromogenic media. All acquisitions were performed without extraction. After a single acquisition, 93.1% of bacteria grown on solid media were correctly identified. When the first acquisition was not contributory, a second acquisition was performed either the same day or the next day. After two acquisitions, the rate of bacteria identified increased to 99.2%. The failures reported on 21 strains were due to an unknown profile attributed to new species (9) or an insufficient quality of the spectrum (12). MALDI-TOF MS has been applied to 162 positive blood cultures. The identification rate was 91.4%. All mycobacteria isolated during this period (22) were correctly identified by MALDI-TOF MS without any extraction. For 96.3% and 92.2% of yeasts and Aspergillus spp., respectively, the identification was obtained with a single acquisition. After a second acquisition, the overall identification rate was 98.8% for yeasts (160/162) and 98.4% (63/64) for Aspergillus spp. In conclusion, the MALDI-TOF MS strategy used in this work allows a rapid and efficient identification of all microorganisms isolated routinely.
Automatic extraction of planetary image features
NASA Technical Reports Server (NTRS)
LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)
2013-01-01
A method for the extraction of lunar data and/or planetary image features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
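A sketch of watershed segmentation driven by an edge-strength map, in the spirit of the "watershed on the Canny gradient" step; here a Sobel gradient magnitude and synthetic blobs stand in for the patent's Canny-derived gradient and rock imagery.

```python
# Watershed on a gradient map: markers flood outward and stop at high-
# gradient (closed-contour) boundaries, yielding one region per "rock".
import numpy as np
from skimage.feature import peak_local_max
from skimage.filters import sobel
from skimage.segmentation import watershed

# Synthetic "rocks": bright Gaussian blobs on a dark background
rng = np.random.default_rng(0)
image = np.zeros((128, 128))
rr, cc = np.ogrid[:128, :128]
for r, c in rng.integers(16, 112, size=(8, 2)):
    image += np.exp(-(((rr - r) ** 2 + (cc - c) ** 2) / 30.0))

gradient = sobel(image)                         # stand-in for Canny gradient
peaks = peak_local_max(image, min_distance=10)  # one marker per blob
markers = np.zeros(image.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(gradient, markers)           # closed contours -> regions
print(f"{labels.max()} regions segmented")
```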
Multi-texture local ternary pattern for face recognition
NASA Astrophysics Data System (ADS)
Essa, Almabrok; Asari, Vijayan
2017-05-01
In the imagery and pattern analysis domain, a variety of descriptors have been proposed and employed for different computer vision applications such as face detection and recognition. Many of them are affected by conditions arising during the image acquisition process, such as variations in illumination and the presence of noise, because they rely entirely on image intensity values to encode the image information. To overcome these problems, a novel technique named Multi-Texture Local Ternary Pattern (MTLTP) is proposed in this paper. MTLTP combines edges and corners based on the local ternary pattern strategy to extract the local texture features of the input image. It then returns a spatial histogram feature vector as the descriptor for each image, which is used to recognize an individual. Experimental results using a k-nearest neighbors (k-NN) classifier on two publicly available datasets demonstrate that the algorithm achieves efficient face recognition under extreme variations of illumination/lighting environments and slight variations in pose.
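The basic local ternary pattern building block (not the full edge/corner MTLTP scheme) can be sketched in a few lines of NumPy; the threshold t and input image are illustrative.

```python
# Local ternary pattern (LTP): each neighbor is coded +1/0/-1 against the
# center within a tolerance t; the ternary code splits into "upper" and
# "lower" binary patterns, which are histogrammed as the descriptor.
import numpy as np

def ltp_histograms(img, t=5):
    c = img[1:-1, 1:-1].astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx].astype(int)
        upper += ((n >= c + t) << bit)       # ternary code +1 -> upper LBP
        lower += ((n <= c - t) << bit)       # ternary code -1 -> lower LBP
    h_up = np.bincount(upper.ravel(), minlength=256)
    h_lo = np.bincount(lower.ravel(), minlength=256)
    return np.concatenate([h_up, h_lo])      # spatial histogram descriptor

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print(ltp_histograms(img).shape)             # (512,)
```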
The registration of non-cooperative moving targets laser point cloud in different view point
NASA Astrophysics Data System (ADS)
Wang, Shuai; Sun, Huayan; Guo, Huichao
2018-01-01
Multi-view point cloud registration of non-cooperative moving targets is a key technology for 3D reconstruction in laser three-dimensional imaging. The main difficulty is that point density varies greatly and noise is present under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud, and a registration algorithm based on region segmentation is then applied. The geometric structure of each point is characterized by point-to-point geometric similarity, the point cloud is divided into regions by spectral clustering, and a feature descriptor is created for each region. The most similar regions are then sought in the most similar neighbouring view, and each pair of point clouds is aligned by aligning their minimum bounding boxes. These steps are repeated until registration of all point clouds is complete. Experiments show that the method is insensitive to point cloud density and performs well in the presence of the noise of laser three-dimensional imaging.
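The paper's region and bounding-box alignment is specific to its descriptors, but the core rigid-alignment step can be illustrated with a compact SVD-based ICP sketch on synthetic clouds.

```python
# Compact SVD-based rigid alignment (the core of ICP), offered as a
# simplified stand-in for the paper's region/bounding-box alignment.
import numpy as np

def icp(src, dst, iters=30):
    """Iteratively align src (N,3) to dst (M,3); returns transformed src."""
    cur = src.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force for brevity)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # Best rigid transform via the Kabsch/SVD solution
        mu_s, mu_d = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        cur = (cur - mu_s) @ R.T + mu_d
    return cur

rng = np.random.default_rng(0)
dst = rng.normal(size=(200, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
src = dst @ Rz.T + np.array([0.5, -0.2, 0.1])    # rotated + shifted copy
aligned = icp(src, dst)
print("max residual after alignment:", np.abs(aligned - dst).max())
```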
Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan
2010-01-01
A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise in the spatial and temporal domains. As a result, automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface enables both crewmember usability and operational efficiency: it offers a fast rate of data/text entry, a small and lightweight overall design, and it frees the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (hidden Markov model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise; when used in spacesuits, the rate drops to about 33 percent. With the developed microphone-array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and by using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed; they can help real-time ASR system designers select proper tasks in the face of constraints on computational resources.
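The feature extraction/normalization stage of such an ASR chain is commonly MFCCs with cepstral mean-variance normalization; a minimal sketch follows, where librosa and a synthetic tone are stand-ins for the flight system's actual front end.

```python
# MFCC features plus per-coefficient cepstral mean-variance normalization
# (CMVN), a common "feature transformation and normalization" step.
import numpy as np
import librosa

sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t).astype(np.float32)   # stand-in "utterance"

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)
mfcc = (mfcc - mfcc.mean(axis=1, keepdims=True)) / (
    mfcc.std(axis=1, keepdims=True) + 1e-8)
print(mfcc.shape)
```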
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI
NASA Astrophysics Data System (ADS)
Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam
2017-04-01
White matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the central nervous system (CNS) as they are closely related to axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures, such as axons and extra-axonal spaces (used here as a simple model for the microstructure), and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then demonstrate experimentally in ex-vivo rat spinal cords that their different tracts, characterized by different microstructures, can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis); the extra-axonal fraction can likewise be estimated. These results suggest that our model is oversimplified, yet they evidence the potential and usefulness of the approach for mapping underlying microstructures with a simple and time-efficient MRI sequence. We further show that a simple general linear model can predict the average axonal diameters from the four model parameters, and we map these average axonal diameters in the spinal cords. While further modelling and theoretical developments are clearly necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient-echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
ECG Identification System Using Neural Network with Global and Local Features
ERIC Educational Resources Information Center
Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles
2016-01-01
This paper proposes a human identification system based on extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on a global shape feature and a local statistical feature are used to classify ECG signals. The global shape feature represents the outline information of ECG signals and the local statistical feature extracts the…
[Study for lung sound acquisition module based on ARM and Linux].
Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing
2011-07-01
An acquisition module with ARM and Linux at its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can acquire human lung sounds reliably and effectively.
TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Nyflot, M; Bowen, S
2014-06-15
Purpose: Neighborhood gray-level difference matrix (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have previously been shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: NGLDM-based texture parameters varied considerably with the choice of 3D versus 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
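For reference, the coarseness parameter of the neighborhood gray-level/tone difference formulation (following the classic Amadasun-King definition) can be sketched on a quantized 2D slice; the study itself used 3D tumor volumes, and the quantization level below is illustrative.

```python
# Coarseness from a neighborhood gray-level/tone difference computation:
# coarseness = 1 / (eps + sum_i p_i * s_i), where s_i accumulates the
# absolute difference between gray level i and its neighborhood mean.
import numpy as np
from scipy.ndimage import uniform_filter

def ngldm_coarseness(img, levels=32):
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize
    # Mean of the 3x3 neighborhood excluding the center pixel
    nb_mean = (uniform_filter(q.astype(float), size=3) * 9 - q) / 8
    inner = (slice(1, -1), slice(1, -1))          # ignore border pixels
    qi = q[inner].ravel()
    di = np.abs(q[inner] - nb_mean[inner]).ravel()
    n = np.bincount(qi, minlength=levels).astype(float)
    s = np.bincount(qi, weights=di, minlength=levels)
    p = n / n.sum()
    return 1.0 / (1e-6 + np.dot(p, s))

img = np.random.default_rng(0).random((64, 64))
print(f"coarseness: {ngldm_coarseness(img):.4g}")
```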
From data to information and knowledge for geospatial applications
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.; Yoon, T.
2006-12-01
An ever-increasing number of airborne and spaceborne data-acquisition missions with various sensors produce a glut of data. Sensory data rarely contain information in an explicit form that an application can use directly. The processing and analysis of data constitute a real bottleneck; automating the processes of gaining useful information and knowledge from raw data is therefore of paramount interest. This presentation is concerned with the transition from data to information and knowledge. By data we refer to the sensor output, and we note that data very rarely provide direct answers for applications. For example, a pixel in a digital image or a laser point from a LIDAR system (data) has no direct relationship with elevation changes of topographic surfaces or the velocity of a glacier (information, knowledge). We propose to employ the computer vision paradigm to extract information and knowledge as it pertains to a wide range of geoscience applications. After introducing the paradigm, we describe the major steps to be undertaken for extracting information and knowledge from sensory input data. Features play an important role in this process; we therefore focus on extracting features and their perceptual organization into higher-order constructs. We demonstrate these concepts with imaging data and laser point clouds. The second part of the presentation addresses the problem of combining data obtained by different sensors. An absolute prerequisite for successful fusion is establishing a common reference frame. We elaborate on the concept of sensor-invariant features that allow the registration of such disparate data sets as aerial/satellite imagery, 3D laser point clouds, and multi/hyperspectral imagery. Fusion takes place on the data level (sensor registration) and on the information level. We show how fusion increases the degree of automation for reconstructing topographic surfaces. Moreover, fused information gained from the three sensors results in a more abstract surface representation with a rich set of explicit surface information that can readily be used by an analyst for applications such as change detection.
Zhou, Zhenyu; Liu, Wei; Cui, Jiali; Wang, Xunheng; Arias, Diana; Wen, Ying; Bansal, Ravi; Hao, Xuejun; Wang, Zhishun; Peterson, Bradley S; Xu, Dongrong
2011-02-01
Signal variation in diffusion-weighted images (DWIs) is influenced both by thermal noise and by spatially and temporally varying artifacts, such as rigid-body motion and cardiac pulsation. Motion artifacts are particularly prevalent when scanning difficult patient populations, such as human infants. Although some motion during data acquisition can be corrected using image coregistration procedures, individual DWIs are frequently corrupted beyond repair by sudden, large-amplitude motion either within or outside of the imaging plane. We propose a novel approach to identify and reject outlier images automatically using local binary patterns (LBP) and two-dimensional partial least squares (2D-PLS) to estimate diffusion tensors robustly. This method uses an enhanced LBP algorithm to extract local texture features from the image matrices of the DWI data. Because the images have been transformed to local texture matrices, we are able to extract discriminating information that identifies outliers in the data set by extending a traditional one-dimensional PLS algorithm to a two-dimensional operator. The class-membership matrix in this 2D-PLS algorithm is adapted to process samples that are image matrices, and the membership matrix thus represents varying degrees of importance of local information within the images. We also derive the analytic form of the generalized inverse of the class-membership matrix. We show that this method can effectively extract local features from brain images obtained from a large sample of human infants to identify images that are outliers in their textural features, permitting their exclusion from further processing when estimating tensors using the DWIs. This technique is shown to be superior in performance when compared with visual inspection and other common methods to address motion-related artifacts in DWI data. The technique is also applicable to correcting motion artifacts in other magnetic resonance imaging (MRI) techniques (e.g., bootstrapping estimation) that use univariate or multivariate regression methods to fit MRI data to a pre-specified model.
Methodology for creating dedicated machine and algorithm on sunflower counting
NASA Astrophysics Data System (ADS)
Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand
2007-09-01
In order to sell grain lots in European countries, seed industries need government certification. This certification requires purity testing, seed counting (to quantify specified seed species and other impurities in lots), and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. At present, these analyses are still performed manually by skilled operators. Previous work has shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and several identification algorithms have been presented. Until now, most work in this domain has been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine for identifying and sorting seeds. This machine is composed of an FPGA (field-programmable gate array), a DSP (digital signal processor) and a PC hosting the graphical user interface of the system. Its operation relies on stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach in order to simulate the whole vision chain (image acquisition, feature extraction, identification) under the Matlab environment. To port this processing to dedicated hardware, all the algorithms were developed without the use of the Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials.
Bailly, Clément; Bodet-Milin, Caroline; Couespel, Solène; Necib, Hatem; Kraeber-Bodéré, Françoise; Ansquer, Catherine; Carlier, Thomas
2016-01-01
This study aimed to investigate the variability of textural features (TFs) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of 68Ga-DOTANOC PET/CT in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. The majority of the investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and of noise in the input data was predominant, as only four TFs presented high or intermediate robustness relative to the SUV-based metrics (Entropy, Homogeneity, RP and ZP). When several reconstruction settings were combined to mimic multi-centric conditions, most of the investigated TFs were sufficiently robust against SUVmax, except Correlation, Contrast, LGRE, LGZE and LZLGE. Considering previously published results on reproducibility and on sensitivity to the delineation approach, together with our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials.
"Radio-oncomics" : The potential of radiomics in radiation oncology.
Peeken, Jan Caspar; Nüsslin, Fridtjof; Combs, Stephanie E
2017-10-01
Radiomics, a recently introduced concept, describes quantitative computerized algorithm-based feature extraction from imaging data, including computed tomography (CT), magnetic resonance imaging (MRI), or positron-emission tomography (PET) images. For radiation oncology it offers the potential to significantly influence clinical decision-making, and thus therapy planning and follow-up workflow. After image acquisition, image preprocessing, and definition of regions of interest by structure segmentation, algorithms are applied to calculate shape, intensity, texture, and multiscale filter features. By combining multiple features and correlating them with clinical outcome, prognostic models can be created. Retrospective studies have proposed radiomics classifiers predicting, e.g., overall survival, radiation treatment response, distant metastases, or radiation-related toxicity. In addition, radiomics features can be correlated with genomic information ("radiogenomics") and could be used for tumor characterization. Distinct patterns based on imaging-based as well as genomics-based features will influence radiation oncology in the future. Individualized treatments in terms of dose-level adaptation and target volume definition, as well as other outcome-related parameters, will depend on radiomics and radiogenomics. By integrating various datasets, the prognostic power can be increased, making radiomics a valuable part of future precision medicine approaches. This perspective demonstrates the evidence for the radiomics concept in radiation oncology and emphasizes the necessity of further studies to integrate radiomics classifiers into clinical decision-making and the radiation therapy workflow.
Pattern recognition and image processing for environmental monitoring
NASA Astrophysics Data System (ADS)
Siddiqui, Khalid J.; Eastwood, DeLyle
1999-12-01
Pattern recognition (PR) and signal/image processing methods are among the most powerful tools currently available for noninvasively examining spectroscopic and other chemical data for environmental monitoring. Using spectral data, these systems have found a variety of applications employing analytical techniques for chemometrics, such as gas chromatography and fluorescence spectroscopy. An advantage of PR approaches is that they make no a priori assumption regarding the structure of the patterns. However, a majority of these systems rely on human judgment for parameter selection and classification. A PR problem can be considered as a composite of four subproblems: pattern acquisition, feature extraction, feature selection, and pattern classification. One of the basic issues in PR approaches is to determine and measure the features useful for successful classification. Selecting the features that contain the most discriminatory information is important because the cost of pattern classification is directly related to the number of features used in the decision rules. The state of spectral techniques as applied to environmental monitoring is reviewed, and a spectral pattern classification system combining the above components with automatic decision-theoretic approaches to classification is developed. It is shown how such a system can be used for the analysis of large data sets, warehousing, and interpretation. In a preliminary test, the classifier was used to classify synchronous UV-vis fluorescence spectra of relatively similar petroleum oils with reasonable success.
NASA Astrophysics Data System (ADS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-04-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. To solve this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running state. The mixed-domain feature set is then input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract the intrinsic structure information of both labeled and unlabeled state samples, thereby overcoming the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning. Simultaneously, class discrimination information is integrated into the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve the running state identification. The effectiveness of the proposed method is verified by a running state identification case in a gearbox, and the results confirm the improved accuracy of the running state identification.
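The WPD energy-spectrum portion of the mixed-domain feature set can be sketched with PyWavelets; the vibration signal and the particular statistical features below are illustrative.

```python
# Wavelet packet decomposition of a vibration signal: terminal sub-band
# energies (the WPD energy spectrum) plus a few statistical features.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
x = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.normal(size=t.size)  # stand-in

wp = pywt.WaveletPacket(data=x, wavelet="db4", maxlevel=3)
nodes = wp.get_level(3, order="natural")           # 8 terminal sub-bands
energies = np.array([np.sum(node.data ** 2) for node in nodes])
energy_spectrum = energies / energies.sum()        # WPD energy features

# A few simple statistical features (mean, std, skewness)
stats = [x.mean(), x.std(), ((x - x.mean()) ** 3).mean() / x.std() ** 3]
features = np.concatenate([energy_spectrum, stats])
print(features.round(3))
```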
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Xiaojia; Mao Qirong; Zhan Yongzhao
There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist, the recognition result may be unsatisfactory, and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on a contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features using the contribution analysis algorithm of the NN. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
NASA Technical Reports Server (NTRS)
Jansen, B. J., Jr.
1998-01-01
The features of the data acquisition and control systems of the NASA Langley Research Center's Jet Noise Laboratory are presented. The Jet Noise Laboratory is a facility that simulates realistic mixed flow turbofan jet engine nozzle exhaust systems in simulated flight. The system is capable of acquiring data for a complete take-off assessment of noise and nozzle performance. This paper describes the development of an integrated system to control and measure the behavior of model jet nozzles featuring dual independent high pressure combusting air streams with wind tunnel flow. The acquisition and control system is capable of simultaneous measurement of forces, moments, static and dynamic model pressures and temperatures, and jet noise. The design concepts for the coordination of the control computers and multiple data acquisition computers and instruments are discussed. The control system design and implementation are explained, describing the features, equipment, and the experiences of using a primarily Personal Computer based system. Areas for future development are examined.
A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs
NASA Astrophysics Data System (ADS)
Javed, Faizan; Venkatachalam, P. A.; H, Ahmad Fadzil M.
2006-04-01
In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds has been developed. The module reveals important information on cardiovascular disorders and can assist the general physician in reaching more accurate and reliable diagnoses at early stages. It can help compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition & Pre-processing, Segmentation, Feature Extraction, Murmur Detection and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope capable of transferring the signals wirelessly to a nearby workstation. The signals are then segmented into individual cycles, as well as individual components, using spectral analysis of the heart sound without any reference signal such as the ECG. Features are extracted from the individual components using the spectrogram and are used as input to an MLP (multilayer perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the smoothed pseudo Wigner-Ville distribution. The module has been tested with real heart sounds from 40 patients and has proved to be efficient and robust while dealing with a large variety of pathological conditions.
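A rough sketch of the murmur-detection stage, spectrogram features per cycle fed to an MLP, with synthetic cycles standing in for the stethoscope recordings and all parameters illustrative:

```python
# Spectrogram features per segmented "cycle" classified by an MLP.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs = 2000  # Hz

def cycle_features(sig):
    f, t, S = spectrogram(sig, fs=fs, nperseg=128)
    return np.log1p(S).mean(axis=1)          # average log-spectrum per bin

# Synthetic "normal" vs "murmur" cycles (murmur = extra 150 Hz component)
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        sig = rng.normal(size=fs)
        if label:
            sig += np.sin(2 * np.pi * 150 * np.arange(fs) / fs) * \
                   rng.normal(1.0, 0.2)
        X.append(cycle_features(sig))
        y.append(label)
X, y = np.array(X), np.array(y)
idx = rng.permutation(len(y))                # shuffle before splitting
X, y = X[idx], y[idx]

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```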
Shukla, Nagesh; Keast, John E; Ceglarek, Darek
2014-10-01
The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, most current workflow models use a simplified flow chart of patient flow obtained from on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation, using sequential interviews of the people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of the roles, interactions, actions, and decisions involved in the service delivery process. The approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovery of the relationships among the extracted key features, and (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented for visually analyzing the resulting process model to identify process vulnerabilities. A comparative analysis of different workflow models is also conducted.
ERIC Educational Resources Information Center
Laine, Matti; Polonyi, Tünde; Abari, Kálmán
2014-01-01
In literates, reading is a fundamental channel for acquiring new vocabulary both in the mother tongue and in foreign languages. By using an artificial language learning task, we examined the acquisition of novel written words and their embedded regularities (an orthographic surface feature and a syllabic feature) in three groups of university…
Mason, H. E.; Uribe, E. C.; Shusterman, J. A.
2018-01-01
Tensor-rank decomposition methods have been applied to variable contact time ²⁹Si{¹H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.
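A CP/PARAFAC (tensor-rank) decomposition of a third-order array can be sketched with tensorly; the synthetic tensor and its interpretation as an NMR series (site x contact time x echo) are assumptions for illustration only.

```python
# Tensor-rank (CP/PARAFAC) decomposition of a synthetic third-order array.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
# Rank-2 ground-truth factors plus noise
A, B, C = rng.random((10, 2)), rng.random((8, 2)), rng.random((50, 2))
T = np.einsum("ir,jr,kr->ijk", A, B, C) + 0.01 * rng.normal(size=(10, 8, 50))

weights, factors = parafac(tl.tensor(T), rank=2)
for mode, F in enumerate(factors):
    print(f"mode-{mode} factor shape: {F.shape}")
```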
Rotation covariant image processing for biomedical applications.
Skibbe, Henrik; Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications, ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.
NASA Astrophysics Data System (ADS)
Zhang, Jingqiong; Zhang, Wenbiao; He, Yuting; Yan, Yong
2016-11-01
The amount of coke deposited on catalyst pellets is one of the most important indexes of catalytic performance and service life. It is therefore essential to measure it and to analyze the active state of the catalysts during a continuous production process. This paper proposes a new method to predict the amount of coke deposited on catalyst pellets based on image analysis and soft computing. An image acquisition system consisting of a flatbed scanner and an opaque cover is used to obtain catalyst images. After image processing and feature extraction, twelve effective features are selected and the two best feature sets are determined by prediction tests. A neural network optimized by a particle swarm optimization algorithm is used to establish the prediction model of the coke amount from the various datasets. The root mean square errors of the prediction values are all below 0.021, and the coefficients of determination R² for the models are all above 78.71%. A feasible, effective and precise method is thus demonstrated, which may be applied to realize real-time measurement of coke deposition based on on-line sampling and fast image analysis.
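A global-best particle swarm optimization loop tuning the weights of a small one-hidden-layer network can be sketched as follows; the data, network size and PSO constants are illustrative, not the paper's tuned configuration.

```python
# Global-best PSO optimizing the weights of a tiny one-hidden-layer network
# for a regression task (stand-in for the coke-amount prediction model).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((80, 12))                            # twelve image features
y = X @ rng.random(12) + 0.1 * rng.normal(size=80)  # stand-in coke amounts

H = 8                                  # hidden units
dim = 12 * H + H + H + 1               # sizes of W1, b1, w2, b2

def mse(p):
    i = 0
    W1 = p[i:i + 12 * H].reshape(12, H); i += 12 * H
    b1 = p[i:i + H]; i += H
    w2 = p[i:i + H]; i += H
    b2 = p[i]
    pred = np.tanh(X @ W1 + b1) @ w2 + b2
    return np.mean((pred - y) ** 2)

# Standard global-best PSO update
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.normal(size=(n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print(f"best training MSE: {pbest_f.min():.4f}")
```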
A novel feature ranking algorithm for biometric recognition with PPG signals.
Reşit Kavsaoğlu, A; Polat, Kemal; Recep Bozkurt, M
2014-06-01
This study describes the application of the photoplethysmography (PPG) signal and the time-domain features acquired from its first and second derivatives for biometric identification. For this purpose, a total of 40 features was extracted and a feature-ranking algorithm is proposed. The algorithm calculates the contribution of each feature to biometric recognition and orders the features from greatest to smallest contribution; Euclidean and absolute distance formulas are used to quantify each feature's contribution. The efficiency of the proposed algorithms is demonstrated by applying a k-NN (k-nearest neighbor) classifier to the ranked features. In the experiments, 15-period PPG signals from two different recording sessions were collected from each of thirty healthy subjects with a PPG data acquisition card. The PPG signals recorded first were evaluated as the 1st configuration, the PPG signals recorded later at a different time as the 2nd configuration, and the combination of both as the 3rd configuration. With the k-NN classifier model created along with the proposed algorithm, identification rates of 90.44% for the 1st configuration, 94.44% for the 2nd configuration, and 87.22% for the 3rd configuration were successfully attained. The results show that both the proposed algorithm and the biometric identification model based on the PPG signal are very promising for contactless recognition of people with the proposed method.
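The distance-based ranking idea can be sketched as scoring each feature by the spread of class means relative to overall spread and then evaluating k-NN on the top-ranked subset; the data and scoring formula are stand-ins for the paper's 40 PPG-derived features and exact contribution measure.

```python
# Distance-based feature ranking followed by k-NN evaluation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_subjects, n_feats = 30, 40
X = rng.normal(size=(n_subjects * 15, n_feats))
y = np.repeat(np.arange(n_subjects), 15)          # 15 beats per subject
X[:, :10] += y[:, None] * 0.5                     # make some features useful

def rank_features(X, y):
    scores = np.zeros(X.shape[1])
    classes = np.unique(y)
    for j in range(X.shape[1]):
        means = np.array([X[y == c, j].mean() for c in classes])
        # contribution ~ spread of class means relative to overall std
        scores[j] = means.std() / (X[:, j].std() + 1e-8)
    return np.argsort(scores)[::-1]               # largest contribution first

top = rank_features(X, y)[:10]
acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                      X[:, top], y, cv=5).mean()
print(f"k-NN accuracy with top-10 ranked features: {acc:.2%}")
```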
Towards intelligent diagnostic system employing integration of mathematical and engineering model
NASA Astrophysics Data System (ADS)
Isa, Nor Ashidi Mat
2015-05-01
The development of medical diagnostic systems has been one of the main research fields over the years. The goal of a medical diagnostic system is to put in place a nosological system that can ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve on conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 1960s, and attempts to improve their performance have drawn on the fields of artificial intelligence, statistical analysis, mathematical modelling and engineering theory. With the availability of microcomputers and software development tools, as well as these promising fields, medical diagnostic prototypes can be developed. In general, a medical diagnostic system consists of several stages, namely 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification. The data acquisition stage plays an important role in converting inputs measured from real-world physical conditions into digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI), and medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, scientists or doctors have to deal with a myriad of redundant data. To reduce the complexity of the diagnostic process, only significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in a mammogram image, are extracted and considered in the subsequent stages. Mathematical models and statistical analyses are performed to select the most significant features for classification. Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical clustering techniques, have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theory, such as artificial intelligence, support vector machines, neural networks and neuro-fuzzy systems. These classifiers provide the diagnostic results without human intervention. Among many published research efforts, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cervical cancerous cells. The Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), on the basis of which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency and high-frequency components by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients of the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
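A simplified Python sketch of the embedding step under stated assumptions: plain dither modulation (quantization index modulation) of one low-frequency DCT coefficient of a block; the paper's DC-DM variant adds distortion compensation on top of this idea, and the block size, coefficient position and step size here are illustrative:

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_bit(block, bit, step=8.0):
    c = dct2(block)
    # Quantize a low-frequency coefficient onto one of two interleaved
    # lattices depending on the bit value (quantization index modulation).
    dither = 0.0 if bit == 0 else step / 2.0
    c[1, 1] = np.round((c[1, 1] - dither) / step) * step + dither
    return idct2(c)

block = np.random.rand(8, 8) * 255   # hypothetical stand-in for a feature-region block
marked = embed_bit(block, bit=1)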
A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis
NASA Astrophysics Data System (ADS)
Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui
2015-07-01
Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological changes of the heart valves. Applying the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
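A minimal sketch of the Shannon-envelope step, assuming NumPy: the Shannon energy -x^2 log(x^2) emphasizes medium-intensity components such as murmurs, and a moving average smooths it into an envelope. The DWT stage and the paper's three morphological features are not reproduced here:

import numpy as np

def shannon_envelope(x, win=64):
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalize amplitude
    e = -x**2 * np.log(x**2 + 1e-12)             # Shannon energy
    kernel = np.ones(win) / win                  # moving-average smoothing
    return np.convolve(e, kernel, mode="same")

fs = 2000
t = np.arange(0, 1.0, 1 / fs)
hs = np.sin(2 * np.pi * 30 * t) * np.exp(-5 * t)  # toy heart-sound burst
env = shannon_envelope(hs)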
NASA Astrophysics Data System (ADS)
Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab
2017-11-01
Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier before selecting a feature based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
Phonological Feature Re-Assembly and the Importance of Phonetic Cues
ERIC Educational Resources Information Center
Archibald, John
2009-01-01
It is argued that new phonological features can be acquired in second languages, but that both feature acquisition and feature re-assembly are affected by the robustness of phonetic cues in the input.
Uniform competency-based local feature extraction for remote sensing images
NASA Astrophysics Data System (ADS)
Sedaghat, Amin; Mohammadi, Nazila
2018-01-01
Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and Hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available at https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
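A minimal sketch of the competency-ranking idea under stated assumptions: each candidate keypoint receives a weighted score from three quality measures (robustness, spatial saliency, scale), and the top-scoring points are kept per grid cell to enforce a uniform spatial distribution; the weights, single-layer grid and per-cell quota are illustrative simplifications of the multi-layer schema:

import numpy as np

def uniform_competency(points, scores_rob, scores_sal, scores_scale,
                       img_shape, grid=4, per_cell=5, w=(0.5, 0.3, 0.2)):
    score = w[0] * scores_rob + w[1] * scores_sal + w[2] * scores_scale
    h, w_img = img_shape
    # Assign each point (x, y) to a grid cell, then keep the best per cell.
    cell = (np.floor(points[:, 1] / (h / grid)).astype(int) * grid
            + np.floor(points[:, 0] / (w_img / grid)).astype(int))
    keep = []
    for c in np.unique(cell):
        idx = np.nonzero(cell == c)[0]
        keep.extend(idx[np.argsort(score[idx])[::-1][:per_cell]])
    return np.array(sorted(keep))

pts = np.random.rand(200, 2) * [640, 480]          # (x, y) candidate keypoints
rob, sal, sc = (np.random.rand(200) for _ in range(3))
selected = uniform_competency(pts, rob, sal, sc, img_shape=(480, 640))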
Li, Jing; Hong, Wenxue
2014-12-01
The feature extraction and feature selection are the important issues in pattern recognition. Based on the geometric algebra representation of vector, a new feature extraction method using blade coefficient of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to solve the elevated high dimension issue. The simple linear discriminant analysis was used as the classifier. The result of the 10-fold cross-validation (10 CV) classification of public breast cancer biomedical dataset was more than 96% and proved superior to that of the original features and traditional feature extraction method.
NASA Astrophysics Data System (ADS)
Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry
2017-08-01
This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages, namely feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction and ANMBP is used for classification. The result of feature extraction is a set of feature vectors; feature reduction was tested with 100 energy values per feature and with 10 energy values per feature. Brain conditions are classified as normal, Alzheimer's, glioma, or carcinoma. Based on simulation results, 10 energy values per feature suffice to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for classification of brain cancer.
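A minimal sketch of wavelet-energy feature extraction of the kind described, assuming PyWavelets; the wavelet family and decomposition level are illustrative choices, and the ANMBP classifier is not reproduced:

import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=3):
    # Decompose the signal and return one energy value per subband
    # (approximation plus detail coefficients).
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])

x = np.random.rand(256)        # stand-in for one scan line of an MRI slice
features = wavelet_energy_features(x)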
ERIC Educational Resources Information Center
Mai, Ziyin; Yuan, Boping
2016-01-01
This article reports an empirical study investigating L2 acquisition of the Mandarin Chinese "shì…de" cleft construction by adult English-speaking learners within the framework of the Feature Reassembly Hypothesis (Lardiere, 2009). A Sentence Completion task, an interpretation task, two Acceptability Judgement tasks, and a felicity…
Sample-space-based feature extraction and class preserving projection for gene expression data.
Wang, Wenjun
2013-01-01
To overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented. It transfers the computation of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and experimental results on gene expression data demonstrate the effectiveness of the method.
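A minimal sketch of the sample-space trick for PCA, assuming NumPy: with n samples and d genes (n << d), the projection vectors are represented as weighted sums of samples, so only the n-by-n Gram matrix is eigendecomposed instead of the d-by-d covariance matrix:

import numpy as np

def sample_space_pca(X, n_components=2):
    # X: n samples by d genes, with n << d.
    Xc = X - X.mean(axis=0)
    gram = Xc @ Xc.T                       # n x n instead of d x d
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1][:n_components]
    alphas, vals = vecs[:, order], vals[order]
    # Map sample-space eigenvectors back to gene space and normalize.
    W = Xc.T @ alphas / np.sqrt(np.maximum(vals, 1e-12))
    return W                               # d x n_components projection

X = np.random.rand(30, 5000)               # 30 samples, 5000 genes
W = sample_space_pca(X)
scores = (X - X.mean(axis=0)) @ W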
Some Questions about Feature Re-Assembly
ERIC Educational Resources Information Center
White, Lydia
2009-01-01
In this commentary, differences between feature re-assembly and feature selection are discussed. Lardiere's proposals are compared to existing approaches to grammatical features in second language (L2) acquisition. Questions are raised about the predictive power of the feature re-assembly approach. (Contains 1 footnote.)
Low complexity feature extraction for classification of harmonic signals
NASA Astrophysics Data System (ADS)
William, Peter E.
In this dissertation, feature extraction algorithms have been developed for extraction of characteristic features from harmonic signals. The common theme for all developed algorithms is the simplicity in generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the duration between successive zero-crossing intervals. The second algorithm estimates the harmonics' amplitudes of the harmonic structure employing a simplified least squares method without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that utilizes a multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines) with comparison to spectral features shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both the detection probabilities and false alarm rate.
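A minimal sketch of the first algorithm's core idea, assuming NumPy: features derived solely from the durations between successive zero crossings of the time-domain signal; the fixed-length histogram summary is an illustrative encoding, not the dissertation's exact one:

import numpy as np

def zero_crossing_interval_features(x, fs, n_bins=16):
    signs = np.signbit(x)
    crossings = np.nonzero(signs[1:] != signs[:-1])[0]   # crossing indices
    intervals = np.diff(crossings) / fs                  # durations in seconds
    hist, _ = np.histogram(intervals, bins=n_bins)
    return hist / max(hist.sum(), 1)                     # normalized histogram

fs = 4096
t = np.arange(0, 1.0, 1 / fs)
harmonic = np.sin(2 * np.pi * 60 * t) + 0.4 * np.sin(2 * np.pi * 180 * t)
features = zero_crossing_interval_features(harmonic, fs)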
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Lynn L.; Trease, Harold E.; Fowler, John
2007-03-15
One of the critical steps toward performing computational biology simulations, using mesh-based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional; therefore, the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features with an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. "Quantitative" image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.
Defense Acquisition Review Journal. Volume 14, Number 2
2007-09-01
2007, Vol. 14, No. 2. Learn. Perform. Succeed. Featured article: "Professionalism in the Acquisition Contracting Workforce: Have We Gone Too Far?" by John Krieger, Director of the Contracting Center of the Defense Acquisition University, asking whether the Department of Defense has gone too far in professionalizing the acquisition contracting workforce.
Research on oral test modeling based on multi-feature fusion
NASA Astrophysics Data System (ADS)
Shi, Yuliang; Tao, Yiyue; Lei, Jun
2018-04-01
In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The strength of the pulse-coupled neural network (PCNN) in image segmentation and related processing is exploited to process the speech spectrogram and extract features, exploring a new method that combines speech signal processing with image processing. In addition to the spectrogram features, MFCC-based spectral features are established and fused with them to further improve the accuracy of spoken-language recognition. Because the resulting input features are complex and discriminative, a Support Vector Machine (SVM) is used to construct the classifier, and the features extracted from test speech are then compared with standard speech features to detect how standard the spoken language is. Experiments show that extracting features from spectrograms using a PCNN is feasible, and that the fusion of image features and spectral features improves detection accuracy.
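A minimal sketch of the feature-fusion idea under stated assumptions: MFCCs (via librosa) concatenated with simple spectrogram statistics standing in for the PCNN features, then fed to an SVM; the PCNN step itself is not reproduced:

import numpy as np
import librosa
from sklearn.svm import SVC

def fused_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    spec = np.abs(librosa.stft(y))
    image_stats = np.array([spec.mean(), spec.std()])   # crude PCNN stand-in
    return np.concatenate([mfcc, image_stats])

# Toy training set: two classes of synthetic "utterances".
sr = 16000
X = np.array([fused_features(np.random.randn(sr), sr) for _ in range(20)])
y = np.array([0, 1] * 10)
clf = SVC().fit(X, y)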
Ontology-Based Information Extraction for Business Intelligence
NASA Astrophysics Data System (ADS)
Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina
Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.
Real-time UNIX in HEP data acquisition
NASA Astrophysics Data System (ADS)
Buono, S.; Gaponenko, I.; Jones, R.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Skiadelli, M.; Toppers, A.; Duval, P. Y.; Ferrato, D.; Le Van Suu, A.; Qian, Z.; Rondot, C.; Ambrosini, G.; Fumagalli, G.; Aguer, M.; Huet, M.
1994-12-01
Today's experimentation in high energy physics is characterized by an increasing need for sensitivity to rare phenomena and complex physics signatures, which require the use of huge and sophisticated detectors and consequently a high performance readout and data acquisition. Multi-level triggering, hierarchical data collection and an always increasing amount of processing power, distributed throughout the data acquisition layers, will impose a number of features on the software environment, especially the need for a high level of standardization. Real-time UNIX seems, today, the best solution for the platform independence, operating system interface standards and real-time features necessary for data acquisition in HEP experiments. We present the results of the evaluation, in a realistic application environment, of a Real-Time UNIX operating system: the EP/LX real-time UNIX system.
Audio feature extraction using probability distribution function
NASA Astrophysics Data System (ADS)
Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.
2015-05-01
Voice recognition has been one of the popular applications in the robotics field, and it has recently been used in biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction. The probability distribution function (PDF) is a statistical tool that is usually used as one step within complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of voice signals sampled from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
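A minimal sketch of the proposed idea, assuming NumPy: an empirical probability distribution (here a normalized amplitude histogram per frame) used directly as the speech feature; frame length and bin count are illustrative:

import numpy as np

def pdf_features(signal, frame_len=400, n_bins=32):
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0), density=True)
        feats.append(hist)
    return np.array(feats)                 # one PDF estimate per frame

voice = np.clip(np.random.randn(8000) * 0.2, -1, 1)   # stand-in for sampled speech
pdfs = pdf_features(voice)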
A statistical framework for multiparameter analysis at the single-cell level.
Torres-García, Wandaliz; Ashili, Shashanka; Kelbauskas, Laimonas; Johnson, Roger H; Zhang, Weiwen; Runger, George C; Meldrum, Deirdre R
2012-03-01
Phenotypic characterization of individual cells provides crucial insights into intercellular heterogeneity and enables access to information that is unavailable from ensemble-averaged, bulk cell analyses. Single-cell studies have attracted significant interest in recent years and spurred the development of a variety of commercially available and research-grade technologies. To quantify cell-to-cell variability of cell populations, we have developed an experimental platform for real-time measurements of oxygen consumption (OC) kinetics at the single-cell level. Unique challenges inherent to these single-cell measurements arise, and no existing data analysis methodology is available to address them. Here we present a data processing and analysis method that addresses challenges encountered with this unique type of data in order to extract biologically relevant information. We applied the method to analyze OC profiles obtained with single cells of two different cell lines derived from metaplastic and dysplastic human Barrett's esophageal epithelium. In terms of method development, three main challenges were considered for this heterogeneous dynamic system: (i) high levels of noise, (ii) the lack of a priori knowledge of single-cell dynamics, and (iii) the role of intercellular variability within and across cell types. Several strategies and solutions to address each of these three challenges are presented. Features such as slopes, intercepts, and breakpoints (change-points) were extracted for every OC profile and compared across individual cells and cell types. The results demonstrated that the extracted features facilitated exposition of subtle differences between individual cells and their responses to cell-cell interactions. With minor modifications, this method can be used to process and analyze data from other acquisition and experimental modalities at the single-cell level, providing a valuable statistical framework for single-cell analysis.
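A minimal sketch of extracting the slope, intercept and change-point features from a noisy single-cell oxygen-consumption profile, assuming NumPy: fit two line segments and pick the breakpoint minimizing the total squared error; the paper's treatment of noise and model selection is richer than this:

import numpy as np

def two_segment_features(t, y):
    best = None
    for k in range(3, len(t) - 3):          # candidate breakpoints
        p1 = np.polyfit(t[:k], y[:k], 1)
        p2 = np.polyfit(t[k:], y[k:], 1)
        sse = (np.sum((np.polyval(p1, t[:k]) - y[:k]) ** 2)
               + np.sum((np.polyval(p2, t[k:]) - y[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, t[k], p1, p2)
    _, t_break, (s1, i1), (s2, i2) = best
    return {"breakpoint": t_break, "slope1": s1, "intercept1": i1,
            "slope2": s2, "intercept2": i2}

t = np.linspace(0, 30, 60)
oc = np.where(t < 12, 100 - 1.5 * t, 82 - 4.0 * (t - 12)) + np.random.randn(60)
features = two_segment_features(t, oc)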
Beyond the resolution limit: subpixel resolution in animals and now in silicon
NASA Astrophysics Data System (ADS)
Wilcox, M. J.
2007-09-01
Automatic acquisition of aerial threats at thousands of kilometers distance requires high sensitivity to small differences in contrast and high optical quality for subpixel resolution, since targets occupy much less surface area than a single pixel. Targets travel at high speed and break up in the re-entry phase. Target/decoy discrimination at the earliest possible time is imperative. Real time performance requires a multifaceted approach with hyperspectral imaging and analog processing allowing feature extraction in real time. Hyperacuity Systems has developed a prototype chip capable of nonlinear increase in resolution or subpixel resolution far beyond either pixel size or spacing. Performance increase is due to a biomimetic implementation of animal retinas. Photosensitivity is not homogeneous across the sensor surface, allowing pixel parsing. It is remarkably simple to provide this profile to detectors and we showed at least three ways to do so. Individual photoreceptors have a Gaussian sensitivity profile and this nonlinear profile can be exploited to extract high-resolution. Adaptive, analog circuitry provides contrast enhancement, dynamic range setting with offset and gain control. Pixels are processed in parallel within modular elements called cartridges like photo-receptor inputs in fly eyes. These modular elements are connected by a novel function for a cell matrix known as L4. The system is exquisitely sensitive to small target motion and operates with a robust signal under degraded viewing conditions, allowing detection of targets smaller than a single pixel or at greater distance. Therefore, not only is instantaneous feature extraction possible but also subpixel resolution. Analog circuitry increases processing speed with more accurate motion specification for target tracking and identification.
Evaluating suitability of Pol-SAR (TerraSAR-X, Radarsat-2) for automated sea ice classification
NASA Astrophysics Data System (ADS)
Ressel, Rudolf; Singha, Suman; Lehner, Susanne
2016-05-01
Satellite-borne SAR imagery has become an invaluable tool in the field of sea ice monitoring. Previously, single-polarimetric imagery was employed in supervised and unsupervised classification schemes for sea ice investigation, preceded by image processing techniques such as segmentation and textural features. Recently, through the advent of polarimetric SAR sensors, investigation of polarimetric features in sea ice has attracted increased attention. While dual-polarimetric data has already been investigated in a number of works, full-polarimetric data has so far not been a major scientific focus. To explore the possibilities of full-polarimetric data and compare the differences in C- and X-bands, we endeavor to analyze in detail an array of datasets, simultaneously acquired, in C-band (RADARSAT-2) and X-band (TerraSAR-X) over ice-infested areas. First, we propose an array of polarimetric features (Pauli- and lexicographic-based). Ancillary data from national ice services, SMOS data and expert judgement were utilized to identify the governing ice regimes. Based on these observations, we then extracted the aforementioned features. The subsequent supervised classification approach was based on an Artificial Neural Network (ANN). To gain quantitative insight into the quality of the features themselves (and reduce a possible impact of the Hughes phenomenon), we employed mutual information to unearth the relevance and redundancy of features. The results of this information-theoretic analysis guided a pruning process regarding the optimal subset of features. In the last step we compared the classified results of all sensors and images, stated respective accuracies and discussed output discrepancies in the cases of simultaneous acquisitions.
Video sensor architecture for surveillance applications.
Sánchez, Jordi; Benet, Ginés; Simó, José E
2012-01-01
This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
Video Sensor Architecture for Surveillance Applications
Sánchez, Jordi; Benet, Ginés; Simó, José E.
2012-01-01
This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%. PMID:22438723
Extracting Semantic Building Models from Aerial Stereo Images and Conversion to Citygml
NASA Astrophysics Data System (ADS)
Sengul, A.
2012-07-01
The collection of geographic data is of primary importance for the creation and maintenance of a GIS. Traditionally, the acquisition of 3D information has been the task of photogrammetry using aerial stereo images. Digital photogrammetric systems employ sophisticated software to extract digital terrain models or to plot 3D objects. The demand for 3D city models leads to new applications and new standards. City Geography Markup Language (CityGML), a concept for the modelling and exchange of 3D city and landscape models, defines the classes and relations for the most relevant topographic objects in city and regional models with respect to their geometrical, topological and semantic properties. It is now increasingly accepted, since it fulfils the prerequisites required, e.g., for risk analysis, urban planning, and simulations. There is a need to include existing 3D information derived from photogrammetric processes in CityGML databases. To fill this gap, this paper reports on a framework for transferring data plotted with Erdas LPS and Stereo Analyst for ArcGIS software to CityGML using Safe Software's Feature Manipulation Engine (FME).
Upper ankle joint space detection on low contrast intraoperative fluoroscopic C-arm projections
NASA Astrophysics Data System (ADS)
Thomas, Sarina; Schnetzke, Marc; Brehler, Michael; Swartman, Benedict; Vetter, Sven; Franke, Jochen; Grützner, Paul A.; Meinzer, Hans-Peter; Nolden, Marco
2017-03-01
Intraoperative mobile C-arm fluoroscopy is widely used for interventional verification in trauma surgery, high flexibility combined with low cost being the main advantages of the method. However, the lack of global device-to-patient orientation is challenging when comparing the acquired data to other intrapatient datasets. In upper ankle joint fracture reduction accompanied by an unstable syndesmosis, a comparison to the unfractured contralateral side is helpful for verification of the reduction result. To reduce dose and operation time, our approach aims at the comparison of single projections of the unfractured ankle with volumetric images of the reduced fracture. For precise assessment, a pre-alignment of both datasets is a crucial step. We propose a contour extraction pipeline to estimate the joint space location for a pre-alignment of fluoroscopic C-arm projections containing the upper ankle joint. A quadtree-based hierarchical variance comparison extracts potential feature points and a Hough transform is applied to identify bone shaft lines together with the tibiotalar joint space. By using this information we can define the coarse orientation of the projections independently of the ankle pose during acquisition, in order to align those images to the volume of the fractured ankle. The proposed method was evaluated on thirteen cadaveric datasets consisting of 100 projections each, with image planes manually adjusted by three trauma surgeons. The results show that the method can be used to detect the joint space orientation. The correlation between angle deviation and anatomical projection direction gives valuable input on the acquisition direction for future clinical experiments.
Using Fourier transform IR spectroscopy to analyze biological materials
Baker, Matthew J; Trevisan, Júlio; Bassan, Paul; Bhargava, Rohit; Butler, Holly J; Dorling, Konrad M; Fielden, Peter R; Fogarty, Simon W; Fullwood, Nigel J; Heys, Kelly A; Hughes, Caryn; Lasch, Peter; Martin-Hirsch, Pierre L; Obinaju, Blessing; Sockalingum, Ganesh D; Sulé-Suso, Josep; Strong, Rebecca J; Walsh, Michael J; Wood, Bayden R; Gardner, Peter; Martin, Francis L
2015-01-01
IR spectroscopy is an excellent method for biological analyses. It enables the nonperturbative, label-free extraction of biochemical information and images toward diagnosis and the assessment of cell functionality. Although not strictly microscopy in the conventional sense, it allows the construction of images of tissue or cell architecture by the passing of spectral data through a variety of computational algorithms. Because such images are constructed from fingerprint spectra, the notion is that they can be an objective reflection of the underlying health status of the analyzed sample. One of the major difficulties in the field has been determining a consensus on spectral pre-processing and data analysis. This manuscript brings together as coauthors some of the leaders in this field to allow the standardization of methods and procedures for adapting a multistage approach to a methodology that can be applied to a variety of cell biological questions or used within a clinical setting for disease screening or diagnosis. We describe a protocol for collecting IR spectra and images from biological samples (e.g., fixed cytology and tissue sections, live cells or biofluids) that assesses the instrumental options available, appropriate sample preparation, different sampling modes as well as important advances in spectral data acquisition. After acquisition, data processing consists of a sequence of steps including quality control, spectral pre-processing, feature extraction and classification of the supervised or unsupervised type. A typical experiment can be completed and analyzed within hours. Example results are presented on the use of IR spectra combined with multivariate data processing. PMID:24992094
Mobile Context Provider for Social Networking
NASA Astrophysics Data System (ADS)
Santos, André C.; Cardoso, João M. P.; Ferreira, Diogo R.; Diniz, Pedro C.
The ability to infer user context based on a mobile device together with a set of external sensors opens the way to new context-aware services and applications. In this paper, we describe a mobile context provider that makes use of sensors available in a smartphone as well as sensors externally connected via Bluetooth. We describe the system architecture from sensor data acquisition to feature extraction, context inference and the publication of context information to well-known social networking services such as Twitter and Hi5. In the current prototype, context inference is based on decision trees, but the middleware allows the integration of other inference engines. Experimental results suggest that the proposed solution is a promising approach to provide user context to both local and network-level services.
Morphological Feature Extraction for Automatic Registration of Multispectral Images
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
Saeb, Sohrab; Zhang, Mi; Karr, Christopher J; Schueller, Stephen M; Corden, Marya E; Kording, Konrad P; Mohr, David C
2015-07-15
Depression is a common, burdensome, often recurring mental health disorder that frequently goes undetected and untreated. Mobile phones are ubiquitous and have an increasingly large complement of sensors that can potentially be useful in monitoring behavioral patterns that might be indicative of depressive symptoms. The objective of this study was to explore the detection of daily-life behavioral markers using mobile phone global positioning systems (GPS) and usage sensors, and their use in identifying depressive symptom severity. A total of 40 adult participants were recruited from the general community to carry a mobile phone with a sensor data acquisition app (Purple Robot) for 2 weeks. Of these participants, 28 had sufficient sensor data received to conduct analysis. At the beginning of the 2-week period, participants completed a self-reported depression survey (PHQ-9). Behavioral features were developed and extracted from GPS location and phone usage data. A number of features from GPS data were related to depressive symptom severity, including circadian movement (regularity in 24-hour rhythm; r=-.63, P=.005), normalized entropy (mobility between favorite locations; r=-.58, P=.012), and location variance (GPS mobility independent of location; r=-.58, P=.012). Phone usage features, usage duration, and usage frequency were also correlated (r=.54, P=.011, and r=.52, P=.015, respectively). Using the normalized entropy feature and a classifier that distinguished participants with depressive symptoms (PHQ-9 score ≥5) from those without (PHQ-9 score <5), we achieved an accuracy of 86.5%. Furthermore, a regression model that used the same feature to estimate the participants' PHQ-9 scores obtained an average error of 23.5%. Features extracted from mobile phone sensor data, including GPS and phone usage, provided behavioral markers that were strongly related to depressive symptom severity. While these findings must be replicated in a larger study among participants with confirmed clinical symptoms, they suggest that phone sensors offer numerous clinical opportunities, including continuous monitoring of at-risk populations with little patient burden and interventions that can provide just-in-time outreach.
Saeb, Sohrab; Zhang, Mi; Karr, Christopher J; Schueller, Stephen M; Corden, Marya E; Kording, Konrad P
2015-01-01
Background Depression is a common, burdensome, often recurring mental health disorder that frequently goes undetected and untreated. Mobile phones are ubiquitous and have an increasingly large complement of sensors that can potentially be useful in monitoring behavioral patterns that might be indicative of depressive symptoms. Objective The objective of this study was to explore the detection of daily-life behavioral markers using mobile phone global positioning systems (GPS) and usage sensors, and their use in identifying depressive symptom severity. Methods A total of 40 adult participants were recruited from the general community to carry a mobile phone with a sensor data acquisition app (Purple Robot) for 2 weeks. Of these participants, 28 had sufficient sensor data received to conduct analysis. At the beginning of the 2-week period, participants completed a self-reported depression survey (PHQ-9). Behavioral features were developed and extracted from GPS location and phone usage data. Results A number of features from GPS data were related to depressive symptom severity, including circadian movement (regularity in 24-hour rhythm; r=-.63, P=.005), normalized entropy (mobility between favorite locations; r=-.58, P=.012), and location variance (GPS mobility independent of location; r=-.58, P=.012). Phone usage features, usage duration, and usage frequency were also correlated (r=.54, P=.011, and r=.52, P=.015, respectively). Using the normalized entropy feature and a classifier that distinguished participants with depressive symptoms (PHQ-9 score ≥5) from those without (PHQ-9 score <5), we achieved an accuracy of 86.5%. Furthermore, a regression model that used the same feature to estimate the participants’ PHQ-9 scores obtained an average error of 23.5%. Conclusions Features extracted from mobile phone sensor data, including GPS and phone usage, provided behavioral markers that were strongly related to depressive symptom severity. While these findings must be replicated in a larger study among participants with confirmed clinical symptoms, they suggest that phone sensors offer numerous clinical opportunities, including continuous monitoring of at-risk populations with little patient burden and interventions that can provide just-in-time outreach. PMID:26180009
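A minimal sketch of two of the GPS features described, assuming NumPy: location variance (log of summed latitude/longitude variances) and normalized entropy over time spent in location clusters; the crude coordinate-rounding stand-in for clustering is illustrative only:

import numpy as np

def location_variance(lat, lon):
    return np.log(np.var(lat) + np.var(lon) + 1e-12)

def normalized_entropy(cluster_labels):
    _, counts = np.unique(cluster_labels, return_counts=True)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log(p))
    return entropy / np.log(len(p)) if len(p) > 1 else 0.0

lat = 41.88 + 0.01 * np.random.randn(1000)          # toy GPS trace
lon = -87.62 + 0.01 * np.random.randn(1000)
clusters = np.round(lat, 2).astype(str)             # crude stand-in for location clusters
print(location_variance(lat, lon), normalized_entropy(clusters))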
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient for conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is built on cloud generators. With the forward cloud generator, facial expression images can be re-generated in any quantity to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database, where three common features are extracted from seven facial expression images. Finally, the paper concludes with remarks.
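A minimal sketch of the forward normal cloud generator used for re-generating samples from a cloud-model feature, assuming NumPy; a feature is described by expectation Ex, entropy En and hyper-entropy He, and each generated "cloud drop" is a value with a membership degree:

import numpy as np

def forward_cloud(Ex, En, He, n_drops=1000, seed=0):
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n_drops)        # perturbed entropy per drop
    x = rng.normal(Ex, np.abs(En_prime))          # drop positions
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2 + 1e-12))
    return x, mu                                  # drops and membership degrees

drops, memberships = forward_cloud(Ex=0.5, En=0.1, He=0.02)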
PyEEG: an open source Python module for EEG/MEG feature extraction.
Bao, Forrest Sheng; Liu, Xin; Zhang, Christina
2011-01-01
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction
Bao, Forrest Sheng; Liu, Xin; Zhang, Christina
2011-01-01
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582
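A minimal sketch (plain NumPy, deliberately not the PyEEG API itself) of one feature of the kind such a module implements: relative band powers of an EEG epoch computed from its Fourier spectrum:

import numpy as np

def relative_band_powers(x, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band_power = np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                           for lo, hi in bands])
    return band_power / band_power.sum()

fs = 256
epoch = np.random.randn(fs * 4)        # stand-in for a 4-second EEG epoch
delta, theta, alpha, beta = relative_band_powers(epoch, fs)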
ERIC Educational Resources Information Center
Bond, Kristi
2013-01-01
This study used ERP (event-related potentials) to examine both the role of the L1 and the role of individual differences in the processing of agreement violations. Theories of L2 acquisition differ with regard to whether or not native-like acquisition of L2 features is possible (Schwartz and Sprouse, 1994, 1996; Tsimpli and Mastropavlou, 2007),…
NASA Astrophysics Data System (ADS)
Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael
2017-05-01
Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.
Hyperspectral Imagery Data for Remote Sensing
NASA Technical Reports Server (NTRS)
Garegnani, Jerry; Gualtney, Lawrence
1999-01-01
In order for remotely sensed data to be useful in a practical application for agriculture, an information product must be made available to the land management decision maker within 24 to 48 hours of data acquisition. Hyperspectral imagery data is proving useful in differentiating plant species, potentially allowing identification of non-healthy areas and pest infestations within crop fields that may require the farm manager's attention. Currently, however, extracting the needed site-specific feature information from the vast spectral content of large hyperspectral image files is a labor-intensive and time-consuming task, prohibiting the necessary fast turnaround from raw data to final product. We illustrate the methods, techniques and technologies necessary to produce field-level information products from imagery and other related spatial data that are useful to the farm manager for specific decisions that must be made throughout the growing season. We also propose to demonstrate the cost effectiveness of an integrated system, from acquisition to final product distribution, to utilize imagery for decisions on a working farm in conjunction with a commercial agricultural services company and their crop scouts. The demonstration farm is Chesapeake Farms, a 3000-acre research farm in Chestertown, Maryland, on the Eastern Shore, owned by the DuPont Corporation.
Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.
Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi
2018-03-24
In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
An airborne study of microwave surface sensing and boundary layer heat and moisture fluxes for FIFE
NASA Technical Reports Server (NTRS)
Gogineni, S. P.
1995-01-01
The objectives of this work were to perform imaging radar and scatterometer measurements over the Konza Prairie as part of the First International Satellite Land Surface Climatology Project Field Experiment (FIFE) and to develop a mm-wave radiometer and the data acquisition system for this radiometer. We collected imaging radar data with the University of Kansas Side-Looking Airborne Radar (SLAR) operating at 9.375 GHz and scatterometer data with a helicopter-mounted scatterometer at 5.3 and 9.6 GHz. We also developed a 35-GHz null-balancing radiometer and data acquisition system. Although radar images showed good delineation of various features of the FIFE site, the data were not useful for quantitative analysis for extracting soil moisture information because of day-to-day changes in the system transfer characteristics. Our scatterometer results show that both C and X bands are sensitive to soil moisture variations over grass-covered soils. Scattering coefficients near vertical are about 4 dB lower for unburned areas because of the presence of a thatch layer, in comparison with those for burned areas. The results of the research have been documented in reports, oral presentations, and published papers.
NASA Astrophysics Data System (ADS)
Weber, Walter H.; Mair, H. Douglas; Jansen, Dion
2003-03-01
A suite of basic signal processors has been developed. These basic building blocks can be cascaded together to form more complex processors without the need for programming. The data structures between each of the processors are handled automatically. This allows a processor built for one purpose to be applied to any type of data such as images, waveform arrays and single values. The processors are part of Winspect Data Acquisition software. The new processors are fast enough to work on A-scan signals live while scanning. Their primary use is to extract features, reduce noise or to calculate material properties. The cascaded processors work equally well on live A-scan displays, live gated data or as a post-processing engine on saved data. Researchers are able to call their own MATLAB or C-code from anywhere within the processor structure. A built-in formula node processor that uses a simple algebraic editor may make external user programs unnecessary. This paper also discusses the problems associated with ad hoc software development and how graphical programming languages can tie up researchers writing software rather than designing experiments.
KAM (Knowledge Acquisition Module): A tool to simplify the knowledge acquisition process
NASA Technical Reports Server (NTRS)
Gettig, Gary A.
1988-01-01
Analysts, knowledge engineers and information specialists are faced with increasing volumes of time-sensitive data in text form, either as free text or highly structured text records. Rapid access to the relevant data in these sources is essential. However, due to the volume and organization of the contents, and limitations of human memory and association, frequently: (1) important information is not located in time; (2) reams of irrelevant data are searched; and (3) interesting or critical associations are missed due to physical or temporal gaps involved in working with large files. The Knowledge Acquisition Module (KAM) is a microcomputer-based expert system designed to assist knowledge engineers, analysts, and other specialists in extracting useful knowledge from large volumes of digitized text and text-based files. KAM formulates non-explicit, ambiguous, or vague relations, rules, and facts into a manageable and consistent formal code. A library of system rules or heuristics is maintained to control the extraction of rules, relations, assertions, and other patterns from the text. These heuristics can be added, deleted or customized by the user. The user can further control the extraction process with optional topic specifications. This allows the user to cluster extracts based on specific topics. Because KAM formalizes diverse knowledge, it can be used by a variety of expert systems and automated reasoning applications. KAM can also perform important roles in computer-assisted training and skill development. Current research efforts include the applicability of neural networks to aid in the extraction process and the conversion of these extracts into standard formats.
Defense AR Journal. Volume 14, Number 2, September 2007
2007-09-01
2007, Vol. 14, No. 2. Learn. Perform. Succeed. Featured article: "Professionalism in the Acquisition Contracting Workforce: Have We Gone Too Far?" by John Krieger, Director of the Contracting Center of the Defense Acquisition University, asking whether the Department of Defense has gone too far in professionalizing the acquisition contracting workforce.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
New feature extraction method for classification of agricultural products from x-ray images
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.
1999-01-01
Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-03-20
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control
NASA Astrophysics Data System (ADS)
Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.
2012-04-01
A brain-machine interface (BMI) links a user's brain activity directly to an external device. It enables a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has also been proposed to replace humans with robots in the performance of dangerous tasks such as explosives handling/defusing, hazardous materials handling, fire fighting, etc. There are mainly two types of BMI based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features and, finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface with a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. Acceleration, deceleration, and stopping of the robot are controlled based on the attention level of the wearer. In addition, left and right eyeball movements control the direction of the robot.
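A minimal sketch of the control mapping described, under stated assumptions: an attention level derived from classified EEG band powers (the beta-to-(alpha+theta) ratio is a common engagement index; the paper's exact rule is not given here) translated into robot speed commands, with illustrative thresholds:

def attention_index(alpha, beta, theta):
    # Higher beta relative to alpha+theta suggests greater attention.
    return beta / (alpha + theta + 1e-12)

def speed_command(index, accel_threshold=1.0, decel_threshold=0.5):
    if index > accel_threshold:
        return "accelerate"
    if index < decel_threshold:
        return "decelerate"
    return "hold"

# Band powers would come from the classified EEG stream.
print(speed_command(attention_index(alpha=0.3, beta=0.5, theta=0.2)))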
Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations
2008-07-01
classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for... are generalizations of well-known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA)... augment (without degrading performance) a large class of generic fusion processes. Keywords: ontologies, classifications, feature extraction, feature analysis.
NASA Astrophysics Data System (ADS)
Shi, Wenzhong; Deng, Susu; Xu, Wenbing
2018-02-01
For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
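A rough sketch of the local Gi* computation on a curvature grid, assuming a 3x3 neighbourhood of unit weights; the paper's multi-scale weighting and significance testing are more elaborate:

```python
# Getis-Ord Gi* for every cell of a 2-D curvature array (unit weights).
import numpy as np
from scipy.ndimage import uniform_filter

def local_gi_star(curv, size=3):
    n = curv.size
    xbar, s = curv.mean(), curv.std(ddof=1)
    w = float(size * size)                        # sum of (unit) weights
    local_sum = uniform_filter(curv, size=size, mode="nearest") * w
    den = s * np.sqrt((n * w - w ** 2) / (n - 1))
    return (local_sum - xbar * w) / den

# Cells with |Gi*| above e.g. 1.96 (p < 0.05) form the clusters read as
# landslide main scarps, source areas and trails.
```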
Statistical learning and language acquisition
Romberg, Alexa R.; Saffran, Jenny R.
2011-01-01
Human learners, including infants, are highly sensitive to structure in their environment. Statistical learning refers to the process of extracting this structure. A major question in language acquisition in the past few decades has been the extent to which infants use statistical learning mechanisms to acquire their native language. There have been many demonstrations showing infants’ ability to extract structures in linguistic input, such as the transitional probability between adjacent elements. This paper reviews current research on how statistical learning contributes to language acquisition. Current research is extending the initial findings of infants’ sensitivity to basic statistical information in many different directions, including investigating how infants represent regularities, learn about different levels of language, and integrate information across situations. These current directions emphasize studying statistical language learning in context: within language, within the infant learner, and within the environment as a whole. PMID:21666883
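A toy illustration of the transitional-probability cue mentioned above, P(B|A) = count(A followed by B) / count(A); the syllable stream is invented:

```python
# Transitional probabilities over a made-up syllable stream.
from collections import Counter

def transitional_probs(syllables):
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in pairs.items()}

stream = "pa bi ku go la tu pa bi ku da".split()
probs = transitional_probs(stream)
print(probs[("pa", "bi")])  # 1.0: 'bi' always follows 'pa'
print(probs[("ku", "go")])  # 0.5: 'ku' is followed by 'go' or 'da'
```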
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper, a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map significantly reduces the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
The effect of combining two echo times in automatic brain tumor classification by MRS.
García-Gómez, Juan M; Tortajada, Salvador; Vidal, César; Julià-Sapé, Margarida; Luts, Jan; Moreno-Torres, Angel; Van Huffel, Sabine; Arús, Carles; Robles, Montserrat
2008-11-01
(1)H MRS is becoming an accurate, non-invasive technique for initial examination of brain masses. We investigated whether the combination of single-voxel (1)H MRS at 1.5 T at two different echo times (TEs), short TE (PRESS or STEAM, 20-32 ms) and long TE (PRESS, 135-136 ms), improves the classification of brain tumors over using only one TE. A clinically validated dataset of 50 low-grade meningiomas, 105 aggressive tumors (glioblastoma and metastasis), and 30 low-grade glial tumors (astrocytomas grade II, oligodendrogliomas and oligoastrocytomas) was used to fit predictive models based on the combination of features from short-TE and long-TE spectra. A new approach that combines the two consecutively was used to produce a single data vector from which relevant features of the two TE spectra could be extracted by means of three algorithms: stepwise selection, reliefF, and principal components analysis. Least squares support vector machines and linear discriminant analysis were applied to fit the pairwise and multiclass classifiers, respectively. Significant differences in performance were found when short-TE, long-TE or both spectra combined were used as input. In our dataset, the combination of the two TE acquisitions produced optimal performance for discriminating meningiomas. To discriminate aggressive tumors from low-grade glial tumors, the use of short-TE acquisition alone was preferable. The classifier development strategy used here lends itself to automated learning and test performance processes, which may be of use for future web-based multicentric classifier development studies. Copyright (c) 2008 John Wiley & Sons, Ltd.
Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials
Bailly, Clément; Bodet-Milin, Caroline; Couespel, Solène; Necib, Hatem; Kraeber-Bodéré, Françoise; Ansquer, Catherine; Carlier, Thomas
2016-01-01
Purpose: This study aimed to investigate the variability of textural features (TFs) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. Methods: The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of PET/CT 68Ga-DOTANOC in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. Results: The majority of investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and noise in the input data was predominant, as only 4 TFs presented high/intermediate robustness against SUV-based metrics (Entropy, Homogeneity, RP and ZP). When combining several reconstruction settings to mimic multi-centric conditions, most of the investigated TFs were robust enough against SUVmax, except Correlation, Contrast, LGRE, LGZE and LZLGE. Conclusion: Considering previously published results on reproducibility and sensitivity to the delineation approach together with our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials. PMID:27467882
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different groups of dimension reduction methods are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); and (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm, in which the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. The proposed approach is evaluated on the Pavia University hyperspectral dataset. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L
2016-07-01
Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features (area under the ROC curve: [Formula: see text]).
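A hedged sketch of the comparison protocol described above, scoring an SVM by cross-validated ROC AUC; the feature matrices and labels below are synthetic placeholders for the CNN-extracted and handcrafted features, so the printed AUC is near chance:

```python
# SVM classifiers compared by 5-fold cross-validated ROC AUC (sklearn).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_cnn = rng.normal(size=(219, 512))   # stand-in CNN-extracted features
X_hand = rng.normal(size=(219, 30))   # stand-in handcrafted features
y = rng.integers(0, 2, size=219)      # benign (0) vs malignant (1)

for name, X in [("CNN", X_cnn), ("handcrafted", X_hand)]:
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```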
Single-trial laser-evoked potentials feature extraction for prediction of pain perception.
Huang, Gan; Xiao, Ping; Hu, Li; Hung, Yeung Sam; Zhang, Zhiguo
2013-01-01
Pain is a highly subjective experience, and the availability of an objective assessment of pain perception would be of great importance for both basic and clinical applications. The objective of the present study is to develop a novel approach to extract pain-related features from single-trial laser-evoked potentials (LEPs) for classification of pain perception. The single-trial LEP feature extraction approach combines a spatial filtering using common spatial pattern (CSP) and a multiple linear regression (MLR). The CSP method is effective in separating laser-evoked EEG response from ongoing EEG activity, while MLR is capable of automatically estimating the amplitudes and latencies of N2 and P2 from single-trial LEP waveforms. The extracted single-trial LEP features are used in a Naïve Bayes classifier to classify different levels of pain perceived by the subjects. The experimental results show that the proposed single-trial LEP feature extraction approach can effectively extract pain-related LEP features for achieving high classification accuracy.
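A minimal common-spatial-patterns sketch for the spatial-filtering step named above: filters from a generalised eigenproblem that maximise variance for one class while minimising it for the other (the MLR amplitude/latency estimation is not shown):

```python
# CSP spatial filters via a generalised eigendecomposition (scipy assumed).
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda trials: np.mean(
        [x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Solve Ca v = lambda (Ca + Cb) v; the extreme eigenvalues give the
    # most discriminative spatial filters.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, pick].T  # (n_filters, n_channels)
```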
Tackling saponin diversity in marine animals by mass spectrometry: data acquisition and integration.
Decroo, Corentin; Colson, Emmanuel; Demeyer, Marie; Lemaur, Vincent; Caulier, Guillaume; Eeckhaut, Igor; Cornil, Jérôme; Flammang, Patrick; Gerbaux, Pascal
2017-05-01
Saponin analysis by mass spectrometry methods is progressively supplementing other analytical methods such as nuclear magnetic resonance (NMR). Indeed, saponin extracts from plants or marine animals often consist of a complex mixture of (slightly) different saponin molecules that requires extensive purification and separation steps to meet the requirements of NMR spectroscopy measurements. Based on its intrinsic features, mass spectrometry represents an indispensable tool for accessing the structures of saponins within extracts by using LC-MS, MALDI-MS, and tandem mass spectrometry experiments. The combination of different MS methods now allows for a detailed description of saponin structures without extensive purification. However, the structural characterization process is based on low-kinetic-energy CID, which cannot afford a total structure elucidation as far as stereochemistry is concerned. Moreover, the structural difference between saponins in the same extract is often so small that coelution upon LC-MS analysis is unavoidable, rendering the isomeric distinction and characterization by CID challenging or impossible. In the present paper, we introduce ion mobility in combination with liquid chromatography to better tackle the structural complexity of saponin congeners. When analyzing saponin extracts with MS-based methods, handling the data remains problematic for the comprehensive reporting of results and for their efficient comparison. We therefore introduce an original schematic representation using sector diagrams constructed from mass spectrometry data. We believe that the proposed data integration could be useful for data interpretation since it allows for a direct and fast comparison, both in terms of composition and relative proportion of the saponin contents in different extracts. Graphical Abstract: A combination of state-of-the-art mass spectrometry methods, including ion mobility spectroscopy, is developed to afford a complete description of the saponin molecules in natural extracts.
Wang, Yang; Feng, Ruibing; He, Chengwei; Su, Huanxing; Ma, Huan; Wan, Jian-Bo
2018-08-05
The narrow linear range and the limited scan time for a given ion make quantification of features challenging in liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomics with the full-scan mode. Metabolite identification is another bottleneck of untargeted analysis, owing to the difficulty of acquiring MS/MS information for most detected metabolites. In this study, an integrated workflow was proposed using the newly established multiple ion monitoring mode with time-staggered ion lists (tsMIM) and target-directed data-dependent acquisition with time-staggered ion lists (tsDDA) to improve data acquisition and metabolite identification in UHPLC/Q-TOF MS-based untargeted metabolomics. Compared to conventional untargeted metabolomics, the proposed workflow exhibited better repeatability before and after data normalization. After selecting features with significant changes by statistical analysis, MS/MS information for all these features can be obtained by tsDDA analysis to facilitate metabolite identification. Using time-staggered ion lists, the workflow is more sensitive in data acquisition, especially for low-abundance features. Moreover, low-abundance metabolites tend to be wrongly integrated and triggered by full-scan-based untargeted analysis with the MS^E acquisition mode, which can be greatly improved by the proposed workflow. The integrated workflow was also successfully applied to discover serum biosignatures for the genetic modification of fat-1 in mice, which indicates its practicability and great potential for future metabolomics studies. Copyright © 2018 Elsevier B.V. All rights reserved.
Alexnet Feature Extraction and Multi-Kernel Learning for Objectoriented Classification
NASA Astrophysics Data System (ADS)
Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.
2018-04-01
Given that deep convolutional neural networks have strong feature-learning and feature-expression abilities, we conducted exploratory research on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution of the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and a pre-trained AlexNet deep convolutional neural network was used for feature extraction. The spectral features, AlexNet features, and GLCM texture features were then combined using multi-kernel learning and an SVM classifier, and the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference for earthquake disaster investigation and remote sensing disaster evaluation.
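A small sketch of the GLCM texture-feature step mentioned above, using scikit-image; in the paper these would be concatenated with spectral and AlexNet features before multi-kernel learning:

```python
# GLCM texture features for one segmentation object (8-bit patch assumed).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8):
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```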
Classification and pose estimation of objects using nonlinear features
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.
Chen, Yumiao; Yang, Zhongliang
2017-01-01
Recently, several researchers have considered the problem of reconstructing handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinate data of drawing traces as a time series while 7-channel EMG signals were recorded. The root mean square (RMS), a widely used time-domain feature, was extracted over a sliding analysis window. Preliminary reconstruction models were established by GEP, and the original drawing traces were then approximated by the constructed prediction model. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model yields a mean accuracy of 74% in a within-group design (one set of prediction models for all shapes) and 86% in a between-group design (one separate set of prediction models for each shape), averaged over the reconstructed x and y coordinates. We conclude that the proposed three-step hybrid model can feasibly improve the reconstruction of drawing traces from sEMG.
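The RMS feature named above, computed over a sliding analysis window for each sEMG channel; the window length and hop below are illustrative, not the paper's values:

```python
# Sliding-window RMS per sEMG channel.
import numpy as np

def rms_features(emg, win=256, hop=64):
    """emg: (n_channels, n_samples) -> (n_channels, n_windows) RMS."""
    starts = range(0, emg.shape[1] - win + 1, hop)
    return np.array([[np.sqrt(np.mean(emg[ch, s:s + win] ** 2))
                      for s in starts] for ch in range(emg.shape[0])])
```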
Omega-3 chicken egg detection system using a mobile-based image processing segmentation method
NASA Astrophysics Data System (ADS)
Nurhayati, Oky Dwi; Kurniawan Teguh, M.; Cintya Amalia, P.
2017-02-01
An omega-3 chicken egg is a chicken egg produced through food engineering technology: hens are fed a diet high in omega-3 fatty acids, so the egg's omega-3 nutrient content is fifteen times higher than a Leghorn egg's. Visually, its shell has the same shape and colour as a Leghorn's. The eggs can be distinguished by breaking the shell and testing the yolk's nutrient content in a laboratory, but those methods have proven neither effective nor efficient. Observing this problem, the purpose of this research is to build an application that detects omega-3 chicken eggs using mobile-based computer vision. The application was built with the OpenCV computer vision library for the Android operating system. The experiment required chicken egg images taken using an egg candling box; we used 60 omega-3 chicken and Leghorn eggs as samples. Image acquisition was performed with an Android smartphone, after which several image processing steps were applied: GrabCut, RGB-to-8-bit-grayscale conversion, median filtering, P-tile segmentation, and morphological operations. Feature extraction then computed the mean, variance, skewness, and kurtosis of each image, and the chicken egg images were classified using these digital image measurements. The results show that omega-3 chicken eggs and Leghorn eggs have different feature values. The system provides an accuracy of around 91%.
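The four first-order features named above, extracted from the grayscale values of a segmented egg region; scipy.stats supplies the skewness and kurtosis:

```python
# Mean, variance, skewness, kurtosis of a segmented grayscale region.
import numpy as np
from scipy.stats import skew, kurtosis

def first_order_features(region):
    """region: array of grayscale values inside the egg mask."""
    v = region.astype(float).ravel()
    return np.array([v.mean(), v.var(), skew(v), kurtosis(v)])
```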
Applying cybernetic technology to diagnose human pulmonary sounds.
Chen, Mei-Yung; Chou, Cheng-Han
2014-06-01
Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie below about 120 Hz and the human ear is not sensitive to low frequencies, making successful diagnostic classifications is difficult. To solve this problem, we constructed PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, decomposing the PS signals into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a single (haploid) neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To extend traditional auscultation methods, we constructed PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
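A hedged sketch of the wavelet feature-extraction stage: decompose each pulmonary-sound frame into subbands and summarise each subband with simple statistics (the paper extracts 17 such features; the wavelet family and depth here are assumptions):

```python
# Wavelet subband statistics as classifier inputs (PyWavelets assumed).
import numpy as np
import pywt

def wavelet_features(frame, wavelet="db4", level=5):
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    feats = []
    for c in coeffs:  # one approximation + `level` detail subbands
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return np.array(feats)  # statistical input vector for the neural network
```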
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyflot, MJ; Yang, F; Byrd, D
Purpose: Despite increased use of heterogeneity metrics for PET imaging, standards for metrics such as textural features have yet to be developed. We evaluated the quantitative variability caused by image acquisition and reconstruction parameters on PET textural features. Methods: PET images of the NEMA IQ phantom were simulated with realistic image acquisition noise. 35 features based on intensity histograms (IH), co-occurrence matrices (COM), neighborhood-difference matrices (NDM), and zone-size matrices (ZSM) were evaluated within lesions (13, 17, 22, 28, 33 mm diameter). Variability in metrics across 50 independent images was evaluated as percent difference from mean for three phantom girths (850, 1030, 1200 mm) and two OSEM reconstructions (2 iterations, 28 subsets, 5 mm FWHM filtration vs 6 iterations, 28 subsets, 8.6 mm FWHM filtration). Also, the patient sample size needed to detect a clinical effect of 30% with Bonferroni-corrected α=0.001 and 95% power was estimated. Results: As a class, NDM features demonstrated the greatest sensitivity in means (5–50% difference for medium girth and reconstruction comparisons and 10–100% for large girth comparisons). Some IH features (standard deviation, energy, entropy) had variability below 10% for all sensitivity studies, while others (kurtosis, skewness) had variability above 30%. COM and ZSM features had complex sensitivities; correlation, energy, entropy (COM) and zone percentage, short-zone emphasis, zone-size non-uniformity (ZSM) had variability less than 5%, while other metrics had differences up to 30%. Trends were similar for sample size estimation; for example, coarseness, contrast, and strength required 12, 38, and 52 patients to detect a 30% effect for the small girth case but 38, 88, and 128 patients in the large girth case. Conclusion: The sensitivity of PET textural features to image acquisition and reconstruction parameters is large and feature-dependent. Standards are needed to ensure that prospective trials which incorporate textural features are properly designed to detect clinical endpoints. Supported by NIH grants R01 CA169072, U01 CA148131, NCI Contract (SAIC-Frederick) 24XS036-004, and a research contract from GE Healthcare.
Finger vein recognition based on the hyperinformation feature
NASA Astrophysics Data System (ADS)
Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Yang, Lu
2014-01-01
The finger vein is a promising biometric pattern for personal identification due to its advantages over other existing biometrics. In finger vein recognition, feature extraction is a critical step, and many feature extraction methods have been proposed to extract the gray, texture, or shape of the finger vein. We treat them as low-level features and present a high-level feature extraction framework. Under this framework, a base attribute is first defined to represent the characteristics of a certain subcategory of a subject. Then, for an image, the correlation coefficient is used to construct the high-level feature, which reflects the correlation between this image and all base attributes. Since the high-level feature can reveal characteristics of more subcategories and contains more discriminative information, we call it the hyperinformation feature (HIF). Compared with low-level features, which only represent the characteristics of one subcategory, HIF is more powerful and robust. In order to demonstrate the potential of the proposed framework, we provide a case study on extracting HIF. We conduct comprehensive experiments on our databases to show the generality of the proposed framework and the effectiveness of HIF. Experimental results show that HIF significantly outperforms the low-level features.
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.
Feature Acquisition with Imbalanced Training Data
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.; Jones, Dayton L.
2011-01-01
This work considers cost-sensitive feature acquisition that attempts to classify a candidate datapoint from incomplete information. In this task, an agent acquires features of the datapoint using one or more costly diagnostic tests, and eventually ascribes a classification label. A cost function describes both the penalties for feature acquisition and misclassification errors. A common solution is a Cost Sensitive Decision Tree (CSDT), a branching sequence of tests with features acquired at interior decision points and class assignment at the leaves. CSDTs can incorporate a wide range of diagnostic tests and can reflect arbitrary cost structures. They are particularly useful for online applications due to their low computational overhead. In this innovation, CSDTs are applied to cost-sensitive feature acquisition where the goal is to recognize very rare or unique phenomena in real time. Example applications from this domain include four areas. In stream processing, one seeks unique events in a real-time data stream that is too large to store. In fault protection, a system must adapt quickly to react to anticipated errors by triggering repair activities or follow-up diagnostics. With real-time sensor networks, one seeks to classify unique, new events as they occur. With observational sciences, a new generation of instrumentation seeks unique events through online analysis of large observational datasets. This work presents a solution based on transfer learning principles that permits principled CSDT learning while exploiting any prior knowledge of the designer to correct both between-class and within-class imbalance. Training examples are adaptively reweighted based on a decomposition of the data attributes. The result is a new, nonparametric representation that matches the anticipated attribute distribution for the target events.
Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction.
Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn
2017-12-01
The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health-care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing amounts of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase by up to seven times compared to the previous CPU system.
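An illustrative GPU offload in the spirit of the case study: a batch of audio frames is moved to the GPU and simple spectral features are computed there in one batched pass with PyTorch (the paper's actual feature set and pipeline are not reproduced here):

```python
# Batched spectral features on the GPU (falls back to CPU if unavailable).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
frames = torch.randn(4096, 1024, device=device)   # a batch of audio frames
spec = torch.abs(torch.fft.rfft(frames, dim=1))   # magnitude spectra
feats = torch.stack([spec.mean(dim=1), spec.std(dim=1),
                     spec.argmax(dim=1).float()], dim=1)
print(feats.shape)  # (4096, 3), computed in one batched pass
```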
Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans
2017-04-01
Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate use of each input pixel in the feature-construction process avoid the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded-up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at a 1.8 V supply voltage is achieved during VGA video processing at 120 MHz with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.
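For contrast with the pixel-pipelined design above, a sketch of the conventional route it avoids: a two-rectangle Haar-like feature computed from an integral image (region coordinates are illustrative):

```python
# Haar-like feature via the memory-intensive integral-image approach.
import numpy as np

def integral_image(img):
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from integral image ii in O(1)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_horizontal(img, r, c, h, w):
    """Left-minus-right two-rectangle feature at (r, c), size h x 2w."""
    ii = integral_image(img.astype(np.int64))
    return (rect_sum(ii, r, c, r + h, c + w)
            - rect_sum(ii, r, c + w, r + h, c + 2 * w))
```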
Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.
1981-03-01
This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army... topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially
NASA Astrophysics Data System (ADS)
Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.
2017-03-01
Predicting the risk of occult invasive disease in ductal carcinoma in situ (DCIS) is an important task to help address the overdiagnosis and overtreatment problems associated with breast cancer. In this work, we investigated the feasibility of using computer-extracted mammographic features to predict occult invasive disease in patients with biopsy proven DCIS. We proposed a computer-vision algorithm based approach to extract mammographic features from magnification views of full field digital mammography (FFDM) for patients with DCIS. After an expert breast radiologist provided a region of interest (ROI) mask for the DCIS lesion, the proposed approach is able to segment individual microcalcifications (MCs), detect the boundary of the MC cluster (MCC), and extract 113 mammographic features from MCs and MCC within the ROI. In this study, we extracted mammographic features from 99 patients with DCIS (74 pure DCIS; 25 DCIS plus invasive disease). The predictive power of the mammographic features was demonstrated through binary classifications between pure DCIS and DCIS with invasive disease using linear discriminant analysis (LDA). Before classification, the minimum redundancy Maximum Relevance (mRMR) feature selection method was first applied to choose subsets of useful features. The generalization performance was assessed using Leave-One-Out Cross-Validation and Receiver Operating Characteristic (ROC) curve analysis. Using the computer-extracted mammographic features, the proposed model was able to distinguish DCIS with invasive disease from pure DCIS, with an average classification performance of AUC = 0.61 +/- 0.05. Overall, the proposed computer-extracted mammographic features are promising for predicting occult invasive disease in DCIS.
Region of interest extraction based on multiscale visual saliency analysis for remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan
2015-01-01
Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior-knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which approximates the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism with a difference-of-Gaussian template is used to extract the intensity feature; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that accounts for the different contributions of each feature map, calculating a weight for each feature image before combining them into the final saliency map. Qualitative and quantitative experimental results comparing the MVS model with other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
Rotation Covariant Image Processing for Biomedical Applications
Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences. PMID:23710255
Acquisition and processing of advanced sensor data for ERW and UXO detection and classification
NASA Astrophysics Data System (ADS)
Schultz, Gregory M.; Keranen, Joe; Miller, Jonathan S.; Shubitidze, Fridon
2014-06-01
The remediation of explosive remnants of war (ERW) and associated unexploded ordnance (UXO) has seen improvements through the injection of modern technological advances and streamlined standard operating procedures. However, reliable and cost-effective detection and geophysical mapping of sites contaminated with UXO such as cluster munitions, abandoned ordnance, and improvised explosive devices rely on the ability to discriminate hazardous items from metallic clutter. In addition to anthropogenic clutter, handheld and vehicle-based metal detector systems are plagued by natural geologic and environmental noise in many post-conflict areas. We present new and advanced electromagnetic induction (EMI) technologies including man-portable and towed EMI arrays and associated data processing software. While these systems feature vastly different form factors and transmit-receive configurations, they all exhibit several fundamental traits that enable successful classification of EMI anomalies. Specifically, multidirectional sampling of scattered magnetic fields from targets and the corresponding high volume of unique data provide rich information for extracting useful classification features for clutter rejection analysis. The quality of classification features depends largely on the extent to which the data resolve unique physics-based parameters. To date, most of the advanced sensors enable high-quality inversion by producing data that are extremely rich in spatial content through multi-angle illumination and multi-point reception.
Monitoring machining conditions by infrared images
NASA Astrophysics Data System (ADS)
Borelli, Joao E.; Gonzaga Trabasso, Luis; Gonzaga, Adilson; Coelho, Reginaldo T.
2001-03-01
During the machining process, knowledge of the temperature is the most important factor in tool analysis. It allows control of the main factors that influence tool use, lifetime, and waste. The temperature in the contact area between the workpiece and the tool results from material removal during the cutting operation, and it is difficult to obtain because the tool and the workpiece are in motion. One way to measure the temperature in this situation is to detect the infrared radiation. This work presents a new methodology for diagnosis and monitoring of machining processes using infrared images. The infrared image provides a map in gray tones of the elements in the process: tool, workpiece and chips. Each gray tone in the image corresponds to a certain temperature for each of those materials, and the relationship between gray tones and temperature is obtained by prior calibration of the infrared camera. The system developed in this work uses an infrared camera, a frame grabber board and software composed of three modules. The first module performs image acquisition and processing. The second module extracts image features and builds the feature vector. Finally, the third module uses fuzzy logic to evaluate the feature vector and outputs the tool state diagnosis.
Language Learning in Mindbodyworld: A Sociocognitive Approach to Second Language Acquisition
ERIC Educational Resources Information Center
Atkinson, Dwight
2014-01-01
Based on recent research in cognitive science, interaction, and second language acquisition (SLA), I describe a sociocognitive approach to SLA. This approach adopts a "non-cognitivist" view of cognition: Instead of an isolated computational process in which input is extracted from the environment and used to build elaborate internal…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq,; Widita, Rena
2014-09-30
Optical coherence tomography (OCT) is often used in medical image acquisition to diagnose retinal changes because it is easy to use and inexpensive. Unfortunately, this type of examination produces only two-dimensional retinal images at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into three-dimensional images to display the volumetric macula accurately. The system is built in three main stages: data acquisition, data extraction and 3-dimensional reconstruction. At the data acquisition step, OCT produced six *.jpg images for each patient, which were then extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system which has been designed produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
Engagement Assessment Using EEG Signals
NASA Technical Reports Server (NTRS)
Li, Feng; Li, Jiang; McKenzie, Frederic; Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean
2012-01-01
In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach consisting of data preprocessing, feature extraction and engagement state classification. During data preprocessing, spikes, baseline drift and saturation caused by recording devices in EEG signals are identified and eliminated, and a wavelet-based method is utilized to remove ocular and muscular artifacts in the EEG recordings. In feature extraction, power spectral densities with 1 Hz bins are calculated as features, and these features are analyzed using the Fisher score and the one-way ANOVA method. In the classification step, a committee classifier is trained based on the extracted features to assess engagement status. Finally, experimental results showed that there exist significant differences in the extracted features among different subjects, and we implemented a feature normalization procedure to mitigate the differences and significantly improve the engagement assessment performance.
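The Fisher-score analysis mentioned above, in a minimal two-class form: for each feature, the ratio of between-class separation to within-class scatter (class-size weighting is omitted):

```python
# Per-feature Fisher scores for a two-class problem; higher = better.
import numpy as np

def fisher_scores(X, y):
    """X: (n_samples, n_features); y: binary labels (0/1)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den
```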
The optimal selection of micro-motion features based on Support Vector Machine
NASA Astrophysics Data System (ADS)
Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing
2017-11-01
Targets exhibit multiple micro-motion forms, and different forms are easily confused after modulation, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal micro-motion feature selection method based on the support vector machine (SVM). After computing the time-frequency distribution of the radar echoes, the time-frequency spectra of objects with different micro-motion forms are compared, and features are extracted from the differences between the instantaneous frequency variations of the different micro-motions. The best features are then selected using the SVM-based method. Finally, the results show that the proposed method is feasible under test conditions with a certain signal-to-noise ratio (SNR).
A Review of Feature Extraction Software for Microarray Gene Expression Data
Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini
2014-01-01
When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
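Typical use of one reviewed method, PCA, to reduce a gene-expression matrix to a small set of extracted features via scikit-learn; the data below are synthetic:

```python
# PCA feature extraction from a samples-by-genes expression matrix.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(100, 5000))  # samples x genes
pca = PCA(n_components=20)
X_reduced = pca.fit_transform(X)           # 100 x 20 reduced representation
print(pca.explained_variance_ratio_[:5])   # variance captured per component
```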
Research on key technology of prognostic and health management for autonomous underwater vehicle
NASA Astrophysics Data System (ADS)
Zhou, Zhi
2017-12-01
Autonomous Underwater Vehicles (AUVs) are untethered, autonomously moving underwater robots. With a wide activity range, they can travel thousands of kilometers. Their advantages of wide range, good maneuverability, safety, and intelligence make them an important tool for various underwater tasks. How to improve the diagnosis accuracy of AUV electrical system faults, and how to use that information to repair AUVs, are a focus of navies worldwide; in turn, ensuring safe and reliable system operation is very important for improving AUV sailing performance. To solve these problems, this paper researches prognostic and health management (PHM) technology for AUVs and proposes the overall framework and key technologies, such as data acquisition, feature extraction, fault diagnosis, and failure prediction.
BioSig: The Free and Open Source Software Library for Biomedical Signal Processing
Vidaurre, Carmen; Sander, Tilmann H.; Schlögl, Alois
2011-01-01
BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. Some of the areas where BioSig can be employed are neuroinformatics, brain-computer interfaces, neurophysiology, psychology, cardiovascular systems, and sleep research. Moreover, the analysis of biosignals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), or respiration signals is a very relevant element of the BioSig project. Specifically, BioSig provides solutions for data acquisition, artifact processing, quality control, feature extraction, classification, modeling, and data visualization, to name a few. In this paper, we highlight several methods to help students and researchers to work more efficiently with biomedical signals. PMID:21437227
Computer Aided Diagnostic Support System for Skin Cancer: A Review of Techniques and Algorithms
Masood, Ammara; Al-Jumaily, Adel Ali
2013-01-01
Image-based computer aided diagnosis systems have significant potential for screening and early detection of malignant melanoma. We review the state of the art in these systems and examine current practices, problems, and prospects of image acquisition, pre-processing, segmentation, feature extraction and selection, and classification of dermoscopic images. This paper reports statistics and results from the most important implementations reported to date. We compared the performance of several classifiers specifically developed for skin lesion diagnosis and discussed the corresponding findings. Whenever available, indication of various conditions that affect the technique's performance is reported. We suggest a framework for comparative assessment of skin cancer diagnostic models and review the results based on these models. The deficiencies in some of the existing studies are highlighted and suggestions for future research are provided. PMID:24575126
Context Inference for Mobile Applications in the UPCASE Project
NASA Astrophysics Data System (ADS)
Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.; Ferreira, Diogo R.; Diniz, Pedro C.; Chainho, Paulo
The growing processing capabilities of mobile devices coupled with portable and wearable sensors have enabled the development of context-aware services tailored to the user environment and its daily activities. The problem of determining the user context at each particular point in time is one of the main challenges in this area. In this paper, we describe the approach pursued in the UPCASE project, which makes use of sensors available in the mobile device as well as sensors externally connected via Bluetooth. We describe the system architecture from raw data acquisition to feature extraction and context inference. As a proof of concept, the inference of contexts is based on a decision tree to learn and identify contexts automatically and dynamically at runtime. Preliminary results suggest that this is a promising approach for context inference in several application scenarios.
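A hedged sketch of the runtime inference step described above: a decision tree mapping extracted sensor features to context labels. The feature set and contexts below are invented for illustration; UPCASE's real features are not reproduced:

```python
# Decision-tree context inference from sensor-derived features (sklearn).
from sklearn.tree import DecisionTreeClassifier

# Hypothetical columns: mean acceleration magnitude (g), ambient noise
# level (dB), light level (lux); labels are hypothetical user contexts.
X_train = [[0.1, 35, 300], [1.2, 60, 500], [0.1, 40, 5], [2.5, 70, 800]]
y_train = ["sitting", "walking", "sleeping", "running"]

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(tree.predict([[1.0, 55, 450]]))  # -> e.g. ['walking']
```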
Compact quantum random number generator based on superluminescent light-emitting diodes
NASA Astrophysics Data System (ADS)
Wei, Shihai; Yang, Jie; Fan, Fan; Huang, Wei; Li, Dashuang; Xu, Bingjie
2017-12-01
By measuring the amplified spontaneous emission (ASE) noise of superluminescent light-emitting diodes, we propose and realize a practical quantum random number generator (QRNG). In the QRNG, after detection and amplification of the ASE noise, data acquisition and randomness extraction, both integrated in a field-programmable gate array (FPGA), are implemented in real time, and the final random bit sequences are delivered to a host computer with a real-time generation rate of 1.2 Gbps. Further, to achieve compactness, all components of the QRNG are integrated on three independent printed circuit boards with a compact design, and the QRNG is packed in a small enclosure sized 140 mm × 120 mm × 25 mm. The final random bit sequences pass all the NIST-STS and DIEHARD tests.
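A sketch of the kind of randomness extraction commonly implemented in FPGAs for ASE-based QRNGs; Toeplitz hashing is an assumption here, since the abstract does not name the extractor:

```python
# Toeplitz-hashing extractor: compress raw bits to fewer, less-biased bits.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_extract(raw_bits, out_len, seed=1):
    """raw_bits: 0/1 int array; returns out_len extracted bits."""
    rng = np.random.default_rng(seed)
    n = raw_bits.size
    # Seeded binary Toeplitz matrix of shape (out_len, n).
    T = toeplitz(rng.integers(0, 2, out_len), rng.integers(0, 2, n))
    return (T @ raw_bits) % 2
```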
Weakly supervised image semantic segmentation based on clustering superpixels
NASA Astrophysics Data System (ADS)
Yan, Xiong; Liu, Xiaohua
2018-04-01
In this paper, we propose an image semantic segmentation model that is trained from image-level labeled images. The proposed model starts with superpixel segmentation, and features of the superpixels are extracted by a trained CNN. We introduce a superpixel-based graph and apply a graph partition method to group correlated superpixels into clusters. To acquire the inter-label correlations between the image-level labels in the dataset, we utilize both label co-occurrence statistics and visual contextual cues. Finally, we formulate the task of mapping appropriate image-level labels to the detected clusters as a convex minimization problem. Experimental results on the MSRC-21 dataset and the LabelMe dataset show that the proposed method performs better than most weakly supervised methods and is even comparable to fully supervised methods.
[Multi-channel motion signal acquisition system and experimental results].
Zhong, Sheng; Yi, Wanguan; Deng, Ke; Zhan, Kai; Wen, Huiying; Chen, Xin
2014-09-01
To study muscle function and features during exercise, a multi-channel data acquisition system was developed; the overall system design, hardware composition, system functions and related aspects are described in detail. The system achieves synchronous acquisition and storage of surface EMG signals, joint angle signals, plantar pressure signals and ultrasonic images, and initial results are presented.
A phantom design for assessment of detectability in PET imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollenweber, Scott D., E-mail: scott.wollenweber@g
2016-09-15
Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of 18F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background. Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. The exostructure sphere-like features created voids in a background of small, solid non-porous plastic (acrylic) spheres inside a fillable tank. The features filled at full concentration while the background concentration was reduced due to filling only between the solid spheres. Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-size) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom and with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages, including filling simplicity, wall-less contrast features, control of the detectability range via feature size, and a clinically realistic lumpy background. Conclusions: This phantom provides a practical method for testing and comparison of lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.
Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals
Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu
2012-01-01
Bearings are not only the most important element but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction technology, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has the advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signals data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017
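As an illustration of the kind of original feature pool that an SR-based extractor would consume, the hedged sketch below computes a few standard time- and frequency-domain statistics from a simulated vibration signal; the feature set and sampling rate are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import kurtosis

def vibration_features(x, fs):
    """Return a small dictionary of time/frequency-domain features."""
    rms = np.sqrt(np.mean(x ** 2))
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return {
        "rms": rms,
        "peak": np.max(np.abs(x)),
        "crest_factor": np.max(np.abs(x)) / rms,
        "kurtosis": kurtosis(x),                      # impulsiveness of faults
        "spectral_centroid": np.sum(freqs * spectrum) / np.sum(spectrum),
    }

fs = 12_000                                           # assumed sampling rate
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.randn(fs)  # toy signal
print(vibration_features(x, fs))
```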
Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui
2017-08-17
It is urgent to diagnose colorectal cancer in its early stage. Some feature genes that are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of the specific cancer genes that are associated with advanced clinical stage. In this paper, we conducted a feature extraction method named Optimal Mean based Block Robust Feature Extraction (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using integrated colorectal cancer data. First, based on the optimal mean and the L2,1-norm, a novel feature extraction method called Optimal Mean based Robust Feature Extraction (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block ideology into the OMRFE method, is put forward to process the integrated colorectal cancer data, which includes multiple genomic data types: copy number alterations, somatic mutations, methylation expression alteration, as well as gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.
A judicious multiple hypothesis tracker with interacting feature extraction
NASA Astrophysics Data System (ADS)
McAnanama, James G.; Kirubarajan, T.
2009-05-01
The multiple hypotheses tracker (mht) is recognized as an optimal tracking method due to the enumeration of all possible measurement-to-track associations, which does not involve any approximation in its original formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the computational complexity. It is possible to improve the performance of a tracker, mht or not, using feature information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems, the extraction of features from the raw sensor data is typically independent of the subsequent association and filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (jmht), whereby there is an interaction between feature extraction and the mht, is presented. The measure of the quality of feature extraction is input into measurement-to-track association while the prediction step feeds back the parameters to be used in the next round of feature extraction. The motivation for this forward and backward interaction between feature extraction and tracking is to improve the performance in both steps. This approach allows for a more rational partitioning of the feature space and removes unlikely features from the assignment problem. Simulation results demonstrate the benefits of the proposed approach.
A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.
Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun
2017-07-01
Feature extraction of EEG signals plays a significant role in Brain-computer interface (BCI) applications, as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and to reduce time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from the mental states based on EEG signals in BCI applications. We apply the correlation-based variable selection method with best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental-state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least square support vector machine (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing them with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained features. The average sensitivity, specificity and classification accuracy for these two classifiers are the same: 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement on dataset IVa. The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for extracting representative information from mental-state EEG signals in BCI applications and for reducing the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. All rights reserved.
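The sketch below illustrates the general PCA-plus-cross-covariance idea on toy EEG epochs; the exact CCOV formulation and the subsequent best-first feature selection in the paper may differ, and the lag window is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 256))   # toy EEG: 100 epochs x 256 samples

# Step 1: PCA gives a compact reference template (first principal component).
pca = PCA(n_components=1)
pca.fit(epochs)
template = pca.components_[0]

# Step 2: cross-covariance of each mean-removed epoch with the template,
# truncated to a few lags, serves as the discriminative feature vector.
def ccov_features(epoch, template, max_lag=8):
    e = epoch - epoch.mean()
    t = template - template.mean()
    full = np.correlate(e, t, mode="full") / len(e)
    mid = len(full) // 2
    return full[mid - max_lag: mid + max_lag + 1]

features = np.array([ccov_features(ep, template) for ep in epochs])
print(features.shape)   # (100, 17)
```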
User-oriented summary extraction for soccer video based on multimodal analysis
NASA Astrophysics Data System (ADS)
Liu, Huayong; Jiang, Shanshan; He, Tingting
2011-11-01
An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction of soccer video is introduced: a novel approach that integrates multimodal analysis, including extraction and analysis of stadium features, moving-object features, audio features and text features. From these features, the semantics of the soccer video and the highlight mode are obtained. We can then locate the highlight positions and assemble them by highlight degree to obtain the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.
NASA Astrophysics Data System (ADS)
Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.
2017-01-01
Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the best-suited model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising candidates for further analysis in the recent development of feature extraction and classification.
Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav
2014-03-01
Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which feature extraction method should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of different wavelet transform feature extraction methods in brain MRI abnormality detection. Applied to T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are used to construct the feature pool. Three different classifiers, Support Vector Machine, K Nearest Neighborhood, and Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM result in the highest classification accuracy, demonstrating the capability of wavelet transform features to be informative in this application.
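A minimal sketch of the DWT branch only, using PyWavelets: subband energies from a two-level 2-D decomposition of a stand-in image. The wavelet choice (db4) and the energy features are illustrative assumptions, not the paper's exact feature pool.

```python
import numpy as np
import pywt

image = np.random.rand(128, 128)            # stand-in for a T1-weighted slice
coeffs = pywt.wavedec2(image, wavelet="db4", level=2)

features = [np.mean(coeffs[0] ** 2)]        # approximation-band energy
for detail_level in coeffs[1:]:
    for band in detail_level:               # (horizontal, vertical, diagonal)
        features.append(np.mean(band ** 2))
print(len(features), features[:3])
```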
Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images
NASA Astrophysics Data System (ADS)
Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.
2018-04-01
A novel ship detection method that aims to make full use of both the spatial and spectral information of hyperspectral images is proposed. First, the band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features including spectral and texture features are extracted from the hyperspectral images: principal components analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing single features and different multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method can stably detect ships against complex backgrounds and can effectively improve detection accuracy.
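The texture branch and the RF classifier can be sketched as below with scikit-image and scikit-learn (the function names graycomatrix/graycoprops follow scikit-image 0.19+); the data, patch size and property set are toy assumptions standing in for the hyperspectral pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch):
    """Contrast/homogeneity/energy/correlation from a uint8 image patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity",
                                "energy", "correlation")])

rng = np.random.default_rng(1)
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)        # toy ship / non-ship labels

X = np.array([glcm_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X[:5]))
```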
Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning
2018-03-08
Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
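Amino acid composition (AAC) is among the simplest of the descriptor types iFeature covers; the snippet below is an independent re-implementation for illustration, not code taken from the toolkit itself.

```python
# The 20 standard amino acids in alphabetical one-letter order.
AA = "ACDEFGHIKLMNPQRSTVWY"

def aac(sequence):
    """Fraction of each of the 20 standard amino acids in a sequence."""
    sequence = sequence.upper()
    n = len(sequence)
    return [sequence.count(a) / n for a in AA]

print(aac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))  # 20-dimensional vector
```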
A graph-Laplacian-based feature extraction algorithm for neural spike sorting.
Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos
2009-01-01
Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
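The graph-Laplacian machinery underlying this style of feature extraction can be sketched as follows; note that this shows the generic Laplacian-embedding idea on toy spike waveforms, not the exact GLF objective with its weighted-variance term.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

rng = np.random.default_rng(2)
spikes = rng.standard_normal((60, 48))       # 60 waveforms x 48 samples

# Heat-kernel similarity graph over waveforms.
d2 = cdist(spikes, spikes, "sqeuclidean")
W = np.exp(-d2 / d2.mean())
np.fill_diagonal(W, 0.0)

D = np.diag(W.sum(axis=1))
L = D - W                                    # unnormalized graph Laplacian

# Smallest nontrivial eigenvectors give the low-dimensional feature space.
vals, vecs = eigh(L)
embedding = vecs[:, 1:3]                     # skip the constant eigenvector
print(embedding.shape)                       # (60, 2)
```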
Large-Scale Image Analytics Using Deep Learning
NASA Astrophysics Data System (ADS)
Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.
2014-12-01
High resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with ~60 million pixels each) and has a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across a quarter million NAIP tiles that cover the entire Continental United States. The AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, ElastiCache shared memory architecture for image segmentation, Elastic MapReduce (EMR) for image feature extraction, and the memory-optimized Elastic Cloud Compute (EC2) for the learning algorithm.
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract the robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies entropy-calculation-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
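A hedged sketch of distortion-compensated dither modulation on a single host value follows; the delta and alpha parameters are illustrative assumptions, and the DAISY-based feature extraction stage is out of scope here.

```python
import numpy as np

def dcdm_embed(x, bit, delta=8.0, alpha=0.8, dither=0.0):
    """Quantize x onto the lattice of `bit`, then compensate distortion."""
    d = dither + bit * delta / 2.0
    q = np.round((x - d) / delta) * delta + d   # quantizer for this bit
    return q + (1.0 - alpha) * (x - q)          # distortion compensation

def dcdm_decode(y, delta=8.0, dither=0.0):
    """Pick the bit whose quantizer lattice is closest to y."""
    errs = []
    for bit in (0, 1):
        d = dither + bit * delta / 2.0
        q = np.round((y - d) / delta) * delta + d
        errs.append(abs(y - q))
    return int(np.argmin(errs))

x = 37.3                                        # a host feature value
y = dcdm_embed(x, bit=1)
print(y, dcdm_decode(y))                        # decoded bit should be 1
```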
Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images
NASA Astrophysics Data System (ADS)
Eken, S.; Aydın, E.; Sayar, A.
2017-11-01
In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.
Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures
NASA Astrophysics Data System (ADS)
Li, Quanbao; Wei, Fajie; Zhou, Shenghan
2017-05-01
The linear discriminant analysis (LDA) is one of the most popular means of linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used feature extraction approaches usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA has the advantages of both parametric and nonparametric algorithms and higher classification accuracy. A quartic unilateral kernel function may provide better robustness of prediction than other functions. LKNDA gives an alternative solution for discriminant cases of complex nonlinear feature extraction or unknown feature extraction. Finally, the application of LKNDA to the complex feature extraction of financial market activities is proposed.
Wen, Tingxi; Zhang, Zhongnan
2017-01-01
In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
Diabetic Retinopathy Screening by Bright Lesions Extraction from Fundus Images
NASA Astrophysics Data System (ADS)
Hanđsková, Veronika; Pavlovičova, Jarmila; Oravec, Miloš; Blaško, Radoslav
2013-09-01
Retinal images are nowadays widely used to diagnose many diseases, for example diabetic retinopathy. In our work, we propose an algorithm for a screening application that identifies patients with such a severe diabetic complication as diabetic retinopathy in an early phase. The application uses the patient's fundus photography without any additional examination by an ophthalmologist. After this screening identification, other examination methods should be considered and the patient's follow-up by a doctor is necessary. Our application is composed of three principal modules: fundus image preprocessing, feature extraction and feature classification. The image preprocessing module performs luminance normalization, contrast enhancement and optic disk masking. The feature extraction module includes two stages: localization of bright-lesion candidates and extraction of candidate features. We selected 16 statistical and structural features. For feature classification, we use a multilayer perceptron (MLP) with one hidden layer and classify images into two classes. Feature classification efficiency is about 93 percent.
Consonants and Vowels: Different Roles in Early Language Acquisition
ERIC Educational Resources Information Center
Hochmann, Jean-Remy; Benavides-Varela, Silvia; Nespor, Marina; Mehler, Jacques
2011-01-01
Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor…
Zhang, Heng; Pan, Zhongming; Zhang, Wenna
2018-06-07
An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either the acoustic signal or the seismic signal alone.
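A minimal sketch of the WCER idea, using PyWavelets' stationary wavelet transform as an undecimated (à trous-style) decomposition: the energy of each detail level divided by the total detail energy gives the feature vector. The wavelet and level count are illustrative choices.

```python
import numpy as np
import pywt

def wcer(signal, wavelet="db2", level=4):
    """Wavelet coefficient energy ratios per decomposition level."""
    coeffs = pywt.swt(signal, wavelet, level=level)   # [(cA_i, cD_i), ...]
    energies = np.array([np.sum(cD ** 2) for _, cD in coeffs])
    return energies / energies.sum()

fs = 1024                                  # length divisible by 2**level
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(fs)
print(wcer(sig))    # one energy-ratio value per wavelet level
```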
Extraction of ECG signal with adaptive filter for heart abnormalities detection
NASA Astrophysics Data System (ADS)
Turnip, Mardi; Saragih, Rijois. I. E.; Dharma, Abdi; Esti Kusumandari, Dwi; Turnip, Arjon; Sitanggang, Delima; Aisyah, Siti
2018-04-01
This paper demonstrates an adaptive filter method for extraction of electrocardiogram (ECG) features in heart abnormality detection. In particular, an electrocardiogram (ECG) is a recording of the heart's electrical activity, capturing a tracing of the cardiac electrical impulse as it moves from the atrium to the ventricles. The applied algorithm evaluates and analyzes ECG signals for abnormality detection based on P, Q, R and S peaks. In the first phase, the real-time ECG data is acquired and pre-processed. In the second phase, the procured ECG signal is subjected to the feature extraction process. The extracted features detect abnormal peaks present in the waveform. Thus normal and abnormal ECG signals can be differentiated based on the extracted features.
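The peak-oriented step can be illustrated with a simple R-peak detector built on scipy's find_peaks; the toy signal, height threshold and refractory distance below are assumptions for illustration, not the paper's adaptive-filter pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 360                                   # assumed sampling rate (Hz)
t = np.arange(10 * fs) / fs
# Toy ECG-like signal: a ~1.2 Hz impulse train smoothed and noised.
ecg = np.zeros_like(t)
ecg[np.arange(0, len(t), int(fs / 1.2))] = 1.0
ecg = np.convolve(ecg, np.hanning(15), mode="same") \
      + 0.05 * np.random.randn(len(t))

# R peaks: enforce a minimum height and a refractory distance (~250 ms).
peaks, _ = find_peaks(ecg, height=0.4, distance=int(0.25 * fs))
rr_intervals = np.diff(peaks) / fs         # RR intervals in seconds
print(len(peaks), rr_intervals[:5])
```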
An age-related deficit in spatial-feature reference memory in homing pigeons (Columba livia).
Coppola, Vincent J; Flaim, Mary E; Carney, Samantha N; Bingman, Verner P
2015-03-01
Age-related memory decline in mammals has been well documented. By contrast, very little is known about memory decline in birds as they age. In the current study we trained younger and older homing pigeons on a reference memory task in which a goal location could be encoded by spatial and feature cues. Consistent with a previous working memory study, the results revealed impaired acquisition of combined spatial-feature reference memory in older compared to younger pigeons. Following memory acquisition, we used cue-conflict probe trials to provide an initial assessment of possible age-related differences in cue preference. Both younger and older pigeons displayed a similarly modest preference for feature over spatial cues. Copyright © 2014 Elsevier B.V. All rights reserved.
An integrated method for cancer classification and rule extraction from microarray data
Huang, Liang-Tsung
2009-01-01
Different microarray techniques have recently been used successfully to investigate useful information for cancer diagnosis at the gene expression level, due to their ability to measure thousands of gene expression levels in a massively parallel way. One important issue is to improve the classification performance on microarray data. However, it would be ideal if influential genes and even interpretable rules could be explored at the same time to offer biological insight. Introducing concepts of system design from software engineering, this paper presents an integrated and effective method (named X-AI) for accurate cancer classification and the acquisition of knowledge from DNA microarray data. This method includes a feature selector to systematically extract the relatively important genes so as to reduce the dimension and retain as much of the class discriminatory information as possible. Next, diagonal quadratic discriminant analysis (DQDA) is combined to classify tumors, and generalized rule induction (GRI) is integrated to establish association rules which can give an understanding of the relationships between cancer classes and related genes. Two non-redundant datasets of acute leukemia were used to validate the proposed X-AI, showing significantly high accuracy for discriminating different classes. On the other hand, I have presented the ability of X-AI to extract relevant genes, as well as to develop interpretable rules. Further, a web server has been established for cancer classification and it is freely available at . PMID:19272192
Recursive Feature Extraction in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-14
ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
Robust image features: concentric contrasting circles and their image extraction
NASA Astrophysics Data System (ADS)
Gatrell, Lance B.; Hoff, William A.; Sklair, Cheryl W.
1992-03-01
Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given and then a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle image feature has the advantages of being easily manufactured, easily extracted from the image, robust extraction (true targets are found, while few false targets are found), it is a passive feature, and its centroid is completely invariant to the three translational and one rotational degrees of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated on a visually challenging background of a specular but wrinkled surface similar to a multilayered insulation spacecraft thermal blanket.
Deep Learning Methods for Underwater Target Feature Extraction and Recognition
Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang
2018-01-01
The classification and recognition of underwater acoustic signals have always been important research topics in the field of underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on CNN and ELM is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, combined with an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their function mainly relies on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal, so an extreme learning machine (ELM) was used in the classification stage. First, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier to conduct the classification. Experiments on an actual dataset of civil ships obtained a 93.04% recognition rate; compared to traditional Mel frequency cepstral coefficients and Hilbert-Huang features, the recognition rate is greatly improved. PMID:29780407
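The ELM classification stage admits a compact numpy sketch: a fixed random hidden layer followed by a single least-squares solve for the output weights. The feature matrix here is random, standing in for the CNN features; hidden-layer size and activation are assumptions.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=128, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = y.max() + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                # random projection
        T = np.eye(n_classes)[y]                        # one-hot targets
        self.beta = np.linalg.pinv(H) @ T               # least-squares solve
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 64))          # stand-in for CNN features
y = (X[:, 0] > 0).astype(int)               # toy two-class labels
print((ELM().fit(X, y).predict(X) == y).mean())
```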
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are feature descriptors widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be implemented in one pixel period by these computing units.
Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.
Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi
2016-09-13
Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.
Using input feature information to improve ultraviolet retrieval in neural networks
NASA Astrophysics Data System (ADS)
Sun, Zhibin; Chang, Ni-Bin; Gao, Wei; Chen, Maosi; Zempila, Melina
2017-09-01
In neural networks, the training/predicting accuracy and algorithm efficiency can be improved significantly via accurate input feature extraction. In this study, some spatial features of several important factors in retrieving surface ultraviolet (UV) are extracted. An extreme learning machine (ELM) is used to retrieve the surface UV of 2014 in the continental United States, using the extracted features. The results show that more input weights can improve the learning capacity of neural networks.
A Hybrid Neural Network and Feature Extraction Technique for Target Recognition.
Target features are extracted, and the extracted data are evaluated in an artificial neural network to identify a target at a location within the image scene from which the different viewing angles extend.
NASA Astrophysics Data System (ADS)
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
2017-09-01
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
Classification of product inspection items using nonlinear features
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.; Lee, H.-W.
1998-03-01
Automated processing and classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. This approach involves two main steps: preprocessing and classification. Preprocessing locates individual items and segments ones that touch using a modified watershed algorithm. The second stage involves extraction of features that allow discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper. We use a new nonlinear feature extraction scheme called the maximum representation and discriminating feature (MRDF) extraction method to compute nonlinear features that are used as inputs to a classifier. The MRDF is shown to provide better classification and a better ROC (receiver operating characteristic) curve than other methods.
A harmonic linear dynamical system for prominent ECG feature extraction.
Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc
2014-01-01
Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis on multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, empirical evaluation results demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.
Wang, Nizhuan; Chang, Chunqi; Zeng, Weiming; Shi, Yuhu; Yan, Hongjie
2017-01-01
Independent component analysis (ICA) has been widely used in functional magnetic resonance imaging (fMRI) data analysis to evaluate functional connectivity of the brain; however, ICA still has some limitations in simultaneously handling neuroimaging datasets with diverse acquisition parameters, e.g., different repetition times, different scanners, etc. It is therefore difficult for the traditional ICA framework to handle ever-growing large neuroimaging datasets effectively. In this research, a novel feature-map based ICA framework (FMICA) was proposed to address the aforementioned deficiencies, aimed at exploring brain functional networks (BFNs) at different scales, e.g., the first level (individual subject level), second level (intragroup level of subjects within a certain dataset) and third level (intergroup level of subjects across different datasets), based only on the feature maps extracted from the fMRI datasets. The FMICA was presented as a hierarchical framework, which effectively combines ICA and constrained ICA into a whole to identify the BFNs from the feature maps. The simulated and real experimental results demonstrated that FMICA has an excellent ability to identify the intergroup BFNs and to characterize subject-specific and group-specific differences of BFNs from the independent component feature maps, which sharply reduce the size of the fMRI datasets. Compared with traditional ICAs, FMICA as a more generalized framework can efficiently and simultaneously identify the variant BFNs at the subject-specific, intragroup, intragroup-specific and intergroup levels, implying that FMICA is able to handle big neuroimaging datasets in neuroscience research.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the regions of interest. Next, we extract palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples are used.
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
Popescu, Dan; Ichim, Loretta; Stoican, Florin
2017-02-23
Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem, based on a hybrid network and complex image processing. As the first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, acting as a remote coordinator, which communicates via the internet with several Ground Data Terminals that form a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes: fixed-wing UAVs. In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results of a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms.
NASA Astrophysics Data System (ADS)
Acconcia, Giulia; Cominelli, Alessandro; Peronio, Pietro; Rech, Ivan; Ghioni, Massimo
2017-05-01
The analysis of optical signals by means of Single Photon Avalanche Diodes (SPADs) has attracted widespread interest in recent years, and the development of multichannel high-performance Time Correlated Single Photon Counting (TCSPC) acquisition systems has advanced rapidly. Concerning detector performance, best-in-class results have been obtained by resorting to custom technologies, which also leads to a strong dependence of the detector timing jitter on the threshold used to determine the onset of the photogenerated current flow. In this scenario, the avalanche current pick-up circuit plays a key role in determining the timing performance of the TCSPC acquisition system, especially with a large array of SPAD detectors, because of electrical crosstalk issues. We developed a new current pick-up circuit based on a transimpedance amplifier structure, able to extract the timing information from a 50-μm-diameter custom-technology SPAD with a state-of-the-art timing jitter as low as 32 ps, and suitable for use with SPAD arrays. In this paper we discuss the key features of this structure and present a new version of the pick-up circuit that also provides quenching capabilities in order to minimize the number of interconnections required, an aspect that becomes more and more crucial in densely integrated systems.
A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.
Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan
2017-12-01
A method was developed to automatically recognize the anatomical site and image acquisition view in 2D X-ray images used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization in image processing tasks including 2D-2D image registration, 2D image contrast enhancement, and independent treatment site confirmation. X-ray images of 180 patients across six disease sites (brain, head-neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two orthogonal-view images per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on a support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
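One node of such a hierarchical model can be sketched as a PCA-plus-SVM pipeline in scikit-learn; the image size, component count and labels below are toy assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.standard_normal((120, 4096))     # toy flattened X-ray images
y = rng.integers(0, 2, size=120)         # e.g., brain vs. non-brain

node = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
node.fit(X, y)
print(node.score(X, y))                  # training accuracy of this node
```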
High-Resolution Remote Sensing Image Building Extraction Based on Markov Model
NASA Astrophysics Data System (ADS)
Zhao, W.; Yan, L.; Chang, Y.; Gong, L.
2018-04-01
With the increase of resolution, remote sensing images carry a higher information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in building extraction precision, accuracy and completeness.
Non-negative matrix factorization in texture feature for classification of dementia with MRI data
NASA Astrophysics Data System (ADS)
Sarwinda, D.; Bustamam, A.; Ardaneswari, G.
2017-07-01
This paper investigates the application of non-negative matrix factorization as a feature selection method to select features from the gray level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray level co-occurrence matrix is used for feature extraction. In the feature extraction process on MRI data, we obtained seven features from the gray level co-occurrence matrix. Non-negative matrix factorization selected the three most influential of the features produced by feature extraction. A Naïve Bayes classifier is adapted to classify dementia, i.e. Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for the classification of Alzheimer's versus normal control. The proposed method is also compared with other feature selection methods, i.e. Principal Component Analysis (PCA).
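One plausible reading of the selection step is sketched below: factorize the non-negative feature matrix with NMF and keep the original features with the largest total loadings, then feed them to a Naïve Bayes classifier. The paper's exact selection rule may differ, and the data here are toy stand-ins.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
X = rng.random((90, 7))                  # 90 scans x 7 GLCM features (>= 0)
y = rng.integers(0, 3, size=90)          # AD / MCI / normal control (toy)

nmf = NMF(n_components=3, init="nndsvda", max_iter=500).fit(X)
# Score each original feature by its total weight across NMF components.
scores = nmf.components_.sum(axis=0)
selected = np.argsort(scores)[-3:]       # keep the 3 most influential
print("selected feature indices:", selected)

clf = GaussianNB().fit(X[:, selected], y)
print(clf.score(X[:, selected], y))
```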
Use of volumetric features for temporal comparison of mass lesions in full field digital mammograms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bozek, Jelena, E-mail: jelena.bozek@fer.hr; Grgic, Mislav; Kallenberg, Michiel
2014-02-15
Purpose: Temporal comparison of lesions might improve classification between benign and malignant lesions in full-field digital mammograms (FFDM). The authors compare the use of volumetric features for lesion classification, which are computed from dense tissue thickness maps, to the use of mammographic lesion area. Use of dense tissue thickness maps for lesion characterization is advantageous, since it results in lesion features that are invariant to acquisition parameters. Methods: The dataset used in the analysis consisted of 60 temporal mammogram pairs comprising 120 mediolateral oblique or craniocaudal views with a total of 65 lesions, of which 41 were benign and 24 malignant. The authors analyzed the performance of four volumetric features, area, and four other commonly used features obtained from temporal mammogram pairs, current mammograms, and prior mammograms. The authors evaluated the individual performance of all features and of different feature sets. The authors used linear discriminant analysis with leave-one-out cross validation to classify different feature sets. Results: Volumetric features from temporal mammogram pairs achieved the best individual performance, as measured by the area under the receiver operating characteristic curve (Az value). Volume change (Az = 0.88) achieved a higher Az value than projected lesion area change (Az = 0.78) in the temporal comparison of lesions. Best performance was achieved with a set that consisted of features extracted from the current exam combined with four volumetric features representing changes with respect to the prior mammogram (Az = 0.90). This was significantly better (p = 0.005) than the performance obtained using features from the current exam only (Az = 0.77). Conclusions: Volumetric features from temporal mammogram pairs combined with features from the single exam significantly improve discrimination of benign and malignant lesions in FFDM mammograms compared to using only single-exam features. In the comparison with prior mammograms, use of volumetric change may lead to better performance than use of lesion area change.
A Statistical Texture Feature for Building Collapse Information Extraction from SAR Images
NASA Astrophysics Data System (ADS)
Li, L.; Yang, H.; Chen, Q.; Liu, X.
2018-04-01
Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed building information, due to its extreme versatility and almost all-weather, day-and-night working capability. In view of the fact that the inherent statistical distribution of speckle in SAR images is not normally used to extract collapsed building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target and thereby extract collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of object texture, but is also applicable to extracting collapsed building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake acquired on April 21, 2010 are used to present and analyse the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is also analysed, which provides decision support for data selection in collapsed building information extraction.
Novel Features for Brain-Computer Interfaces
Woon, W. L.; Cichocki, A.
2007-01-01
While conventional approaches to BCI feature extraction are based on the power spectrum, we have experimented with nonlinear features for classifying BCI data. In this paper, we report our test results and findings, which indicate that the proposed method is a potentially useful addition to current feature extraction techniques. PMID:18364991
ERIC Educational Resources Information Center
Paquot, Magali
2017-01-01
This study investigated French and Spanish EFL (English as a foreign language) learners' preferred use of three-word lexical bundles with discourse or stance-oriented function with a view to exploring the role of first language (L1) frequency effects in foreign language acquisition. Word combinations were extracted from learner performance data…
ERIC Educational Resources Information Center
Morsey, Christopher
2017-01-01
In the critical infrastructure world, many critical infrastructure sectors use a Supervisory Control and Data Acquisition (SCADA) system. The sectors that use SCADA systems are the electric power, nuclear power and water. These systems are used to control, monitor and extract data from the systems that give us all the ability to light our homes…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
Purpose: The purpose of this research is to investigate which texture features extracted from FDG-PET images by the gray-level co-occurrence matrix (GLCM) have a higher prognostic value than the others. Methods: 21 non-small cell lung cancer (NSCLC) patients were enrolled in the study. Patients underwent 18F-FDG PET/CT scans both pre-treatment and post-treatment. First, the tumors were extracted by our in-house software. Second, the clinical features, including the maximum SUV and tumor volume, were extracted with MIM Vista software, and texture features, including angular second moment, contrast, inverse difference moment, entropy and correlation, were extracted using MATLAB. The differences were calculated by subtracting pre-treatment features from post-treatment features. Finally, SPSS software was used to obtain the Pearson correlation coefficients and Spearman rank correlation coefficients between the change ratios of texture features and the change ratios of clinical features. Results: The Pearson and Spearman rank correlation coefficients between contrast and maximum SUV were 0.785 and 0.709. The Pearson and Spearman values between inverse difference moment and tumor volume were 0.953 and 0.942. Conclusion: This preliminary study showed that the relationships between different texture features and the same clinical feature differ, and that the prognostic value of contrast and inverse difference moment was higher than that of the other three GLCM textures.
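The five GLCM texture features named above (angular second moment, contrast, inverse difference moment, entropy, correlation) can be reproduced with standard open-source tools rather than MATLAB; a minimal sketch, assuming scikit-image >= 0.19 and a 2-D tumor ROI quantized to 32 gray levels (the quantization level is illustrative):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(roi, levels=32):
        # Quantize the ROI to a small number of gray levels to keep the GLCM dense
        q = (roi.astype(np.float64) / roi.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=levels, symmetric=True, normed=True)
        # 'ASM' = angular second moment; 'homogeneity' = inverse difference moment
        feats = {p: graycoprops(glcm, p).mean()
                 for p in ('ASM', 'contrast', 'homogeneity', 'correlation')}
        # Entropy is not built into graycoprops, so compute it from the matrix itself
        feats['entropy'] = -np.sum(glcm * np.log2(glcm + 1e-12), axis=(0, 1)).mean()
        return feats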
Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat
2015-06-01
Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is a branch of traditional Chinese medicine that is receiving increasing research attention, yet large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted the color histogram feature from images of Xinjiang Uygur herbal and zooid medicines. First, we performed preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted the color histogram feature and analyzed it with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy in Uygur medicine image classification is obtained using the color histogram feature. This study should aid content-based medical image retrieval for Xinjiang Uygur medicine.
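A joint color histogram of the kind described above is straightforward to compute; a minimal sketch, assuming images are loaded as H x W x 3 uint8 RGB arrays (the function name and bin count are illustrative):

    import numpy as np

    def color_histogram(img, bins=8):
        # Joint RGB histogram with 8 bins per channel -> a 512-dimensional feature
        hist, _ = np.histogramdd(img.reshape(-1, 3).astype(np.float64),
                                 bins=(bins,) * 3, range=((0, 256),) * 3)
        h = hist.ravel()
        return h / h.sum()  # normalize so the feature is invariant to image size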
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
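For reference, the maximum margin criterion sketched above is commonly written as follows (a standard formulation from the MMC literature, not quoted from this paper), where S_b and S_w are the between-class and within-class scatter matrices and W is the projection matrix:

    J(W) = \operatorname{tr}\left( W^{\top} (S_b - S_w)\, W \right)

Maximizing J(W) yields projection directions given by the leading eigenvectors of S_b - S_w; the paper's variants additionally impose statistical-uncorrelatedness or orthogonality constraints on those directions.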
NASA Astrophysics Data System (ADS)
Jiang, Shan; Wang, Fang; Shen, Luming; Liao, Guiping; Wang, Lin
2017-03-01
Spectrum technology has been widely used in non-destructive crop testing and diagnosis for crop information acquisition. Since a spectrum covers a wide range of bands, it is critically important to extract the sensitive bands. In this paper, we propose a methodology to extract the sensitive spectral bands of rapeseed using multiscale multifractal detrended fluctuation analysis. The obtained sensitive bands are relatively robust in the range of 534 nm-574 nm. Further, using the multifractal parameter (Hurst exponent) of the extracted sensitive bands, we propose a prediction model to forecast soil and plant analyzer development (SPAD) values, often used as an indicator of chlorophyll content, and an identification model to distinguish different planting patterns. Three vegetation indices (VIs) from previous work are used for comparison. Three evaluation indicators employed in the SPAD prediction model, namely the root mean square error, the correlation coefficient, and the relative error, all demonstrate that our Hurst exponent has the best performance. Four rapeseed compound planting factors, namely seeding method, planting density, fertilizer type, and weed control method, are considered in the identification model. The Youden indices calculated by the random decision forest method and the K-nearest neighbor method show that our Hurst exponent is superior to the other three VIs, and to their combination, for the seeding-method factor. In addition, there is no significant difference among the five features for the other three planting factors. This interesting finding suggests that transplanting and direct seeding make a big difference in the growth of rapeseed.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47.304-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT... equipment, wharf, or specified freight station near contractor's plant; or (2) f.o.b. destination. (c) In... management features, in that they— (1) Permit use of transit privileges (see 47.305-13); (2) Permit...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47.304-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT... equipment, wharf, or specified freight station near contractor's plant; or (2) f.o.b. destination. (c) In... management features, in that they— (1) Permit use of transit privileges (see 47.305-13); (2) Permit...
Capability of geometric features to classify ships in SAR imagery
NASA Astrophysics Data System (ADS)
Lang, Haitao; Wu, Siwen; Lai, Quan; Ma, Li
2016-10-01
Ship classification in synthetic aperture radar (SAR) imagery has become a new hotspot in the remote sensing community for its valuable potential in many maritime applications. Several kinds of ship features, such as geometric features, polarimetric features, and scattering features, have been widely applied to ship classification tasks. Compared with polarimetric and scattering features, which are subject to SAR parameters (e.g., sensor type, incidence angle, polarization) and environmental factors (e.g., sea state, wind, wave, current), geometric features are relatively independent of SAR and environmental factors and can be extracted stably from SAR imagery. In this paper, the capability of geometric features to classify ships in SAR imagery at various resolutions is investigated. Firstly, the relationship between geometric feature extraction accuracy and SAR image resolution is analyzed. It is shown that the minimum bounding rectangle (MBR) of a ship can be extracted exactly, in terms of absolute precision, by the proposed automatic ship-sea segmentation method. Next, six simple but effective geometric features are extracted to build a ship representation for the subsequent classification task. These six geometric features comprise length (f1), width (f2), area (f3), perimeter (f4), elongatedness (f5) and compactness (f6). Among them, the two basic features, length (f1) and width (f2), are directly extracted from the MBR of the ship, and the other four are derived from those two basic features. The capability of the utilized geometric features to classify ships is validated on two data sets with different image resolutions. The results show that the performance of ship classification by geometric features alone is close to that of state-of-the-art methods, which use a combination of multiple kinds of features, including scattering and geometric features, after a complex feature selection process.
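The four derived features follow directly from the two MBR measurements and the segmented mask; a minimal sketch, noting that the abstract does not give exact definitions of elongatedness and compactness, so common formulations are assumed:

    import math

    def ship_features(length, width, area, perimeter):
        f1, f2, f3, f4 = length, width, area, perimeter
        f5 = f1 / f2                      # elongatedness: assumed length/width ratio
        f6 = 4 * math.pi * f3 / f4 ** 2   # compactness: assumed isoperimetric ratio
        return [f1, f2, f3, f4, f5, f6]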
Question analysis for Indonesian comparative question
NASA Astrophysics Data System (ADS)
Saelan, A.; Purwarianti, A.; Widyantoro, D. H.
2017-01-01
Information seeking is one of today's basic human needs, and comparing things with a search engine takes more time than searching for a single thing. In this paper, we analyze comparative questions for a comparative question answering system. A comparative question is a question that compares two or more entities. We grouped comparative questions into five types: selection between mentioned entities, selection between unmentioned entities, selection between any entities, comparison, and yes-or-no questions. We then extracted four types of information from comparative questions: entity, aspect, comparison, and constraint. We built classifiers for the classification task and the information extraction task. The features used for the classification task are bag of words, whereas for information extraction we used the lexical form of the current word, the lexical forms of the two previous and two following words, and the previous label as features. We tried two scenarios: classification first and extraction first. For classification first, we used the classification result as a feature for extraction; conversely, for extraction first, we used the extraction results as features for classification. We found that the results were better when extraction was performed before classification. For the extraction task, classification using SMO gave the best result (88.78%), while for classification it was better to use naïve Bayes (82.35%).
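The bag-of-words classification stage can be sketched with scikit-learn; a minimal sketch using a naive Bayes classifier as in the paper (the Indonesian example questions and their labels are hypothetical):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical training questions with their comparative-question types
    questions = ["mana yang lebih murah, laptop A atau laptop B?",
                 "apakah kereta lebih cepat daripada bus?"]
    types = ["selection between mentioned entities", "yes or no question"]

    clf = make_pipeline(CountVectorizer(), MultinomialNB())  # bag of words + naive Bayes
    clf.fit(questions, types)
    print(clf.predict(["mana yang lebih baik, teh atau kopi?"]))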
Nonlinear features for classification and pose estimation of machined parts from single views
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-10-01
A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.
Information based universal feature extraction
NASA Astrophysics Data System (ADS)
Amiri, Mohammad; Brause, Rüdiger
2015-02-01
In many real-world image-based pattern recognition tasks, the extraction and use of task-relevant features is the most crucial part of the diagnosis. In the standard approach, features mostly remain task-specific, although humans who perform such tasks always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon-information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects such as hand-written digits. This gives a good starting point and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that in our case we could indeed extract features that are valid in all three kinds of tasks.
NASA Astrophysics Data System (ADS)
Liu, Xiaoqi; Wang, Chengliang; Bai, Jianying; Liao, Guobin
2018-02-01
Portal hypertensive gastropathy (PHG) is common in gastrointestinal (GI) diseases, and the severe stage of PHG (S-PHG) is a source of active gastrointestinal bleeding. Generally, the diagnosis of PHG is made visually during endoscopic examination; compared with traditional endoscopy, wireless capsule endoscopy (WCE), being noninvasive and painless, has become a prevalent tool for visual observation of PHG. However, accurate assessment of WCE images with PHG is a difficult task for physicians due to faint contrast and confusing variations in the background gastric mucosal tissue. Therefore, this paper proposes a comprehensive methodology to automatically detect S-PHG images in WCE video to help physicians accurately diagnose S-PHG. Firstly, a rough dominant color-tone extraction approach is proposed to better describe the global color distribution information of the gastric mucosa. Secondly, a hybrid two-layer texture acquisition model is designed by integrating a co-occurrence matrix into the local binary pattern to depict the complex and distinctive local variation of the gastric mucosal microstructure. Finally, the mucosal color and microstructure texture features are merged into a linear support vector machine to accomplish this automatic classification task. Experiments were carried out on an annotated data set of 1,050 S-PHG and 1,370 normal images collected from 36 real patients of different nationalities, ages and genders. In comparison with three traditional texture extraction methods, our method performs best in the detection of S-PHG images in WCE video: the maxima of accuracy, sensitivity and specificity reach 0.90, 0.92 and 0.92, respectively.
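The local-binary-pattern layer of such a texture model can be sketched with scikit-image; a minimal sketch of a plain uniform-LBP histogram on a grayscale WCE frame — the paper's integration of the co-occurrence matrix into LBP is not reproduced here:

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray, P=8, R=1):
        # Uniform LBP yields integer codes in [0, P+1]
        lbp = local_binary_pattern(gray, P, R, method='uniform')
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
        return hist / hist.sum()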
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on the fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature values of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and GLCM fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the accuracy of classification based on this method was significantly improved. PMID:28640181
Houshyarifar, Vahid; Chehel Amirani, Mehdi
2016-08-12
In this paper we present a method to predict Sudden Cardiac Arrest (SCA) using higher-order spectral (HOS) and linear time-domain features extracted from the heart rate variability (HRV) signal. Predicting the occurrence of SCA is important in order to reduce the probability of Sudden Cardiac Death (SCD). This work attempts prediction five minutes before SCA onset. The method consists of four steps: pre-processing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and the HRV signal is extracted. In the second step, bispectrum features of the HRV signal and time-domain features are obtained: six features are extracted from the bispectrum and two from the time domain. In the next step, these features are reduced to one feature by the linear discriminant analysis (LDA) technique. Finally, kNN and support vector machine-based classifiers are used to classify the HRV signals. We used two databases, the MIT/BIH Sudden Cardiac Death (SCD) Database and the PhysioBank Normal Sinus Rhythm (NSR) Database. In this work we achieved prediction of SCD occurrence six minutes before SCA with an accuracy of over 91%.
Automated Image Registration Using Morphological Region of Interest Feature Extraction
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2005-01-01
With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration; multi-temporal imagery; mathematical morphology; robust feature matching.
NASA Astrophysics Data System (ADS)
Patil, Sandeep Baburao; Sinha, G. R.
2017-02-01
In India, limited awareness of deaf and hard-of-hearing people widens the communication gap between the deaf and hearing communities. Sign languages are developed for deaf and hard-of-hearing people to convey messages by generating different sign patterns. The scale-invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale-invariant feature transform to extract distinctive features from Indian Sign Language (ISL) gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.
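SIFT keypoint detection and description are available in OpenCV; a minimal sketch, assuming OpenCV >= 4.4 (where SIFT sits in the main module) and a hypothetical gesture image file:

    import cv2

    img = cv2.imread('isl_gesture.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(len(keypoints), descriptors.shape)  # N keypoints, each a 128-dim descriptor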
Brain tumor initiating cells adapt to restricted nutrition through preferential glucose uptake.
Flavahan, William A; Wu, Qiulian; Hitomi, Masahiro; Rahim, Nasiha; Kim, Youngmi; Sloan, Andrew E; Weil, Robert J; Nakano, Ichiro; Sarkaria, Jann N; Stringer, Brett W; Day, Bryan W; Li, Meizhang; Lathia, Justin D; Rich, Jeremy N; Hjelmeland, Anita B
2013-10-01
Like all cancers, brain tumors require a continuous source of energy and molecular resources for new cell production. In normal brain, glucose is an essential neuronal fuel, but the blood-brain barrier limits its delivery. We now report that nutrient restriction contributes to tumor progression by enriching for brain tumor initiating cells (BTICs) owing to preferential BTIC survival and to adaptation of non-BTICs through acquisition of BTIC features. BTICs outcompete for glucose uptake by co-opting the high affinity neuronal glucose transporter, type 3 (Glut3, SLC2A3). BTICs preferentially express Glut3, and targeting Glut3 inhibits BTIC growth and tumorigenic potential. Glut3, but not Glut1, correlates with poor survival in brain tumors and other cancers; thus, tumor initiating cells may extract nutrients with high affinity. As altered metabolism represents a cancer hallmark, metabolic reprogramming may maintain the tumor hierarchy and portend poor prognosis.
Brain Tumor Initiating Cells Adapt to Restricted Nutrition through Preferential Glucose Uptake
Flavahan, William A.; Wu, Qiulian; Hitomi, Masahiro; Rahim, Nasiha; Kim, Youngmi; Sloan, Andrew E.; Weil, Robert J.; Nakano, Ichiro; Sarkaria, Jann N.; Stringer, Brett W.; Day, Bryan W.; Li, Meizhang; Lathia, Justin D.; Rich, Jeremy N.; Hjelmeland, Anita B.
2013-01-01
Like all cancers, brain tumors require a continuous source of energy and molecular resources for new cell production. In normal brain, glucose is an essential neuronal fuel, but the blood-brain barrier limits its delivery. We now report that nutrient restriction contributes to tumor progression by enriching for brain tumor initiating cells (BTICs) due to preferential BTIC survival and adaptation of non-BTICs through acquisition of BTIC features. BTICs outcompete for glucose uptake by co-opting the high affinity neuronal glucose transporter, type 3 (Glut3, SLC2A3). BTICs preferentially express Glut3 and targeting Glut3 inhibits BTIC growth and tumorigenic potential. Glut3, but not Glut1, correlates with poor survival in brain tumors and other cancers; thus, TICs may extract nutrients with high affinity. As altered metabolism represents a cancer hallmark, metabolic reprogramming may instruct the tumor hierarchy and portend poor prognosis. PMID:23995067
[Computers in biomedical research: I. Analysis of bioelectrical signals].
Vivaldi, E A; Maldonado, P
2001-08-01
A personal computer equipped with an analog-to-digital conversion card is able to input, store and display signals of biomedical interest. These signals can additionally be submitted to ad-hoc software for analysis and diagnosis. Data acquisition is based on sampling a signal at a given rate and amplitude resolution. The automation of signal processing involves syntactic aspects (data transduction, conditioning and reduction) and semantic aspects (feature extraction to describe and characterize the signal, and diagnostic classification). The analytical approach that underlies computer programming allows for the successful resolution of apparently complex tasks. Two basic principles involved are the definition of simple fundamental functions that are then iterated, and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects relevant elements for the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
NASA Astrophysics Data System (ADS)
Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter
2018-05-01
In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty beam correction, we obtain high quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and posterior quantification of the data.
Detection of maize kernels breakage rate based on K-means clustering
NASA Astrophysics Data System (ADS)
Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping
2017-04-01
In order to optimize the recognition accuracy of maize kernel breakage detection and improve its efficiency, this paper applies computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated by the energy function of the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel-acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The image clarity evaluation and the different shooting angles verify that the clarity and shooting angle of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
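The color-space conversion and clustering steps can be sketched as follows; a minimal sketch, assuming broken and intact kernel pixels separate by color in Lab space (the cluster count and file name are illustrative):

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    bgr = cv2.imread('kernels.png')                    # hypothetical kernel image
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)         # RGB -> Lab, as in the paper
    pixels = lab.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(pixels)
    mask = labels.reshape(lab.shape[:2])               # e.g., background / intact / broken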
NASA Astrophysics Data System (ADS)
Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao
2018-01-01
Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradients descriptor is used to extract features. After encoding the image with a learned dictionary, the 2D Viterbi algorithm is applied to infer the saliency map. This model can predict fixation on the targets and further creates robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention. Moreover, it indicates the plausibility of utilizing visual track data to identify targets.
NASA Astrophysics Data System (ADS)
Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi
2016-10-01
The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize a material by its elemental properties inspired this research, which developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consists of two parts: automatic feature extraction and classification. For the automatic feature extraction part, an algorithm for extracting the discriminant features of FE-SEM/EDX data from images and spectra of cervical cells is introduced. The system automatically extracts two types of features, based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features are extracted from the FE-SEM/EDX image using a gray level co-occurrence matrix technique, while the FE-SEM/EDX spectral features are calculated from peak heights and the corrected area under the peaks. A discriminant analysis technique is employed to predict the cervical precancerous stage in three classes: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity were 98.2%, 99.0%, and 98.0%, respectively.
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions, such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough transform. This feature extraction has many applications, among which is image registration.
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
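The spline fitting and peak-based feature extraction can be sketched with SciPy; a minimal sketch that substitutes scipy.signal.find_peaks for the paper's second-derivative peak detector (the names and smoothing factor are illustrative):

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.signal import find_peaks

    def waveform_features(t, amp, smooth=1.0):
        spline = UnivariateSpline(t, amp, s=smooth)       # cubic smoothing spline (k=3)
        tt = np.linspace(t[0], t[-1], 10 * len(t))
        y = spline(tt)
        peaks, _ = find_peaks(y)
        p = peaks[np.argmax(y[peaks])]                    # strongest echo
        half = y[p] / 2.0                                 # FWHM threshold
        below_l = np.where(y[:p] <= half)[0]
        below_r = np.where(y[p:] <= half)[0]
        left = tt[below_l[-1]] if below_l.size else tt[0]
        right = tt[p + below_r[0]] if below_r.size else tt[-1]
        return {'n_peaks': len(peaks), 'amplitude': y[p], 'fwhm': right - left}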
Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed
2018-02-06
Prostate cancer is the second leading cause of cancer deaths among men. Early detection can effectively reduce the mortality caused by prostate cancer. The high resolution and multiresolution nature of prostate MRIs require proper diagnostic systems and tools. In the past, researchers developed computer-aided diagnosis (CAD) systems to help radiologists detect abnormalities. In this research paper, we have employed machine learning techniques, namely a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF) and Gaussian) and decision trees, for detecting prostate cancer. Moreover, different feature extraction strategies are proposed to improve detection performance. The feature extraction strategies are based on texture, morphological, scale-invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. Performance was evaluated on single features as well as combinations of features using these machine learning classification techniques. Cross-validation (jack-knife k-fold) was performed, and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and false positive rate (FPR). With single-feature extraction strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999; with combined feature extraction strategies, the SVM Gaussian kernel with texture + morphological and EFD + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems equipped with laser scanners, named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate them. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic-window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-01-01
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems equipped with laser scanners, named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate them. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic-window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar is being used ever more widely in remote sensing because of its all-time, all-weather operation, and feature extraction from high-resolution SAR images has become a hot research topic. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First of all, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram-function method respectively, with direction information taken into account in this process. Next, feature weights are calculated according to the Bhattacharyya distance. Then all features are fused by weighting. At last, the fused image is classified with the K-means method and built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; at the same time, two comparison experiments based on the statistical-texture method alone and the structural-texture method alone were carried out. On the basis of qualitative analysis, quantitative analysis based on manually selected built-up areas shows that in the relatively simple experimental area the detection rate is more than 90%, and in the relatively complex experimental area the detection rate is also higher than that of the other two methods. In the study area, the results show that this method can effectively and accurately extract built-up areas in high-resolution airborne SAR imagery.
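The Bhattacharyya-distance weighting step can be sketched as follows; a minimal sketch, assuming univariate Gaussian class-conditional feature distributions (the abstract does not specify the exact estimator used):

    import numpy as np

    def bhattacharyya_gauss(m1, v1, m2, v2):
        # Bhattacharyya distance between two univariate Gaussians N(m1, v1) and N(m2, v2)
        return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
                + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

    def feature_weights(builtup_feats, other_feats):
        # One weight per feature column: more separable features weigh more in the fusion
        d = np.array([bhattacharyya_gauss(a.mean(), a.var(), b.mean(), b.var())
                      for a, b in zip(builtup_feats.T, other_feats.T)])
        return d / d.sum()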
2014-11-07
[Garbled table fragment from the source record; recoverable entries include student projects on Moringa seed extract as a novel coagulant for water treatment (TGA) and a keratin hydrogel as a novel adsorbent (DSC).]
A method for automatic feature points extraction of human vertebrae three-dimensional model
NASA Astrophysics Data System (ADS)
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of a human vertebra three-dimensional model is presented. Firstly, a statistical model of vertebra feature points is established based on the results of manual feature point extraction. Then, anatomical axial analysis of the vertebra model is performed according to the physiological and morphological characteristics of vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebra model to be processed is established. According to the projection relationship, the statistical model is matched with the vertebra model to obtain the estimated positions of the feature points. Finally, by analyzing the curvature in a spherical neighborhood around the estimated position of each feature point, its final position is obtained. According to benchmark results on multiple test models, the mean relative errors of the feature point positions are less than 5.98%. At more than half of the positions the error is less than 3%, and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
Motion Control of Drives for Prosthetic Hand Using Continuous Myoelectric Signals
NASA Astrophysics Data System (ADS)
Purushothaman, Geethanjali; Ray, Kalyan Kumar
2016-03-01
In this paper the authors present motion control of a prosthetic hand through continuous myoelectric signal acquisition, classification and actuation of the prosthetic drive. Four-channel continuous electromyogram (EMG) signals, also known as myoelectric signals (MES), are acquired from able-bodied subjects to classify six unique movements of the hand and wrist, viz. hand open (HO), hand close (HC), wrist flexion (WF), wrist extension (WE), ulnar deviation (UD) and radial deviation (RD). The classification technique involves extracting the features/patterns through statistical time-domain (TD) parameters and/or autoregressive (AR) coefficients, which are reduced using principal component analysis (PCA). The reduced statistical TD features and/or AR coefficients are used to classify the signal patterns with a k-nearest neighbour (kNN) classifier as well as a neural network (NN) classifier, and the performance of the two classifiers is compared. The comparison clearly shows that the kNN classifier is better than the NN classifier at identifying the intended motion hidden in the myoelectric signals. Once the classifier identifies the intended motion, the signal is amplified to actuate three low-power DC motors to perform the above-mentioned movements.
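The statistical time-domain parameters can be sketched with NumPy; a minimal sketch of the widely used Hudgins-style TD set, since the abstract does not list its exact parameters (the dead-zone threshold is illustrative):

    import numpy as np

    def td_features(x, thresh=0.01):
        mav = np.mean(np.abs(x))                 # mean absolute value
        wl = np.sum(np.abs(np.diff(x)))          # waveform length
        dx = np.diff(x)
        zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > thresh))   # zero crossings
        ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                     ((np.abs(dx[:-1]) > thresh) |
                      (np.abs(dx[1:]) > thresh)))                   # slope sign changes
        return np.array([mav, wl, zc, ssc])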
A New Dusts Sensor for Cultural Heritage Applications Based on Image Processing
Proietti, Andrea; Leccese, Fabio; Caciotta, Maurizio; Morresi, Fabio; Santamaria, Ulderico; Malomo, Carmela
2014-01-01
In this paper, we propose a new sensor for the detection and analysis of dusts (seen as powders and fibers) in indoor environments, especially designed for applications in the field of Cultural Heritage or in other contexts where the presence of dust requires special care (surgery, clean rooms, etc.). The presented system relies on image processing techniques (enhancement, noise reduction, segmentation, metrics analysis) and it allows obtaining both qualitative and quantitative information on the accumulation of dust. This information aims to identify the geometric and topological features of the elements of the deposit. The curators can use this information in order to design suitable prevention and maintenance actions for objects and environments. The sensor consists of simple and relatively cheap tools, based on a high-resolution image acquisition system, a preprocessing software to improve the captured image and an analysis algorithm for the feature extraction and the classification of the elements of the dust deposit. We carried out some tests in order to validate the system operation. These tests were performed within the Sistine Chapel in the Vatican Museums, showing the good performance of the proposed sensor in terms of execution time and classification accuracy. PMID:24901977
Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping
2018-04-26
With the application of various data acquisition devices, large amounts of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images is proposed. First, we introduce a new density-based clustering method to identify stopovers from migratory birds' movement data and generate classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and label them as positive samples if they overlap with stopovers. Second, a multi-convolution neural network model is proposed for extracting features from temperature data and remote sensing images, respectively. A support vector machine (SVM) model is then used to combine the features and predict the final classification results. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results indicate that the proposed method outperforms the existing baseline methods and achieves good performance in habitat suitability prediction.
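The stopover-identification step can be approximated with off-the-shelf density-based clustering; a minimal sketch using scikit-learn's DBSCAN with a haversine metric as a stand-in for the paper's new clustering method (eps and min_samples are illustrative):

    import numpy as np
    from sklearn.cluster import DBSCAN

    def find_stopovers(latlon_deg, eps_km=5.0, min_pts=10):
        pts = np.radians(latlon_deg)               # haversine metric expects radians
        db = DBSCAN(eps=eps_km / 6371.0,           # km -> radians via Earth radius
                    min_samples=min_pts, metric='haversine').fit(pts)
        return db.labels_                          # -1 = travelling fix, >=0 = stopover id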
Extraction of linear features on SAR imagery
NASA Astrophysics Data System (ADS)
Liu, Junyi; Li, Deren; Mei, Xin
2006-10-01
Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast ratio edge detector with a constant probability of false alarm. On the other hand, the Hough transform is an elegant way of extracting global features such as curve segments from binary edge images. The randomized Hough transform (RHT) can reduce the computation time and memory usage of the HT drastically; however, random sampling in the RHT produces a great number of invalid accumulator cells. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost automatic algorithm based on edge detection and the randomized Hough transform. The presented improved method makes full use of the directional information of each candidate edge point so as to solve the invalid accumulation problem. Applied results are in good agreement with the theoretical study, and the main linear features in SAR imagery are extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.
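The core randomized Hough transform loop for lines can be sketched as follows; a minimal sketch (all parameters are illustrative) that shows where invalid accumulations arise: two randomly sampled edge points often do not lie on any real line, yet they still vote for a (rho, theta) cell:

    import numpy as np
    from collections import defaultdict

    def rht_lines(edge_points, iters=5000, rho_step=1.0,
                  theta_step=np.pi / 180, min_votes=30):
        pts = np.asarray(edge_points, dtype=float)
        acc = defaultdict(int)
        rng = np.random.default_rng(0)
        for _ in range(iters):
            (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
            theta = np.arctan2(x1 - x2, y2 - y1)   # direction of the line normal
            rho = x1 * np.cos(theta) + y1 * np.sin(theta)
            acc[(round(rho / rho_step), round(theta / theta_step))] += 1
        return [(r * rho_step, t * theta_step)
                for (r, t), v in acc.items() if v >= min_votes]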
NASA Astrophysics Data System (ADS)
Jiang, Li; Xuan, Jianping; Shi, Tielin
2013-12-01
Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.
Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram
Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi
2016-01-01
Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171
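For reference, Correlated Kurtosis is commonly defined as follows (the standard formulation from the deconvolution literature on which such work builds, not quoted from this paper), where y_n is the filtered signal, T the fault period in samples, and M the number of period shifts:

    CK_M(T) = \frac{\sum_{n=1}^{N} \left( \prod_{m=0}^{M} y_{n-mT} \right)^{2}}{\left( \sum_{n=1}^{N} y_{n}^{2} \right)^{M+1}}

Unlike plain kurtosis, which rewards any isolated spike, CK only scores high when large values recur at the period T, which is why it suits the cyclic transients of bearing faults.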
Wang, Jinjia; Zhang, Yanna
2015-02-01
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method in which a multivariate autoregressive (MVAR) model is combined with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. Firstly, we calculate the MVAR model coefficient matrix of the MEG/EEG signals, and then reduce its dimensionality using MPCA. Finally, brain signals are recognized with a Bayes classifier. The key innovation of our investigation is that we extended the traditional single-channel feature extraction method to the multichannel case. We then carried out experiments using data sets IV-III and IV-I. The experimental results proved that the method proposed in this paper is feasible.
Does Whole-Word Multimedia Software Support Literacy Acquisition?
ERIC Educational Resources Information Center
Karemaker, Arjette M.; Pitchford, Nicola J.; O'Malley, Claire
2010-01-01
This study examined the extent to which multimedia features of typical literacy learning software provide added benefits for developing literacy skills compared with typical whole-class teaching methods. The effectiveness of the multimedia software Oxford Reading Tree (ORT) for Clicker in supporting early literacy acquisition was investigated…
What’s in a URL? Genre Classification from URLs
2012-01-01
webpages with access to the content of a document and feature extraction from URLs alone. Feature Extraction from Webpages: Stylistic and structural…(2010). Character n-grams (sequences of n characters) are attractive because of their simplicity and because they encapsulate both lexical and stylistic…report might be stylistic. Feature Extraction from URLs: The syntactic characteristics of URLs have been fairly stable over the years. URL terms are
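Character n-gram extraction from a URL of the kind discussed above is simple to sketch; a minimal sketch (the URL and n are illustrative):

    from collections import Counter

    def char_ngrams(url, n=3):
        s = url.lower()
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))

    print(char_ngrams('http://news.example.com/sports/scores'))  # hypothetical URL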
Detection of goal events in soccer videos
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas
2005-01-01
In this paper, we present automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence; 2) detection of candidate highlight events based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM); 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method against the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources; in total we have seven hours of soccer games comprising eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, ambient audience noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
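The MFCC front end can be sketched with librosa; a minimal sketch (file name, sampling rate and coefficient count are illustrative, not taken from the paper):

    import librosa

    audio, sr = librosa.load('match_audio.wav', sr=16000)   # hypothetical audio track
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # one column per frame
    print(mfcc.shape)                                       # (13, n_frames)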
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
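The submatrix-SVD step can be sketched with NumPy; a minimal sketch, assuming the mode matrix (modes x samples) is split along the time axis and each submatrix has at least as many columns as there are modes, so all singular-value vectors have the same length:

    import numpy as np

    def singular_value_matrix(mode_matrix, n_sub=8):
        # Partition the VMD mode matrix into submatrices along the time axis
        subs = np.array_split(mode_matrix, n_sub, axis=1)
        # Each submatrix contributes its singular values as one local feature vector
        return np.stack([np.linalg.svd(s, compute_uv=False) for s in subs])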
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang’s method to segment only the regions of interest. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
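The HOG half of the hybrid descriptor can be sketched with scikit-image as below; the ROI, cell sizes, and the omitted SGF pre-filtering stage are assumptions for illustration.

    import numpy as np
    from skimage.feature import hog

    palm_roi = np.random.rand(128, 128)                  # placeholder segmented ROI
    descriptor = hog(palm_roi,
                     orientations=9,
                     pixels_per_cell=(16, 16),
                     cells_per_block=(2, 2),
                     block_norm="L2-Hys")                # 1-D gradient-histogram vector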
Zhang, Jian; Niu, Xin; Yang, Xue-zhi; Zhu, Qing-wen; Li, Hai-yan; Wang, Xuan; Zhang, Zhi-guo; Sha, Hong
2014-09-01
To design a pulse information acquisition and analysis system with dynamic recognition that captures the parameters of pulse position, pulse number, pulse shape and pulse force, and to study the digitalization and visualization of common cardiovascular mechanisms of the single pulse. Flexible sensors were used to capture the radial artery pressure pulse wave, and high-frequency B-mode ultrasound scanning was used to synchronously obtain radial extension and axial movement information as dynamic images; the gathered information was then analyzed and processed together with the ECG. Finally, a pulse information acquisition and analysis system with visualization and dynamic recognition was established and applied to ten healthy adults. The new system overcomes the limitations of the one-dimensional pulse information acquisition and processing methods commonly used in current research on pulse diagnosis in traditional Chinese medicine, and initiates a new approach to pulse diagnosis featuring dynamic recognition, two-dimensional information acquisition, multiplexed signal combination and deep data mining. The newly developed system can translate pulse signals into digital, visual and measurable vessel motion information.
Automated feature extraction and classification from image sources
1995-01-01
The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements, and helped develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.
Prominent feature extraction for review analysis: an empirical study
NASA Astrophysics Data System (ADS)
Agarwal, Basant; Mittal, Namita
2016-05-01
Sentiment analysis (SA) research has increased tremendously in recent times. SA aims to determine the sentiment orientation of a given text as positive or negative polarity. SA research is motivated by industry's need to know users' opinions about their products from online portals, blogs, discussion boards, reviews and so on. Efficient features need to be extracted for the machine-learning algorithm to achieve better sentiment classification. In this paper, various features are initially extracted from the text, such as unigrams, bi-grams and dependency features. In addition, new bi-tagged features are also extracted that conform to predefined part-of-speech patterns. Furthermore, various composite features are created using these features. Information gain (IG) and minimum redundancy maximum relevancy (mRMR) feature selection methods are used to eliminate noisy and irrelevant features from the feature vector. Finally, machine-learning algorithms are used to classify the review document into the positive or negative class. The effects of different categories of features are investigated on four standard data-sets, namely the movie review and product (book, DVD and electronics) review data-sets. Experimental results show that composite features created from prominent unigram and bi-tagged features perform better than other features for sentiment classification, and that mRMR is a better feature selection method than IG for sentiment classification. The Boolean Multinomial Naïve Bayes algorithm performs better than the support vector machine classifier for SA in terms of both accuracy and execution time.
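A filter-style feature selection step in this spirit can be sketched with scikit-learn, using mutual information as a stand-in for IG (mRMR would need a dedicated package); the toy corpus and the value of k are placeholders.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    texts = ["great plot and fine acting", "dull, predictable story",
             "a moving, well written film", "boring and far too long"]
    labels = [1, 0, 1, 0]                                # positive / negative

    X = CountVectorizer(ngram_range=(1, 2), binary=True).fit_transform(texts)
    X_selected = SelectKBest(mutual_info_classif, k=5).fit_transform(X, labels)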
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei
2015-03-01
A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.
Line fitting based feature extraction for object recognition
NASA Astrophysics Data System (ADS)
Li, Bing
2014-06-01
Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generate hierarchical features. This approach applies line fitting to adaptively divide regions based upon the amount of information, and creates line fitting features for each subsequent region. It overcomes the feature-wasting drawback of the wavelet-based approach and demonstrates high performance in real applications. For grayscale images, we propose a diffusion equation approach that maps information-rich pixels (pixels near edges and ridge pixels) to high values, and pixels in homogeneous regions to small values near zero, forming energy map images. After the energy map images are generated, we propose a line fitting approach to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, thereby avoiding the feature-waste problems of the wavelet approach in homogeneous regions. Finally, experiments on handwritten word recognition show that the new method provides higher performance than the conventional handwritten word recognition approach.
A new EMI system for detection and classification of challenging targets
NASA Astrophysics Data System (ADS)
Shubitidze, F.; Fernández, J. P.; Barrowes, B. E.; O'Neill, K.
2013-06-01
Advanced electromagnetic induction (EMI) sensors currently feature multi-axis illumination of targets and tri-axial vector sensing (e.g., MetalMapper), or exploit multi-static array data acquisition (e.g., TEMTADS). They produce data of high density, quality, and diversity, and have been combined with advanced EMI models to provide superb classification performance relative to the previous generation of single-axis, monostatic sensors. However, these advances have yet to significantly improve our ability to classify small, deep, and otherwise challenging targets. In particular, recent live-site discrimination studies at Camp Butner, NC and Camp Beale, CA have revealed that it is more challenging to detect and discriminate small munitions (with calibers ranging from 20 mm to 60 mm) than larger ones. In addition, a live-site test at the Massachusetts Military Reservation, MA highlighted the difficulty current sensors have in classifying large, deep, and overlapping targets with high confidence. There are two main approaches to overcoming these problems: 1) adapt advanced EMI models to the existing systems, and 2) improve the detection limits of current sensors by modifying their hardware. In this paper we demonstrate a combined software/hardware approach that will provide extended detection range and spatial resolution to next-generation EMI systems; we analyze and invert EMI data to extract classification features for small and deep targets; and we propose a new system that features a large transmitter coil.
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two data types from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, together with high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode distance from the image acquisition position. We use local features to register the iPhone image to the generated range image. In this article, the registration process is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the potential of the proposed algorithm framework for 3D urban map updating and enhancement purposes.
Artificially intelligent recognition of Arabic speaker using voice print-based local features
NASA Astrophysics Data System (ADS)
Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz
2016-11-01
Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. The feature is extracted by taking moving averages along the diagonal directions of the time-frequency plane. It captures time-frequency events that produce a unique pattern for each speaker, which can be viewed as a voice print of the speaker. Hence, we refer to this technique as a voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database consisting of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate, compared to 96.7% for MFCC, on the LDC subset.
NASA Astrophysics Data System (ADS)
Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki
2017-09-01
Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects on optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found to yield high FER rates. Evaluation of the adaptive texture features shows performance competitive with the nonadaptive features and higher than other state-of-the-art approaches.
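For orientation, a uniform LBP histogram with an adjustable neighborhood (the parameter the granulometric analysis would tune) might look as follows in scikit-image; the radius and the image are illustrative.

    import numpy as np
    from skimage.feature import local_binary_pattern

    face = np.random.rand(96, 96)                        # placeholder face region
    radius, n_points = 2, 16                             # neighborhood size to adapt
    codes = local_binary_pattern(face, n_points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)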
NASA Astrophysics Data System (ADS)
Paino, A.; Keller, J.; Popescu, M.; Stone, K.
2014-06-01
In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, S.; Yang, D.
2017-09-01
Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detecting land cover change using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: image segmentation is used to extract morphological features, the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation are used to extract shadows and remove their effects on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
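The two spectral indexes are simple band ratios; a NumPy sketch under assumed reflectance bands is shown below, with an illustrative water threshold.

    import numpy as np

    blue, green, red, nir = (np.random.rand(512, 512) for _ in range(4))  # placeholder bands

    ndwi = (green - nir) / (green + nir + 1e-9)          # water index
    evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1.0)  # vegetation index

    water_mask = ndwi > 0.2                              # threshold is illustrative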
Wireless photoplethysmographic device for heart rate variability signal acquisition and analysis.
Reyes, Ivan; Nazeran, Homer; Franco, Mario; Haltiwanger, Emily
2012-01-01
The photoplethysmographic (PPG) signal has the potential to aid in the acquisition and analysis of the heart rate variability (HRV) signal: a non-invasive quantitative marker of the autonomic nervous system that could be used to assess cardiac health and other physiologic conditions. A low-power wireless PPG device was custom-developed to monitor, acquire and analyze the arterial pulse in the finger. The system consisted of an optical sensor to detect the arterial pulse as variations in reflected light intensity, signal conditioning circuitry to process the reflected light signal, a microcontroller to control PPG signal acquisition, digitization and wireless transmission, and a receiver to collect the transmitted digital data and convert them back to their analog representations. A personal computer was used to further process the captured PPG signals and display them. A MATLAB program was then developed to capture the PPG data, detect the RR peaks, perform spectral analysis of the PPG data, and extract the HRV signal. A user-friendly graphical user interface (GUI) was developed in LabView to display the PPG data and their spectra. The performance of each module (sensing unit, signal conditioning, wireless transmission/reception units, and graphical user interface) was assessed individually, and the device was then tested as a whole. Consequently, PPG data were obtained from five healthy individuals to test the utility of the wireless system. The device was able to reliably acquire the PPG signals from the volunteers. To validate the accuracy of the MATLAB code, RR peak information from each subject was fed into Kubios software as a text file. Kubios generated a report sheet with the time domain and frequency domain parameters of the acquired data, and these were then compared against those calculated by MATLAB. The preliminary results demonstrate that the prototype wireless device could be used to perform HRV signal acquisition and analysis.
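The peak detection and time-domain HRV step can be sketched with SciPy as follows; the sampling rate, refractory distance, and the synthetic record stand in for the device's real MATLAB pipeline.

    import numpy as np
    from scipy.signal import find_peaks

    fs = 100                                             # Hz, assumed PPG sampling rate
    ppg = np.random.rand(60 * fs)                        # placeholder 1-minute record

    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))   # at least 0.4 s between beats
    rr = np.diff(peaks) / fs                             # inter-beat intervals in seconds

    mean_hr = 60.0 / rr.mean()                           # beats per minute
    sdnn = rr.std()                                      # time-domain HRV measure
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))           # short-term HRV measure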
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.
2018-04-01
A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimation of scale parameter) tool is used to optimize image segmentation. The distance matrix and minimum separation distance of all surface feature classes are then calculated through sample selection to find the optimal feature space, which is finally applied to extract images of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. Compared with the traditional object-oriented method, the new method greatly improves extraction accuracy and efficiency, and has good potential for wider application in damaged-building information extraction. In addition, the new method can be used to extract information from images of damaged buildings at different resolutions, and then to seek the optimal observation scale through accuracy evaluation. The results suggest that the optimal observation scale for damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
Are You Experienced? The Case for Acquisition Professional Qualification Standards
2015-10-01
Bilingual First Language Acquisition: Exploring the Limits of the Language Faculty.
ERIC Educational Resources Information Center
Genesee, Fred
2001-01-01
Reviews current research in three domains of bilingual acquisition: pragmatic features of bilingual code mixing, grammatical constraints on child bilingual code mixing, and bilingual syntactic development. Examines implications from these domains for the understanding of the limits of the mental faculty to acquire language. (Author/VWL)
Teaching Sociolinguistic Variation in the Intermediate Language Classroom: "Voseo" in Latin America
ERIC Educational Resources Information Center
Shenk, Elaine M.
2014-01-01
The acquisition of sociolinguistic variation by second language learners has gained increased attention. Some research highlights the value of naturalistic exposure through study abroad while other studies point out that classroom input can facilitate the acquisition of particular features of variation. Nevertheless, said attention to the…
Jones, Drew R; Wu, Zhiping; Chauhan, Dharminder; Anderson, Kenneth C; Peng, Junmin
2014-04-01
Global metabolomics relies on highly reproducible and sensitive detection of a wide range of metabolites in biological samples. Here we report the optimization of metabolome analysis by nanoflow ultraperformance liquid chromatography coupled to high-resolution orbitrap mass spectrometry. Reliable peak features were extracted from the LC-MS runs based on mandatory detection in duplicates and additional noise filtering according to blank injections. The run-to-run variation in peak area showed a median of 14%, and the false discovery rate during a mock comparison was evaluated. To maximize the number of peak features identified, we systematically characterized the effect of sample loading amount, gradient length, and MS resolution. The number of features initially rose and later reached a plateau as a function of sample amount, fitting a hyperbolic curve. Longer gradients improved unique feature detection in part by time-resolving isobaric species. Increasing the MS resolution up to 120000 also aided in the differentiation of near isobaric metabolites, but higher MS resolution reduced the data acquisition rate and conferred no benefits, as predicted from a theoretical simulation of possible metabolites. Moreover, a biphasic LC gradient allowed even distribution of peak features across the elution, yielding markedly more peak features than the linear gradient. Using this robust nUPLC-HRMS platform, we were able to consistently analyze ~6500 metabolite features in a single 60 min gradient from 2 mg of yeast, equivalent to ~50 million cells. We applied this optimized method in a case study of drug (bortezomib) resistant and drug-sensitive multiple myeloma cells. Overall, 18% of metabolite features were matched to KEGG identifiers, enabling pathway enrichment analysis. Principal component analysis and heat map data correctly clustered isogenic phenotypes, highlighting the potential for hundreds of small molecule biomarkers of cancer drug resistance.
Zhang, Xiong; Zhao, Yacong; Zhang, Yu; Zhong, Xuefei; Fan, Zhaowen
2018-01-01
The novel human-computer interface (HCI) using bioelectrical signals as input is a valuable tool to improve the lives of people with disabilities. In this paper, surface electromyography (sEMG) signals induced by four classes of wrist movements were acquired from four sites on the lower arm with our designed system. Forty-two features were extracted from the time, frequency and time-frequency domains. Optimal channels were determined from the single-channel classification performance rank. Optimal features were selected according to a modified entropy criterion (EC) and the Fisher discrimination (FD) criterion. The feature selection results were evaluated by four different classifiers and compared with other conventional feature subsets. In online tests, the wearable system acquired real-time sEMG signals. The selected features and trained classifier model were used to control a telecar through four different paradigms in a designed environment with simple obstacles. Performance was evaluated based on travel time (TT) and recognition rate (RR). The results of the hardware evaluation verified the feasibility of our acquisition system and ensured signal quality. Single-channel analysis indicated that the channel located on the extensor carpi ulnaris (ECU) performed best, with a mean classification accuracy of 97.45% over all movement pairs. Channels placed on the ECU and the extensor carpi radialis (ECR) were selected according to the accuracy rank. Experimental results showed that the proposed FD method was better than other feature selection methods and single-type features. The combination of FD and random forest (RF) performed best in offline analysis, with 96.77% multi-class RR. Online results illustrated that the state-machine paradigm with a 125 ms window had the highest maneuverability and was closest to real-life control. Subjects could accomplish online sessions with the three sEMG-based paradigms in average times of 46.02, 49.06 and 48.08 s, respectively. These experiments validate the feasibility of the proposed real-time wearable HCI system and algorithms, providing a potential assistive device interface for persons with disabilities. PMID:29543737
NASA Astrophysics Data System (ADS)
Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.
2017-03-01
Breast cancer is the leading cause of death for women in most countries. The high mortality relates mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, correct assessment of breast density is important to provide better screening for higher-risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms is the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a conventional machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. These features are then fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, with only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN, so overfitting might have influenced the results even though we cross-validated the network. Thus, although we present a promising method for extracting features and classifying breast density, a larger database is still required for evaluating the results.
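Reading out a deep fully connected layer as a fixed-length feature vector is straightforward in a framework such as Keras; the architecture below is a deliberately small stand-in for the paper's network, with assumed layer names and sizes.

    import numpy as np
    import tensorflow as tf

    inp = tf.keras.Input((260, 200, 1))
    x = tf.keras.layers.Conv2D(16, 5, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D(4)(x)
    x = tf.keras.layers.Flatten()(x)
    feats = tf.keras.layers.Dense(8, activation="relu", name="deep_features")(x)
    out = tf.keras.layers.Dense(4, activation="softmax")(feats)  # four density classes

    cnn = tf.keras.Model(inp, out)
    # ... train `cnn` on labeled mammograms here ...
    extractor = tf.keras.Model(inp, feats)               # 8 features per image
    features = extractor(np.random.rand(1, 260, 200, 1))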
Feature extraction applied to agricultural crops as seen by LANDSAT
NASA Technical Reports Server (NTRS)
Kauth, R. J.; Lambeck, P. F.; Richardson, W.; Thomas, G. S.; Pentland, A. P. (Principal Investigator)
1979-01-01
The physical interpretation of the spectral-temporal structure of LANDSAT data can be conveniently described in terms of a graphic descriptive model called the Tasseled Cap. This model has been a source of development not only in crop-related feature extraction, but also in data screening and haze effects correction. Following its qualitative description and an indication of its applications, the model is used to analyze several feature extraction algorithms.
Optical character recognition with feature extraction and associative memory matrix
NASA Astrophysics Data System (ADS)
Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa
1998-06-01
A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly show the effectiveness of the method.
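The memory-matrix construction can be illustrated numerically: build a least-squares associative mapping by SVD and suppress small singular values before inversion (the "modification" step); the shapes and the cutoff are assumptions.

    import numpy as np

    X = np.random.rand(64, 25)        # 25 stored feature patterns, 64-dim each
    Y = np.eye(25)                    # one-hot recall targets (25 characters)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_inv = np.where(s > 0.1 * s.max(), 1.0 / s, 0.0)    # drop small singular values
    M = Y @ (Vt.T * s_inv) @ U.T                         # memory matrix: M @ x ~ y

    recalled = M @ X[:, 3]                               # should peak at index 3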
Spectral Analysis of Breast Cancer on Tissue Microarrays: Seeing Beyond Morphology
2005-04-01
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification
Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely discrete cosine transform (DCT)- and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT- and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely the Q- and Hotelling's $T^2$ statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
Less is More: How manipulative features affect children’s learning from picture books
Tare, Medha; Chiong, Cynthia; Ganea, Patricia; DeLoache, Judy
2010-01-01
Picture books are ubiquitous in young children’s lives and are assumed to support children’s acquisition of information about the world. Given their importance, relatively little research has directly examined children’s learning from picture books. We report two studies examining children’s acquisition of labels and facts from picture books that vary on two dimensions: iconicity of the pictures and presence of manipulative features (or “pop-ups”). In Study 1, 20-month-old children generalized novel labels less well when taught from a book with manipulative features than from standard picture books without such elements. In Study 2, 30- and 36-month-old children learned fewer facts when taught from a manipulative picture book with drawings than from a standard picture book with realistic images and no manipulative features. The results of the two studies indicate that children’s learning from picture books is facilitated by realistic illustrations, but impeded by manipulative features. PMID:20948970
Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming
2015-01-01
Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult, however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; and (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrate the validity and better performance of the proposed method for urban main road feature extraction. PMID:26397832
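Directional grayscale opening can be sketched with SciPy by taking the pixelwise maximum over openings with line structuring elements at a few orientations, which preserves elongated bright structures such as roads; the line length and the four orientations are illustrative.

    import numpy as np
    from scipy.ndimage import grey_opening

    img = np.random.rand(256, 256)                       # placeholder grayscale tile
    L = 15                                               # line length in pixels
    footprints = [
        np.ones((1, L), bool),                           # 0 degrees
        np.ones((L, 1), bool),                           # 90 degrees
        np.eye(L, dtype=bool),                           # 45 degrees
        np.fliplr(np.eye(L, dtype=bool)),                # 135 degrees
    ]
    enhanced = np.max([grey_opening(img, footprint=fp) for fp in footprints], axis=0)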
Automation of lidar-based hydrologic feature extraction workflows using GIS
NASA Astrophysics Data System (ADS)
Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.
2016-10-01
With the advent of LiDAR technology, higher-resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows that can take researchers a lot of time through manual execution and supervision. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction, specifically Streams and Drainages, Irrigation Network, and Inland Wetlands, using LiDAR datasets. These scripts are written in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently used to aid researchers in hydrologic feature extraction by simplifying the workflows, eliminating human errors when providing the inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of the different workflows developed by Phil-LiDAR 2 Project 4 for Streams, Irrigation Network and Inland Wetlands extraction.
Feature Extraction and Selection Strategies for Automated Target Recognition
NASA Technical Reports Server (NTRS)
Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2010-01-01
Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms, as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
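Both extraction strategies have standard scikit-learn counterparts; the sketch below, with placeholder ROI chips and dimensions, shows the shape of the comparison rather than the JPL pipeline itself.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA
    from sklearn.svm import SVC

    rois = np.random.rand(200, 32 * 32)                  # 200 flattened ROI chips
    labels = np.random.randint(0, 2, 200)                # target / clutter

    pca_features = PCA(n_components=20).fit_transform(rois)
    ica_features = FastICA(n_components=20, max_iter=1000).fit_transform(rois)

    clf = SVC().fit(pca_features, labels)                # downstream classifier stage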
Ensemble methods with simple features for document zone classification
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing
2012-01-01
Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via feature extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classification of these blocks. We present a comparative evaluation of three ensemble-based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images randomly selected from the tobacco legacy document collection. The results verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
The handcrafted features used by traditional vehicle license plate recognition methods are not robust to diverse variations. Moreover, the high dimension of features extracted with the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is trained to recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and running time. Compared with the variant without compressed sensing, the proposed method has a lower feature dimension and correspondingly higher efficiency.
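A very sparse random measurement matrix of the kind used for RIP-style dimension reduction is available in scikit-learn; the sketch below compresses placeholder PCANet-style features before SVM training, with all sizes assumed.

    import numpy as np
    from sklearn.random_projection import SparseRandomProjection
    from sklearn.svm import LinearSVC

    features = np.random.rand(500, 8192)                 # high-dimensional PCANet features
    labels = np.random.randint(0, 34, 500)               # plate character classes

    compressed = SparseRandomProjection(n_components=256).fit_transform(features)
    clf = LinearSVC().fit(compressed, labels)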
Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.
Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa
2017-03-01
Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to achieve high efficiency. The goal of this paper is to select the 3D image features of margin sharpness and texture that can be relevant to the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision on similar nodule retrieval than all 48 extracted features. An 83% reduction of feature space dimensionality yielded higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
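The second-order texture step corresponds to gray-level co-occurrence features, sketched here per 2D slice with scikit-image (graycomatrix/graycoprops naming as of version 0.19); the slice is a placeholder.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    nodule_slice = (np.random.rand(64, 64) * 255).astype(np.uint8)

    glcm = graycomatrix(nodule_slice, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()      # averaged over angles
    homogeneity = graycoprops(glcm, "homogeneity").mean()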
Chinese character recognition based on Gabor feature extraction and CNN
NASA Astrophysics Data System (ADS)
Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan
2018-03-01
As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition remains difficult. To address this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and classifier training. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters at different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the convolution results are the input of the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by Dropout. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
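An eight-orientation Gabor filter bank of the kind described can be sketched with OpenCV; the kernel size and the remaining filter parameters are illustrative.

    import cv2
    import numpy as np

    char = np.float32(np.random.rand(64, 64))            # normalized character image

    feature_maps = []
    for k in range(8):                                   # eight orientations
        theta = k * np.pi / 8
        kern = cv2.getGaborKernel((15, 15), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        feature_maps.append(cv2.filter2D(char, cv2.CV_32F, kern))

    stack = np.stack(feature_maps)                       # (8, 64, 64) input tensor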
Human listening studies reveal insights into object features extracted by echolocating dolphins
NASA Astrophysics Data System (ADS)
Delong, Caroline M.; Au, Whitlow W. L.; Roitblat, Herbert L.
2004-05-01
Echolocating dolphins extract object feature information from the acoustic parameters of object echoes. However, little is known about which object features are salient to dolphins or how they extract those features. To gain insight into how dolphins might be extracting feature information, human listeners were presented with echoes from objects used in a dolphin echoic-visual cross-modal matching task. Human participants performed a task similar to the one the dolphin had performed; however, echoic samples consisting of 23-echo trains were presented via headphones. The participants listened to the echoic sample and then visually selected the correct object from among three alternatives. The participants performed as well as or better than the dolphin (M=88.0% correct), and reported using a combination of acoustic cues to extract object features (e.g., loudness, pitch, timbre). Participants frequently reported using the pattern of aural changes in the echoes across the echo train to identify the shape and structure of the objects (e.g., peaks in loudness or pitch). It is likely that dolphins also attend to the pattern of changes across echoes as objects are echolocated from different angles.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
An approach for automatic classification of grouper vocalizations with passive acoustic monitoring.
Ibrahim, Ali K; Chérubin, Laurent M; Zhuang, Hanqi; Schärer Umpierre, Michelle T; Dalgleish, Fraser; Erdol, Nurgun; Ouyang, B; Dalgleish, A
2018-02-01
Grouper, a family of marine fishes, produce distinct vocalizations associated with their reproductive behavior during spawning aggregation. These low-frequency sounds (50-350 Hz) consist of a series of pulses repeated at a variable rate. In this paper, an approach is presented for automatic classification of grouper vocalizations from ambient sounds recorded in situ with fixed hydrophones, based on weighted features and a sparse classifier. Grouper sounds were initially labeled by humans for training and testing various feature extraction and classification methods. In the feature extraction phase, four types of features were used to characterize the sounds produced by groupers. Once the sound features were extracted, three types of representative classifiers were applied to categorize the species that produced these sounds. Experimental results showed that the best combination, the weighted mel-frequency cepstral coefficient feature extractor with the sparse classifier, achieved an overall identification accuracy of 82.7%. The proposed algorithm has been implemented in an autonomous platform (wave glider) for real-time detection and classification of grouper vocalizations.
NASA Astrophysics Data System (ADS)
Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent
2017-03-01
Quantitative imaging biomarkers are widely used in clinical trials for tracking and evaluating medical interventions. Previously, we presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features, such as stroke lesion characteristics, from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.
Deductive reasoning, brain maturation, and science concept acquisition: Are they linked?
NASA Astrophysics Data System (ADS)
Lawson, Anton E.
The present study tested the alternative hypotheses that the poor performance of the intuitive and transitional students on the concept acquisition tasks employed in the Lawson et al. (1991) study was due either to their failure (a) to use deductive reasoning to test potentially relevant task features, as suggested by Lawson et al. (1991); (b) to identify potentially relevant features; or (c) to derive and test a successful problem-solving strategy. To test these hypotheses a training session, which consisted of a series of seven concept acquisition tasks, was designed to reveal to students key task features and the deductive reasoning pattern necessary to solve the tasks. The training was individually administered to students (ages 5-14 years). Results revealed that none of the five- and six-year-olds, approximately half of the seven-year-olds, and virtually all of the students eight years and older responded successfully to the training. These results are viewed as contradictory to the hypothesis that the intuitive and transitional students in the Lawson et al. (1991) study lacked the reasoning skills necessary to identify and test potentially relevant task features. Instead, the results support the hypothesis that their poor performance was due to their failure to use hypothetico-deductive reasoning to derive an effective strategy. Previous research is cited that indicates that the brain's frontal lobes undergo a pronounced growth spurt from about four years of age to about seven years of age. In fact, the performance of normal six-year-olds and adults with frontal lobe damage on tasks such as the Wisconsin Card Sorting Task (WCST), a task similar in many ways to the present concept acquisition tasks, has been found to be identical. Consequently, the hypothesis is advanced that maturation of the frontal lobes can explain the striking improvement in performance at age seven. A neural network of the role of the frontal lobes in task performance based upon the work of Levine and Prueitt (1989) is presented. The advance in reasoning that presumably results from effective operation of the frontal lobes is seen as a fundamental advance in intellectual development because it enables children to employ an inductive-deductive reasoning pattern to change their minds when confronted with contradictory evidence regarding features of perceptible objects, a skill necessary for descriptive concept acquisition. It is suggested that a further qualitative advance in intellectual development occurs when an analogous pattern of abductive-deductive reasoning is applied to hypothetical objects and/or processes to allow for alternative hypothesis testing and theoretical concept acquisition. Apparently this is the reasoning pattern needed to derive an effective problem-solving strategy to solve the concept acquisition tasks of Lawson et al. (1991) when direct instruction is not provided. Implications for the science classroom are suggested.
Interpreting Definiteness in a Second Language without Articles: The Case of L2 Russian
ERIC Educational Resources Information Center
Cho, Jacee; Slabakova, Roumyana
2014-01-01
This article investigates the second language (L2) acquisition of two expressions of the semantic feature [definite] in Russian, a language without articles, by English and Korean native speakers. Within the Feature Reassembly approach (Lardiere, 2009), Slabakova (2009) has argued that reassembling features that are represented overtly in the…
Feature extraction from multiple data sources using genetic programming
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.
2002-08-01
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
Supporting the Growing Needs of the GIS Industry
NASA Technical Reports Server (NTRS)
2003-01-01
Visual Learning Systems, Inc. (VLS), of Missoula, Montana, has developed a commercial software application called Feature Analyst. Feature Analyst was conceived under a Small Business Innovation Research (SBIR) contract with NASA's Stennis Space Center, and through the Montana State University TechLink Center, an organization funded by NASA and the U.S. Department of Defense to link regional companies with Federal laboratories for joint research and technology transfer. The software provides a paradigm shift to automated feature extraction, as it utilizes spectral, spatial, temporal, and ancillary information to model the feature extraction process; presents the ability to remove clutter; incorporates advanced machine learning techniques to supply unparalleled levels of accuracy; and includes an exceedingly simple interface for feature extraction.
Automation and Robotics in the Laboratory.
ERIC Educational Resources Information Center
DiCesare, Frank; And Others
1985-01-01
A general laboratory course featuring microcomputer interfacing for data acquisition, process control and automation, and robotics was developed at Rensselaer Polytechnic Institute and is now available to all junior engineering students. The development and features of the course are described. (JN)
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
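The object-linking step lends itself to a short illustration. The following is a hedged sketch of the description above, not the authors' code: feature centroids detected in successive 2D slices are joined by a directed edge whenever the next slice holds a feature within a threshold radius; the function name and data layout are our own.

```python
# Sketch of object linking: features in slice k link to features in slice
# k+1 that lie within a threshold radius, forming a directed graph whose
# chains trace 3D objects (e.g., a pipe) through the volume.
import math

def link_features(slices, radius=2.0):
    """slices: list of lists of (x, y) feature centroids, one list per slice.
    Returns directed edges ((k, i) -> (k+1, j)) between linked features."""
    edges = []
    for k in range(len(slices) - 1):
        for i, (x1, y1) in enumerate(slices[k]):
            for j, (x2, y2) in enumerate(slices[k + 1]):
                if math.hypot(x2 - x1, y2 - y1) <= radius:
                    edges.append(((k, i), (k + 1, j)))
    return edges

pipe_like = [[(10.0, 5.0)], [(10.4, 5.2)], [(10.9, 5.1)]]
print(link_features(pipe_like))   # two links forming one 3D track
```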
Nonlinear features for product inspection
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1999-03-01
Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.
A mechatronics platform to study prosthetic hand control using EMG signals.
Geethanjali, P
2016-09-01
In this paper, a low-cost mechatronics platform for the design and development of robotic hands as well as a surface electromyogram (EMG) pattern recognition system is proposed. This paper also explores various EMG classification techniques using a low-cost electronics system in prosthetic hand applications. The proposed platform involves the development of a four channel EMG signal acquisition system; pattern recognition of acquired EMG signals; and development of a digital controller for a robotic hand. Four-channel surface EMG signals, acquired from ten healthy subjects for six different movements of the hand, were used to analyse pattern recognition in prosthetic hand control. Various time domain features were extracted and grouped into five ensembles to compare the influence of features in feature-selective classifiers (SLR) with widely considered non-feature-selective classifiers, such as neural networks (NN), linear discriminant analysis (LDA) and support vector machines (SVM) applied with different kernels. The results divulged that the average classification accuracy of the SVM, with a linear kernel function, outperforms other classifiers with feature ensembles, Hudgin's feature set and auto regression (AR) coefficients. However, the slight improvement in classification accuracy of SVM incurs more processing time and memory space in the low-level controller. The Kruskal-Wallis (KW) test also shows that there is no significant difference in the classification performance of SLR with Hudgin's feature set to that of SVM with Hudgin's features along with AR coefficients. In addition, the KW test shows that SLR was found to be better in respect to computation time and memory space, which is vital in a low-level controller. Similar to SVM, with a linear kernel function, other non-feature selective LDA and NN classifiers also show a slight improvement in performance using twice the features but with the drawback of increased memory space requirement and time. This prototype facilitated the study of various issues of pattern recognition and identified an efficient classifier, along with a feature ensemble, in the implementation of EMG controlled prosthetic hands in a laboratory setting at low-cost. This platform may help to motivate and facilitate prosthetic hand research in developing countries.
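For readers unfamiliar with "Hudgin's feature set", it conventionally denotes the four time-domain statistics of Hudgins et al.: mean absolute value (MAV), zero crossings (ZC), slope sign changes (SSC), and waveform length (WL). A minimal sketch follows, with an assumed noise threshold parameter:

```python
# Hudgins' time-domain features for one EMG analysis window.
import numpy as np

def hudgins_features(window, thresh=0.0):
    x = np.asarray(window, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                                   # mean absolute value
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > thresh))  # zero crossings
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &                      # slope sign changes
                 ((np.abs(dx[:-1]) > thresh) | (np.abs(dx[1:]) > thresh)))
    wl = np.sum(np.abs(dx))                                    # waveform length
    return np.array([mav, zc, ssc, wl])

# Four channels -> a 16-element feature vector per window, the kind of
# ensemble fed to the SLR / LDA / NN / SVM classifiers compared above.
rng = np.random.default_rng(0)
print(hudgins_features(rng.standard_normal(256), thresh=0.01))
```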
Feature extraction inspired by V1 in visual cortex
NASA Astrophysics Data System (ADS)
Lv, Chao; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Xin, Peng; Zhu, Mingning; Ma, Hongqiang
2018-04-01
Target feature extraction plays an important role in pattern recognition. It is the most complicated activity in the brain mechanism of biological vision. Inspired by high properties of primary visual cortex (V1) in extracting dynamic and static features, a visual perception model was raised. Firstly, 28 spatial-temporal filters with different orientations, half-squaring operation and divisive normalization were adopted to obtain the responses of V1 simple cells; then, an adjustable parameter was added to the output weight so that the response of complex cells was got. Experimental results indicate that the proposed V1 model can perceive motion information well. Besides, it has a good edge detection capability. The model inspired by V1 has good performance in feature extraction and effectively combines brain-inspired intelligence with computer vision.
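Half-squaring and divisive normalization have standard textbook forms, sketched below; the pooling rule and the semi-saturation constant sigma are our assumptions, since the abstract does not fix them.

```python
# V1-inspired nonlinearities: half-squaring rectification of linear filter
# responses, then divisive normalization across the filter population.
import numpy as np

def half_square(responses):
    return np.maximum(responses, 0.0) ** 2

def divisive_normalize(responses, sigma=0.1):
    # Each unit is divided by sigma^2 plus the pooled activity of all filters.
    pooled = responses.sum(axis=0, keepdims=True)
    return responses / (sigma ** 2 + pooled)

# 28 oriented spatial-temporal filter responses for one image patch
rng = np.random.default_rng(1)
linear = rng.standard_normal((28, 64, 64))
simple_cells = divisive_normalize(half_square(linear))
```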
Task Demands Control Acquisition and Storage of Visual Information
ERIC Educational Resources Information Center
Droll, Jason A.; Hayhoe, Mary M.; Triesch, Jochen; Sullivan, Brian T.
2005-01-01
Attention and working memory limitations set strict limits on visual representations, yet researchers have little appreciation of how these limits constrain the acquisition of information in ongoing visually guided behavior. Subjects performed a brick sorting task in a virtual environment. A change was made to 1 of the features of the brick being…
Multinode data acquisition and control system for the 4-element TACTIC telescope array
NASA Astrophysics Data System (ADS)
Yadav, K. K.; Chouhan, N.; Kaul, S. R.; Koul, R.
2002-03-01
An interrupt-driven multinode data acquisition and control system has been developed for the 4-element gamma-ray telescope array, TACTIC. Computer networking technology and the CAMAC bus have been integrated to develop this icon-based, user-friendly, failsafe system. The paper describes the salient features of the system.
Variogram-based feature extraction for neural network recognition of logos
NASA Astrophysics Data System (ADS)
Pham, Tuan D.
2003-03-01
This paper presents a new approach for extracting spatial features of images based on the theory of regionalized variables. These features can be effectively used for automatic recognition of logo images using neural networks. Experimental results on a public-domain logo database show the effectiveness of the proposed approach.
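The statistic behind the theory of regionalized variables is the empirical semivariogram, gamma(h) = (1/2N(h)) * sum over the N(h) pixel pairs at lag h of (z_i - z_j)^2. A minimal sketch, simplified to row-wise lags (the paper's exact lag geometry may differ):

```python
# Empirical semivariogram along image rows for integer pixel lags; the
# vector gamma(1..max_lag) serves as a spatial-texture feature for the logo.
import numpy as np

def row_semivariogram(image, max_lag=10):
    z = np.asarray(image, dtype=float)
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = z[:, h:] - z[:, :-h]          # all row-wise pairs at lag h
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma
```

The resulting lag profile is what would then be fed to the neural network classifier.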
The neural basis for novel semantic categorization.
Koenig, Phyllis; Smith, Edward E; Glosser, Guila; DeVita, Chris; Moore, Peachie; McMillan, Corey; Gee, Jim; Grossman, Murray
2005-01-15
We monitored regional cerebral activity with BOLD fMRI during acquisition of a novel semantic category and subsequent categorization of test stimuli by a rule-based strategy or a similarity-based strategy. We observed different patterns of activation in direct comparisons of rule- and similarity-based categorization. During rule-based category acquisition, subjects recruited anterior cingulate, thalamic, and parietal regions to support selective attention to perceptual features, and left inferior frontal cortex to help maintain rules in working memory. Subsequent rule-based categorization revealed anterior cingulate and parietal activation while judging stimuli whose conformity with the rules was readily apparent, and left inferior frontal recruitment during judgments of stimuli whose conformity was less apparent. By comparison, similarity-based category acquisition showed recruitment of anterior prefrontal and posterior cingulate regions, presumably to support successful retrieval of previously encountered exemplars from long-term memory, and bilateral temporal-parietal activation for perceptual feature integration. Subsequent similarity-based categorization revealed temporal-parietal, posterior cingulate, and anterior prefrontal activation. These findings suggest that large-scale networks support relatively distinct categorization processes during the acquisition and judgment of semantic category knowledge.
Telkemeyer, Silke; Rossi, Sonja; Nierhaus, Till; Steinbrink, Jens; Obrig, Hellmuth; Wartenburger, Isabell
2010-01-01
Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory-evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations, indicating a sensitivity to temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in research on language acquisition. PMID:21716574
A high-efficiency real-time digital signal averager for time-of-flight mass spectrometry.
Wang, Yinan; Xu, Hui; Li, Qingjiang; Li, Nan; Huang, Zhengxu; Zhou, Zhen; Liu, Husheng; Sun, Zhaolin; Xu, Xin; Yu, Hongqi; Liu, Haijun; Li, David D-U; Wang, Xi; Dong, Xiuzhen; Gao, Wei
2013-05-30
Analog-to-digital converter (ADC)-based acquisition systems are widely applied in time-of-flight mass spectrometers (TOFMS) due to their ability to record the signal intensity of all ions within the same pulse. However, the acquisition system raises the requirement for data throughput, along with increasing the conversion rate and resolution of the ADC. It is therefore of considerable interest to develop a high-performance real-time acquisition system, which can relieve the limitation of data throughput. We present in this work a high-efficiency real-time digital signal averager, consisting of a signal conditioner, a data conversion module and a signal processing module. Two optimization strategies are implemented using field programmable gate arrays (FPGAs) to enhance the efficiency of the real-time processing. A pipeline procedure is used to reduce the time consumption of the accumulation strategy. To realize continuous data transfer, a high-efficiency transmission strategy is developed, based on a ping-pong procedure. The digital signal averager features good responsiveness, analog bandwidth and dynamic performance. The optimal effective number of bits reaches 6.7 bits. For a 32 µs record length, the averager can realize 100% efficiency with an extraction frequency below 31.23 kHz by modifying the number of accumulation steps. In unit time, the averager yields superior signal-to-noise ratio (SNR) compared with data accumulation in a computer. The digital signal averager is combined with a vacuum ultraviolet single-photon ionization time-of-flight mass spectrometer (VUV-SPI-TOFMS). The efficiency of the real-time processing is tested by analyzing the volatile organic compounds (VOCs) from ordinary printed materials. In these experiments, 22 kinds of compounds are detected, and the dynamic range exceeds 3 orders of magnitude. Copyright © 2013 John Wiley & Sons, Ltd.
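The accumulation and ping-pong transfer strategies can be illustrated in software, though the paper's implementation lives in FPGA fabric; the class below is our own illustrative analogue, not the authors' design: one buffer accumulates integer ADC records while the other is free for readout, and the roles swap after each completed average.

```python
# Software analogue of pipelined accumulation with ping-pong buffers for
# averaging repeated TOF records into one spectrum.
import numpy as np

class PingPongAverager:
    def __init__(self, record_length, n_accumulations):
        self.buffers = [np.zeros(record_length, dtype=np.int64) for _ in range(2)]
        self.active = 0                 # buffer currently accumulating
        self.count = 0
        self.n_acc = n_accumulations

    def add_record(self, record):
        """Accumulate one digitized TOF record (integer ADC samples); return
        a finished average and swap buffers every n_accumulations records,
        else None."""
        self.buffers[self.active] += record
        self.count += 1
        if self.count == self.n_acc:
            done = self.buffers[self.active] / self.n_acc
            self.buffers[self.active][:] = 0
            self.active ^= 1            # swap: the other buffer now accumulates
            self.count = 0
            return done
        return None
```

While one buffer is being read out for transfer, the other keeps accumulating, which is what allows continuous, gap-free data transfer in the hardware version.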
Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V
2018-04-17
Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and the limitations of both algorithms are further discussed.
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands that all the features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few studies investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information contained in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of the variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
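Combination approach ii) can be sketched with off-the-shelf components. The snippet below is a hedged illustration, not the authors' pipeline: PCA-based, NMF-based, and raw-feature classifiers vote on the final label; Haralick features and the paper's evaluation protocol are omitted, and the data shapes are invented for the example.

```python
# Majority-voting ensemble over different feature extraction front-ends.
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((80, 500)))   # 80 "neuroimages"; nonnegative for NMF
y = rng.integers(0, 2, size=80)              # AD vs control labels (placeholder)

ensemble = VotingClassifier(
    estimators=[
        ("pca_svm", Pipeline([("pca", PCA(n_components=10)), ("svm", SVC())])),
        ("nmf_svm", Pipeline([("nmf", NMF(n_components=10, max_iter=500)), ("svm", SVC())])),
        ("raw_svm", SVC()),                  # third voter avoids two-way ties
    ],
    voting="hard",                           # majority vote on predicted labels
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```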
Icy Soil Acquisition Device for the 2007 Phoenix Mars Lander
NASA Technical Reports Server (NTRS)
Chu, Philip; Wilson, Jack; Davis, Kiel; Shiraishi, Lori; Burke, Kevin
2008-01-01
The Icy Soil Acquisition Device is a first of its kind mechanism that is designed to acquire ice-bearing soil from the surface of the Martian polar region and transfer the samples to analytical instruments, playing a critical role in the potential discovery of existing water on Mars. The device incorporates a number of novel features that further the state of the art in spacecraft design for harsh environments, sample acquisition and handling, and high-speed low torque mechanism design.
Lo, P; Young, S; Kim, H J; Brown, M S; McNitt-Gray, M F
2016-08-01
To investigate the effects of dose level and reconstruction method on density and texture based features computed from CT lung nodules. This study had two major components. In the first component, a uniform water phantom was scanned at three dose levels and images were reconstructed using four conventional filtered backprojection (FBP) and four iterative reconstruction (IR) methods for a total of 24 different combinations of acquisition and reconstruction conditions. In the second component, raw projection (sinogram) data were obtained for 33 lung nodules from patients scanned as a part of their clinical practice, where low dose acquisitions were simulated by adding noise to sinograms acquired at clinical dose levels (a total of four dose levels) and reconstructed using one FBP kernel and two IR kernels for a total of 12 conditions. For the water phantom, spherical regions of interest (ROIs) were created at multiple locations within the water phantom on one reference image obtained at a reference condition. For the lung nodule cases, the ROI of each nodule was contoured semiautomatically (with manual editing) from images obtained at a reference condition. All ROIs were applied to their corresponding images reconstructed at different conditions. For 17 of the nodule cases, repeat contours were performed to assess repeatability. Histogram (eight features) and gray level co-occurrence matrix (GLCM) based texture features (34 features) were computed for all ROIs. For the lung nodule cases, the reference condition was selected to be 100% of clinical dose with FBP reconstruction using the B45f kernel; feature values calculated from other conditions were compared to this reference condition. A measure, referred to as Q, was introduced to assess the stability of features across different conditions; it is defined as the ratio of reproducibility (across conditions) to repeatability (across repeat contours) of each feature. The water phantom results demonstrated substantial variability among feature values calculated across conditions, with the exception of histogram mean. Features calculated from lung nodules demonstrated similar results, with histogram mean as the most robust feature (Q ≤ 1), having a mean and standard deviation Q of 0.37 and 0.22, respectively. Surprisingly, histogram standard deviation and variance features were also quite robust. Some GLCM features were also quite robust across conditions, namely, diff. variance, sum variance, sum average, variance, and mean. Except for histogram mean, all features have a Q larger than one in at least one of the 3% dose level conditions. As expected, histogram mean is the most robust feature in this study. The effects of acquisition and reconstruction conditions on GLCM features vary widely, though, barring a few exceptions, features involving the summation of products between intensities and probabilities tend to be more robust. Overall, variation in density and texture features should be taken into account if a variety of dose and reconstruction conditions are used for the quantification of lung nodules in CT; otherwise, changes in quantification results may be more reflective of changes due to acquisition and reconstruction conditions than in the nodule itself.
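The stability measure Q can be written down directly, under the loose assumption that both reproducibility (spread across acquisition/reconstruction conditions) and repeatability (spread across repeat contours) are summarized as standard deviations; the paper's exact estimators may differ:

```python
# Q = reproducibility across conditions / repeatability across repeat
# contours; Q <= 1 suggests the feature is robust to acquisition changes.
import numpy as np

def stability_Q(values_across_conditions, values_across_recontours):
    reproducibility = np.std(values_across_conditions, ddof=1)
    repeatability = np.std(values_across_recontours, ddof=1)
    return reproducibility / repeatability

# Illustrative histogram-mean values (HU) for one nodule
conditions = np.array([-630.1, -629.8, -631.0, -628.9, -630.5, -629.4])
recontours = np.array([-630.2, -629.5, -630.8])
print(stability_Q(conditions, recontours))
```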
DARHT Multi-intelligence Seismic and Acoustic Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Garrison Nicole; Van Buren, Kendra Lu; Hemez, Francois M.
The purpose of this report is to document the analysis of seismic and acoustic data collected at the Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory for robust, multi-intelligence decision making. The data utilized herein are obtained from two tri-axial seismic sensors and three acoustic sensors, resulting in a total of nine data channels. The goal of this analysis is to develop a generalized, automated framework to determine internal operations at DARHT using informative features extracted from measurements collected external to the facility. Our framework involves four components: (1) feature extraction, (2) data fusion, (3) classification, and finally (4) robustness analysis. Two approaches are taken for extracting features from the data. The first of these, generic feature extraction, involves extraction of statistical features from the nine data channels. The second approach, event detection, identifies specific events relevant to traffic entering and leaving the facility as well as explosive activities at DARHT and nearby explosive testing sites. Event detection is completed using a two-stage method, first utilizing signatures in the frequency domain to identify outliers and second extracting short duration events of interest among these outliers by evaluating residuals of an autoregressive exogenous time series model. Features extracted from each data set are then fused to perform analysis with a multi-intelligence paradigm, where information from multiple data sets is combined to generate more information than is available through analysis of each independently. The fused feature set is used to train a statistical classifier and predict the state of operations to inform a decision maker. We demonstrate this classification using both generic statistical features and event detection and provide a comparison of the two methods. Finally, the concept of decision robustness is presented through a preliminary analysis where uncertainty is added to the system through noise in the measurements.
A neural joint model for entity and relation extraction from biomedical text.
Li, Fei; Zhang, Meishan; Fu, Guohong; Ji, Donghong
2017-03-31
Extracting biomedical entities and their relations from text has important applications in biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Considerable feature engineering is needed when feature-based models are employed. Moreover, pipeline models may suffer from error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities and their relations simultaneously, which can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performance with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate research on biomedical text mining.
Image standards in tissue-based diagnosis (diagnostic surgical pathology).
Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian
2008-04-18
Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange require image standards to be applied in tissue-based diagnosis. To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human-diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL 7 including history and previous examinations, information of image display hardware and software, of image resolution and fields of view, of relation between sizes of biological objects and image sizes, and of access to archives and retrieval. Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus, brightness, and quality evaluation procedures), display resolution data, implemented image formats, storage, cycle frequency, backup procedures, operation system, and external system accessibility. The lowest third level describes the permitted limits and threshold in detail. At present, an applicable standard including all mentioned features does not exist to our knowledge; some aspects can be taken from radiological standards (PACS, DICOM 3); others require specific solutions or are not covered yet. The progress in virtual microscopy and application of artificial intelligence (AI) in tissue-based diagnosis demands fast preparation and implementation of an internationally acceptable standard. The described hierarchic order as well as analytic investigation in all potentially necessary aspects and details offers an appropriate tool to specifically determine standardized requirements.
Mid-Infrared Spectroscopy of Carbon Stars in the Small Magellanic Cloud
2006-07-10
[Abstract garbled in the source scan; the recoverable fragments indicate that the images were cleaned with the imclean software package before spectra were extracted, that a variety of spectral feature shapes (including an MgS dust feature) were fitted, and that molecular bands and the SiC dust feature near 24 μm were extracted from the spectra.]
NASA Astrophysics Data System (ADS)
Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun
2012-10-01
Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier; this paper proposes an improved method for matching features between successive video frames that uses a neural network to reduce the computation time of matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. Each extracted feature is assigned a distance based on the Kinect technology, which the robot can use to determine its navigation path and for obstacle detection applications.
Fast and Efficient Feature Engineering for Multi-Cohort Analysis of EHR Data.
Ozery-Flato, Michal; Yanover, Chen; Gottlieb, Assaf; Weissbrod, Omer; Parush Shear-Yashuv, Naama; Goldschmidt, Yaara
2017-01-01
We present a framework for feature engineering, tailored for longitudinal structured data, such as electronic health records (EHRs). To fast-track feature engineering and extraction, the framework combines general-use plug-in extractors, a multi-cohort management mechanism, and modular memoization. Using this framework, we rapidly extracted thousands of features from diverse and large healthcare data sources in multiple projects.
Feature generation using genetic programming with application to fault classification.
Guo, Hong; Jack, Lindsay B; Nandi, Asoke K
2005-02-01
One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features, or directly from raw data, in order to reduce the cost of computation during the classification process while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to automatically discover the different bearing conditions using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the time for computation compared with a genetic algorithm (GA), making the solution more practical to realize.
An ensemble method for extracting adverse drug events from social media.
Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi
2016-06-01
Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can avoid the feature sparsity issue, are well suited to the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance ADE extraction effectiveness. Copyright © 2016 Elsevier B.V. All rights reserved.
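The information-gain-based feature selection step can be sketched with scikit-learn, using mutual_info_classif as a stand-in for the paper's information-gain criterion (the two are closely related but not identical); the data here are random placeholders.

```python
# Score each lexical/syntactic/semantic feature against the ADE / non-ADE
# label and keep only the top-k features.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 1000)).astype(float)  # sparse binary features
y = rng.integers(0, 2, size=200)                        # ADE vs non-ADE labels

selector = SelectKBest(mutual_info_classif, k=100).fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)   # (200, 100): the lower-dimensional feature set
```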
Palmprint verification using Lagrangian decomposition and invariant interest points
NASA Astrophysics Data System (ADS)
Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.
2011-06-01
This paper presents a palmprint based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction from palmprint images, whereas the region of interest (ROI), extracted from the wide palm texture at the preprocessing stage, is considered for invariant point extraction. Finally, identity is established by finding a permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features. The permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.
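The SIFT stage is standard and can be sketched with OpenCV (cv2.SIFT_create is available in OpenCV 4.4 and later); the image path is hypothetical, and the graph construction and permutation-matrix matching described above are beyond this snippet.

```python
# SIFT keypoint and descriptor extraction from a palmprint ROI.
import cv2

roi = cv2.imread("palm_roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
assert roi is not None, "ROI image not found"

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(roi, None)
# Each keypoint (location, scale, orientation) becomes a graph node; the
# 128-D descriptors are what the reference/probe graph matching compares.
print(len(keypoints), descriptors.shape)
```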
Workforce Competitiveness Collection. "LINCS" Resource Collection News
ERIC Educational Resources Information Center
Literacy Information and Communication System, 2011
2011-01-01
This edition of "'LINCS' Resource Collection News" features the Workforce Competitiveness Collection, covering the topics of workforce education, English language acquisition, and technology. Each month Collections News features one of the three "LINCS" (Literacy Information and Communication System) Resource Collections--Basic…
New nonlinear features for inspection, robotics, and face recognition
NASA Astrophysics Data System (ADS)
Casasent, David P.; Talukder, Ashit
1999-10-01
Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.
ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm
NASA Astrophysics Data System (ADS)
Kora, Padmavathi; Sri Rama Krishna, K.
2016-12-01
Atrial fibrillation (AF) is a type of heart abnormality; during AF, electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to the abnormalities in the heart. The detection approach in this paper consists of three major steps: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. A Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain computer interface (BCI) systems. However, the mVEP waveform is seriously masked by the strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve the mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve the BCI performance, with an accuracy improvement of approximately 3.5% over all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting the mVEP feature to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
Development of a knowledge acquisition tool for an expert system flight status monitor
NASA Technical Reports Server (NTRS)
Disbrow, J. D.; Duke, E. L.; Regenie, V. A.
1986-01-01
Two of the main issues in artificial intelligence today are knowledge acquisition and knowledge representation. The Dryden Flight Research Facility of NASA's Ames Research Center is presently involved in the design and implementation of an expert system flight status monitor that will provide expertise and knowledge to aid the flight systems engineer in monitoring today's advanced high-performance aircraft. The flight status monitor can be divided into two sections: the expert system itself and the knowledge acquisition tool. This paper discusses the knowledge acquisition tool, the means it uses to extract knowledge from the domain expert, and how that knowledge is represented for computer use. An actual aircraft system has been codified by this tool with great success. Future real-time use of the expert system has been facilitated by using the knowledge acquisition tool to easily generate a logically consistent and complete knowledge base.
Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.
Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel
2017-08-18
Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
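Of the two aggregation methods compared, Borda is the simpler to illustrate: each selector contributes a best-first ranking, each feature earns points equal to (number of features - rank position), and the summed points define the aggregate order. A small sketch with toy rankings (the exact scoring convention may differ from the paper's):

```python
# Borda-count aggregation of several feature rankings.
import numpy as np

def borda_aggregate(rankings):
    """rankings: list of 1-D arrays, each a permutation of feature indices
    ordered best-first. Returns feature indices ordered by aggregate rank."""
    n = len(rankings[0])
    scores = np.zeros(n)
    for ranking in rankings:
        for position, feature in enumerate(ranking):
            scores[feature] += n - position   # best position earns n points
    return np.argsort(-scores)

mrmr    = np.array([2, 0, 1, 3])   # toy rankings over 4 features
fisher  = np.array([0, 2, 1, 3])
relieff = np.array([2, 1, 0, 3])
print(borda_aggregate([mrmr, fisher, relieff]))   # [2 0 1 3]
```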
Extracting Phonological Patterns for L2 Word Learning: The Effect of Poor Phonological Awareness
ERIC Educational Resources Information Center
Hu, Chieh-Fang
2014-01-01
An implicit word learning paradigm was designed to test the hypothesis that children who came to the task of L2 vocabulary acquisition with poorer L1 phonological awareness (PA) are less capable of extracting phonological patterns from L2 and thus have difficulties capitalizing on this knowledge to support L2 vocabulary learning. A group of…
Nadeau, Kyle P; Rice, Tyler B; Durkin, Anthony J; Tromberg, Bruce J
2015-11-01
We present a method for spatial frequency domain data acquisition utilizing a multifrequency synthesis and extraction (MSE) method and binary square wave projection patterns. By illuminating a sample with square wave patterns, multiple spatial frequency components are simultaneously attenuated and can be extracted to determine optical property and depth information. Additionally, binary patterns are projected faster than sinusoids typically used in spatial frequency domain imaging (SFDI), allowing for short (millisecond or less) camera exposure times, and data acquisition speeds an order of magnitude or more greater than conventional SFDI. In cases where sensitivity to superficial layers or scattering is important, the fundamental component from higher frequency square wave patterns can be used. When probing deeper layers, the fundamental and harmonic components from lower frequency square wave patterns can be used. We compared optical property and depth penetration results extracted using square waves to those obtained using sinusoidal patterns on an in vivo human forearm and absorbing tube phantom, respectively. Absorption and reduced scattering coefficient values agree with conventional SFDI to within 1% using both high frequency (fundamental) and low frequency (fundamental and harmonic) spatial frequencies. Depth penetration reflectance values also agree to within 1% of conventional SFDI.
NASA Astrophysics Data System (ADS)
Poinsot, Audrey; Yang, Fan; Brost, Vincent
2011-02-01
Including multiple sources of information in personal identity recognition and verification gives the opportunity to greatly improve performance. We propose a contactless biometric system that combines two modalities: palmprint and face. Hardware implementations are proposed on Texas Instruments Digital Signal Processor and Xilinx Field-Programmable Gate Array (FPGA) platforms. The algorithmic chain consists of preprocessing (which includes palm extraction from hand images), Gabor feature extraction, comparison by Hamming distance, and score fusion. Fusion possibilities are discussed and tested first using a bimodal database of 130 subjects that we designed (the uB database), and then two common public biometric databases (AR for face and PolyU for palmprint). High performance has been obtained for recognition and verification purposes: a recognition rate of 97.49% on the AR-PolyU database and an equal error rate of 1.10% on the uB database have been obtained using only two training samples per subject. Hardware results demonstrate that preprocessing can easily be performed during the acquisition phase, and multimodal biometric recognition can be performed almost instantly (0.4 ms on FPGA). We show the feasibility of a robust and efficient multimodal hardware biometric system that offers several advantages, such as user-friendliness and flexibility.
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images
Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah
2013-01-01
In current iris recognition systems, the noise removal step is used only to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of shape adaptive wavelet transform and shape adaptive Gabor-wavelet feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801
Acquisition of speech rhythm in first language.
Polyanskaya, Leona; Ordin, Mikhail
2015-09-01
Analysis of English rhythm in speech produced by children and adults revealed that speech rhythm becomes increasingly more stress-timed as language acquisition progresses. Children reach the adult-like target by 11 to 12 years. The employed speech elicitation paradigm ensured that the sentences produced by adults and children at different ages were comparable in terms of lexical content, segmental composition, and phonotactic complexity. Detected differences between child and adult rhythm and between rhythm in child speech at various ages cannot be attributed to acquisition of phonotactic language features or vocabulary, and indicate the development of language-specific phonetic timing in the course of acquisition.
DOT National Transportation Integrated Search
2011-06-01
This report describes an accuracy assessment of extracted features derived from three subsets of Quickbird pan-sharpened high-resolution satellite imagery for the area of the Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...
Hattab, Georges; Schlüter, Jan-Philip; Becker, Anke; Nattkemper, Tim W.
2017-01-01
In order to understand gene function in bacterial life cycles, time-lapse bioimaging is applied in combination with different marker protocols in so-called microfluidics chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during acquisition. Image registration corrects such shifts, enabling the next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) a data set-specific sample environment that makes the application of landmark-based alignment almost impossible. We present a computational image registration solution, which we refer to as ViCAR ((Vi)sual (C)ues based (A)daptive (R)egistration), for such microfluidics experiments, consisting of (1) the detection of particular polygons (outlined and segmented ones, referred to as visual cues), (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points, correcting both rotation and translation. We tested ViCAR with different data sets and found that it provides an effective spatial alignment, thereby paving the way to extracting temporal features pertinent to each resulting bacterial colony. By using ViCAR, we achieved an image registration with 99.9% image closeness, based on an average RMSD of 4×10⁻² pixels, and superior results compared to a state-of-the-art algorithm. PMID:28620411
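Step (3), recovering rotation and translation from the retrieved coordinates, amounts to a rigid least-squares fit between matched point sets, commonly solved with the Kabsch/Procrustes construction. A minimal 2D sketch under that assumption; this is a generic rigid fit, not necessarily the authors' exact estimator:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t with dst ~ R @ src + t.
    src, dst: (N, 2) arrays of matched cue coordinates (N >= 2)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# toy check: rotate three cue points by 5 degrees and shift them
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 7.0]])
dst = src @ R_true.T + np.array([3.0, -2.0])
R, t = rigid_fit(src, dst)  # R ~ R_true, t ~ (3, -2)
```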
Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm.
Khushaba, Rami N; Kodagoda, Sarath; Lal, Sara; Dissanayake, Gamini
2011-01-01
Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulated driving test. Specifically, we develop an efficient fuzzy mutual-information (MI)-based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of several predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships, providing an accurate information-content estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers in a simulation test. The experimental results proved the significance of FMIWPT in extracting features that highly correlate with the different drowsiness levels, achieving a classification accuracy of 95%–97% on average across all subjects.
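The general pipeline (wavelet-packet energy features ranked by mutual information with the drowsiness label) can be sketched as below. The paper's fuzzy-membership MI estimator is not reproduced here; scikit-learn's estimator stands in for it, and the epoch length and wavelet are illustrative choices:

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif

def wp_energies(sig, wavelet="db4", level=4):
    """Log-energy of each terminal wavelet-packet node of one epoch."""
    wp = pywt.WaveletPacket(sig, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.log([np.sum(n.data ** 2) + 1e-12 for n in nodes])

# toy data: 40 one-second epochs at 256 Hz with a binary drowsiness label
rng = np.random.default_rng(0)
X = np.vstack([wp_energies(rng.standard_normal(256)) for _ in range(40)])
y = rng.integers(0, 2, size=40)
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:5]  # indices of the most informative sub-bands
```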
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, including line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land-water separation. Experiments show that the proposed method can successfully extract RCAs of different shapes from HSR images with good performance.
NASA Astrophysics Data System (ADS)
Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang
2014-11-01
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in line with human opinion, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant, which limits the performance of these models. To further improve NR-IQA performance, we propose a general purpose NR-IQA algorithm that combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract point-wise statistics of single pixel values, characterized by a generalized Gaussian distribution model, to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. A mapping is then learned to predict quality scores using support vector regression. Experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
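A standard way to fit the generalized Gaussian distribution (GGD) to such point-wise statistics is moment matching on the ratio σ²/E[|x|]², as popularized by BRISQUE-style NR-IQA. A sketch under that assumption (the grid bounds are illustrative):

```python
import numpy as np
from scipy.special import gamma

def fit_ggd(x):
    """Moment-matching estimate of GGD shape (alpha) and scale (sigma)."""
    x = x.ravel() - x.mean()
    rho = x.std() ** 2 / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    # theoretical ratio r(alpha) = Gamma(1/a) Gamma(3/a) / Gamma(2/a)^2
    r = gamma(1 / alphas) * gamma(3 / alphas) / gamma(2 / alphas) ** 2
    alpha = alphas[np.argmin(np.abs(r - rho))]  # invert r(alpha) ~ rho
    return alpha, x.std()

# sanity check: Gaussian input should yield alpha close to 2
alpha, sigma = fit_ggd(np.random.default_rng(1).standard_normal(10000))
```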
NASA Astrophysics Data System (ADS)
Gao, Wei; Fan, Ming; Zhao, Weijie; Zheng, Bin; Li, Lihua
2017-03-01
This study developed and tested a multi-probe resonance-frequency-based electrical impedance spectroscopy (REIS) system aimed at the detection of breast cancer. The REIS system consists of a specially designed mechanical support that can be adjusted to fit women of different heights, a seven-probe sensor cup, and a computer providing software for system control and management. The sensor cup includes one central probe for direct contact with the nipple and six other probes uniformly distributed at a distance of 35 mm from the central probe to contact the breast skin surface. The system completes a data acquisition process in about 18 seconds. We used this system to examine breast cancer, collecting a dataset of 289 cases comprising 74 biopsy-verified malignant and 215 benign tumors. Twenty-three features were then extracted: seven frequency-based and fifteen magnitude-based REIS features, plus an age feature. To reduce redundancy, we selected 6 features using an evolutionary algorithm. The area under the receiver operating characteristic curve (AUC) was computed to assess classifier performance, and a multivariable logistic regression method was used to detect the tumors. With all 23 features, the classifier achieved an AUC, accuracy, sensitivity and specificity of 0.796, 0.727, 0.731 and 0.726, respectively; with the 6 selected features, 0.840, 0.800, 0.703 and 0.833, respectively. The frequency-based and magnitude-based REIS features alone yielded AUCs of 0.662 and 0.619, respectively. Performance with the 6 selected features was significantly better than with magnitude features alone (p = 3.29e-08) or frequency features alone (p = 5.61e-07). The SMOTE algorithm was used to oversample the minority class and balance the dataset; after balancing, the AUC increased to 0.846 over the original-data performance. These results indicate that the REIS system is a promising tool for the detection of breast cancer and may be acceptable for clinical implementation.
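The multivariable logistic regression plus AUC evaluation follows a standard pattern; a minimal scikit-learn sketch on synthetic stand-in data (the REIS features themselves are not public, so the feature matrix below is a placeholder):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# stand-in for the 6 selected features over 289 cases
X = rng.standard_normal((289, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(289) > 1.0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])  # held-out AUC
```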
He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai
2017-07-01
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpus with minimal manual effort. However, the labeled training corpus may contain much false-positive data, which hurts the performance of relation extraction. Moreover, in traditional feature-based distantly supervised approaches, extraction models rely on human-designed features produced by natural language processing tools, which may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve better data representations for relation extraction without manually designed features, and it utilizes instance-level attention to tackle the problem of false-positive data under distant supervision. Experimental results demonstrate that the proposed approach is effective and achieves better performance than traditional methods.
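Word-level attention over LSTM hidden states is typically a learned scoring function followed by a softmax-weighted sum. A bare NumPy sketch of that computation; the weights W and v here are random placeholders standing in for trained parameters, not the authors' model:

```python
import numpy as np

def word_attention(H, W, v):
    """H: (T, d) LSTM hidden states; W: (d, d); v: (d,) scoring vector.
    Returns the attention-weighted sentence representation (d,)."""
    scores = np.tanh(H @ W) @ v        # one scalar score per word
    alphas = np.exp(scores - scores.max())
    alphas /= alphas.sum()             # softmax over the T words
    return alphas @ H                  # weighted sum of hidden states

rng = np.random.default_rng(0)
T, d = 12, 64                          # 12 words, 64-dim hidden states
H = rng.standard_normal((T, d))
s = word_attention(H, 0.1 * rng.standard_normal((d, d)),
                   rng.standard_normal(d))
```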
Nonredundant sparse feature extraction using autoencoders with receptive fields clustering.
Ayinde, Babajide O; Zurada, Jacek M
2017-09-01
This paper proposes new techniques for data representation in the context of deep learning using agglomerative clustering. Existing autoencoder-based data representation techniques tend to produce duplicative encoding and decoding receptive fields in layered autoencoders, leading to the extraction of similar features and thus to filtering redundancy. We propose a way to address this problem and show that such redundancy can be eliminated. This yields smaller networks and produces unique receptive fields that extract distinct features. It is also shown that autoencoders with nonnegativity constraints on weights extract fewer redundant features than conventional sparse autoencoders. The concept is illustrated using a conventional sparse autoencoder and nonnegativity-constrained autoencoders on MNIST digit recognition, the NORB normalized-uniform object dataset, and the Yale face dataset.
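The duplication being targeted can be exposed by agglomeratively clustering the encoder weight vectors (receptive fields) by cosine similarity and keeping one representative per cluster. A sketch of that pruning idea with scikit-learn; it illustrates the concept rather than the paper's exact procedure:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def prune_receptive_fields(W, n_keep):
    """W: (n_hidden, n_input) encoder weights; keep one field per cluster.
    Note: `metric` was called `affinity` in scikit-learn < 1.2."""
    model = AgglomerativeClustering(n_clusters=n_keep, metric="cosine",
                                    linkage="average")
    labels = model.fit_predict(W)
    keep = [np.where(labels == c)[0][0] for c in range(n_keep)]
    return W[keep], labels

rng = np.random.default_rng(0)
base = rng.standard_normal((16, 784))        # 16 distinct fields
dup = base + 0.01 * rng.standard_normal((16, 784))
W = np.vstack([base, dup])                   # deliberately duplicated
W_pruned, labels = prune_receptive_fields(W, n_keep=16)
```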
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve problems of traditional image stitching algorithms such as a time-consuming feature point extraction process, an overload of redundant invalid information, and general inefficiency. First, pixel neighborhoods are used to extract contour information, and the Hu invariant moments serve as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel to improve initial matching efficiency and reduce mismatches, after which the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
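Replacing the Euclidean distance with the Hellinger kernel on SIFT descriptors is commonly implemented by L1-normalizing and square-rooting each descriptor, after which ordinary Euclidean matching is equivalent to Hellinger comparison (the "RootSIFT" trick). A sketch under that assumption, with Lowe's ratio test for the initial matching:

```python
import numpy as np

def hellinger_map(desc, eps=1e-12):
    """Map non-negative SIFT descriptors (N, 128) so that Euclidean
    distance on the result corresponds to the Hellinger kernel."""
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)  # L1
    return np.sqrt(desc)

def match(d1, d2, ratio=0.8):
    """Ratio-test matching on Hellinger-mapped descriptors."""
    a, b = hellinger_map(d1), hellinger_map(d2)
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    nn = np.argsort(dist, axis=1)[:, :2]          # two nearest neighbors
    rows = np.arange(len(a))
    good = dist[rows, nn[:, 0]] < ratio * dist[rows, nn[:, 1]]
    return np.column_stack([rows[good], nn[good, 0]])

rng = np.random.default_rng(0)
pairs = match(rng.random((50, 128)), rng.random((60, 128)))
```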
Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum
NASA Astrophysics Data System (ADS)
Guan, Shan; Song, Weijie; Pang, Hongyang
2017-09-01
In the metal cutting process, the signal contains a wealth of tool wear state information. An analysis and feature extraction method for tool wear signals based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition (EMD) algorithm, and the intrinsic mode functions carrying the main information were selected using the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was applied to the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes were extracted from the Hilbert marginal spectrum to form the recognition feature vector of the tool wear state. The results show that the extracted features effectively characterize the different wear states of the tool, providing a basis for monitoring tool wear condition.
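After EMD, the Hilbert marginal spectrum accumulates instantaneous amplitude over instantaneous frequency for each retained IMF. A sketch with SciPy, assuming the IMFs have already been obtained (e.g., with the PyEMD package); bin count and test signal are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def marginal_spectrum(imfs, fs, n_bins=128):
    """Hilbert marginal spectrum h(f): amplitude accumulated over time.
    imfs: (n_imf, n_samples) retained intrinsic mode functions."""
    edges = np.linspace(0, fs / 2, n_bins + 1)
    h = np.zeros(n_bins)
    for imf in imfs:
        z = hilbert(imf)                       # analytic signal
        amp = np.abs(z)[:-1]                   # instantaneous amplitude
        inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
        idx = np.clip(np.digitize(inst_f, edges) - 1, 0, n_bins - 1)
        np.add.at(h, idx, amp)                 # accumulate per freq bin
    return 0.5 * (edges[:-1] + edges[1:]), h

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
imfs = np.vstack([np.sin(2 * np.pi * 50 * t),
                  0.5 * np.sin(2 * np.pi * 120 * t)])
freqs, h = marginal_spectrum(imfs, fs)         # peaks near 50 and 120 Hz
```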
Combined rule extraction and feature elimination in supervised classification.
Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E
2012-09-01
A vast number of biology-related research problems involve combining multiple sources of data to achieve a better understanding of the underlying problems, and it is important to select and interpret the most important information from these sources. It is therefore beneficial to have an algorithm that simultaneously extracts rules and selects features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of the rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF produces performance comparable with state-of-the-art prediction algorithms using a small number of decision rules, some of which are biologically significant.
Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko
2017-12-28
Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain-computer interfaces (BCIs). However, motor imagery EEG feature extraction using CSP depends to a great extent on the selection of frequency bands. In this study, we propose a mutual-information-based frequency band selection approach. The idea is to utilize information from all available channels to effectively select the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band is introduced that covers the wide frequency band (7-30 Hz), from which two types of features are extracted using the CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands, and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each selected filter bank, the scores are fused together, and classification is done using a support vector machine. The proposed method is evaluated on BCI Competition III dataset IVa, BCI Competition IV dataset I, and BCI Competition IV dataset IIb, and it outperforms all competing methods, achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. By introducing a wide sub-band and using mutual information to select the most discriminative sub-bands, the proposed method improves motor imagery EEG classification.
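CSP itself reduces to a generalized eigenvalue problem on the two class covariance matrices (Σ₁w = λ(Σ₁+Σ₂)w). A compact sketch of that core step, independent of the paper's band-selection wrapper; covariance normalization details vary across implementations:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, m=3):
    """X1, X2: (trials, channels, samples) band-passed EEG per class.
    Returns 2*m spatial filters (rows) extremizing the variance ratio."""
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    vals, vecs = eigh(C1, C1 + C2)       # generalized eigenproblem
    order = np.argsort(vals)             # ascending eigenvalues
    pick = np.concatenate([order[:m], order[-m:]])  # both extremes
    return vecs[:, pick].T

rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 8, 256))
X2 = 1.5 * rng.standard_normal((20, 8, 256))
W = csp_filters(X1, X2)
# downstream features are typically log-variances of W @ trial
```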
Study on identifying deciduous forest by the method of feature space transformation
NASA Astrophysics Data System (ADS)
Zhang, Xuexia; Wu, Pengfei
2009-10-01
Thematic information extraction from remotely sensed imagery remains one of the difficult problems facing remote sensing science, and many researchers have devoted themselves to this domain. Methods for thematic information extraction fall into two kinds, visual interpretation and computer interpretation, and the field is developing toward intelligent, modular approaches. This paper develops an intelligent feature space transformation method for extracting deciduous forest thematic information in the Changping district of Beijing. China-Brazil Earth Resources Satellite images acquired in 2005 are used to extract the deciduous forest coverage area with the feature space transformation method and a linear spectral unmixing method; the remote sensing result agrees closely with the 2004 woodland resource census data of the Chinese forestry bureau.
NASA Astrophysics Data System (ADS)
Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben
2015-08-01
Multisensory data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding. This paper focuses on the automatic detection of typical welding defects for Al alloy in gas tungsten arc welding (GTAW) by analyzing the arc spectrum, sound, and voltage signals. Based on algorithms developed in the time and frequency domains, 41 feature parameters were extracted from these signals to characterize the welding process and seam quality. The proposed feature selection approach, a hybrid Fisher-based filter and wrapper, was then used to evaluate the sensitivity of each feature and reduce the feature dimensionality. Finally, an optimal subset of 19 features was selected, yielding the highest accuracy, 94.72%, with the established classification model. This study provides a guideline for feature extraction, selection, and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect detection system in arc welding.
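The Fisher-based filter stage ranks each feature by between-class separation over within-class spread. A minimal two-class Fisher score sketch (the synthetic data below just stands in for the 41 welding features):

```python
import numpy as np

def fisher_scores(X, y):
    """X: (n_samples, n_features); y: binary labels.
    score_j = (mu1_j - mu0_j)^2 / (var0_j + var1_j)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X0.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 41))            # stand-in feature matrix
y = rng.integers(0, 2, 200)
X[y == 1, 0] += 2.0                           # make feature 0 discriminative
rank = np.argsort(fisher_scores(X, y))[::-1]  # feature 0 ranks first
```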
Analytical performance of the various acquisition modes in Orbitrap MS and MS/MS.
Kaufmann, Anton
2018-04-30
Quadrupole Orbitrap instruments (Q Orbitrap) permit high-resolution mass spectrometry (HRMS)-based full scan acquisitions and have a number of acquisition modes where the quadrupole isolates a particular mass range prior to a possible fragmentation and HRMS-based acquisition. Selecting the proper acquisition mode(s) is essential if trace analytes are to be quantified in complex matrix extracts. Depending on the particular requirements, such as sensitivity, selectivity of detection, linear dynamic range, and speed of analysis, different acquisition modes may have to be chosen. This is particularly important in the field of multi-residue analysis (e.g., pesticides or veterinary drugs in food samples) where a large number of analytes within a complex matrix have to be detected and reliably quantified. Meeting the specific detection and quantification performance criteria for every targeted compound may be challenging. It is the aim of this paper to describe the strengths and the limitations of the currently available Q Orbitrap acquisition modes. In addition, the incorporation of targeted acquisitions between full scan experiments is discussed. This approach is intended to integrate compounds that require an additional degree of sensitivity or selectivity into multi-residue methods.
A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors
Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José
2009-01-01
In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The aim of this approach is to extract features of the environment in a short time while the vehicle is moving. In particular, the approach presented here focuses on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built, and a Neuro-Fuzzy strategy has been used to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, experimental tests were conducted using a real vehicle with a set of sonar systems; the obtained results reveal the satisfactory generalization properties of the approach. PMID:22303160
NASA Astrophysics Data System (ADS)
Cong, Chao; Liu, Dingsheng; Zhao, Lingjun
2008-12-01
This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key of this method is to automatically extract tie point pairs from such heterogeneous images according to geographic features. Since there are large differences between these heterogeneous images with respect to texture and corner features, a more detailed analysis is performed to find similarities and differences between the high resolution remote sensing image and the DRG. Furthermore, a new algorithm based on fuzzy c-means (FCM) clustering is proposed to extract linear features from the remote sensing image; crossings and corners extracted from these features are chosen as GCPs. A similar method is used to find the same features in the DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the two GCP groups. Experiments show that the method can extract GCPs from such images with a reasonable RMS error.
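The final consistency check can be expressed with the symmetric Hausdorff distance between the two candidate GCP point sets; SciPy provides the directed variant. A minimal sketch with toy coordinates (the acceptance threshold would be application-specific):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(A, B):
    """Symmetric Hausdorff distance between 2D point sets A and B."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

# corners/crossings from the image vs. the DRG (toy coordinates)
img_pts = np.array([[10.0, 12.0], [40.0, 80.0], [95.0, 33.0]])
drg_pts = img_pts + np.array([0.8, -0.5])   # small systematic offset
d = hausdorff(img_pts, drg_pts)             # accept the match if d is small
```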