NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Deaths from stomach cancer remain common, and early diagnosis is crucial for reducing the mortality rate of cancer patients. Computer-aided methods for early detection are therefore developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department, and the Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images were calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps were then used to reduce these high-dimensional features to lower dimensions. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these reduced feature sets. New medical systems were developed to measure the effect of dimensionality by obtaining features at different dimensions with each reduction method. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN combinations.
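A minimal sketch of this style of pipeline, assuming synthetic grayscale patches in place of the pathology images and made-up patch and network sizes; it chains LBP histograms, an MDS embedding, and an ANN classifier, and is not the authors' implementation:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.manifold import MDS
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = (rng.random((60, 64, 64)) * 255).astype(np.uint8)  # stand-ins for image patches
labels = rng.integers(0, 2, size=60)                        # synthetic class labels

def lbp_histogram(img, p=8, r=1.0):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

X = np.array([lbp_histogram(im) for im in images])
X_low = MDS(n_components=2, random_state=0).fit_transform(X)  # MDS-style embedding
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_low, labels)
print("training accuracy:", clf.score(X_low, labels))
```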
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.
Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi
2017-09-22
DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction method is based on machine learning, and its accuracy depends mainly on the feature extraction method. Therefore, using an efficient feature representation method is important for enhancing classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely, K-Skip-N-Grams, Information theory, and Sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. A support vector machine is used as the classifier. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from the combination of all three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors combining SSF and K-Skip-N-Grams show the best performance on the test set. Overall, mixed features exhibit superiority over single features.
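For illustration, a hedged sketch of the mixed-feature evaluation described above, with synthetic stand-ins (and invented dimensions) for the K-Skip-N-Grams, information-theory, and SSF blocks:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
kskip = rng.random((n, 400))   # K-Skip-N-Grams descriptors (synthetic)
info = rng.random((n, 20))     # information-theoretic descriptors (synthetic)
ssf = rng.random((n, 50))      # sequential/structural features (synthetic)
y = rng.integers(0, 2, n)      # 1 = DNA-binding, 0 = non-binding (synthetic)

X_mixed = np.hstack([kskip, info, ssf])  # simple concatenation of the three blocks
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X_mixed, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```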
Kang, Jinbum; Lee, Jae Young; Yoo, Yangmo
2016-06-01
Effective speckle reduction in ultrasound B-mode imaging is important for enhancing image quality and improving accuracy in image analysis and interpretation. In this paper, a new feature-enhanced speckle reduction (FESR) method based on multiscale analysis and feature enhancement filtering is proposed for ultrasound B-mode imaging. In FESR, clinical features (e.g., boundaries and borders of lesions) are selectively emphasized by edge, coherence, and contrast enhancement filtering from fine to coarse scales, while speckle is simultaneously suppressed via robust diffusion filtering. In the simulation study, the proposed FESR method showed statistically significant improvements in edge preservation, mean structure similarity, speckle signal-to-noise ratio, and contrast-to-noise ratio (CNR) compared with other speckle reduction methods, e.g., oriented speckle reducing anisotropic diffusion (OSRAD), nonlinear multiscale wavelet diffusion (NMWD), the Laplacian pyramid-based nonlinear diffusion and shock filter (LPNDSF), and the optimized Bayesian nonlocal means filter (OBNLM). Similarly, the FESR method outperformed the OSRAD, NMWD, LPNDSF, and OBNLM methods in terms of CNR in the phantom study, i.e., 10.70 ± 0.06 versus 9.00 ± 0.06, 9.78 ± 0.06, 8.67 ± 0.04, and 9.22 ± 0.06, respectively. B-mode images reconstructed using the five speckle reduction methods were reviewed by three radiologists for evaluation based on each radiologist's diagnostic preferences. All three radiologists showed a significant preference for the abdominal liver images obtained using the FESR method in terms of conspicuity, margin sharpness, artificiality, and contrast (p<0.0001). For the kidney and thyroid images, the FESR method showed similar improvement over the other methods, although it did not show statistically significant improvement over the OBNLM method in margin sharpness. These results demonstrate that the proposed FESR method can improve the image quality of ultrasound B-mode imaging by enhancing the visualization of lesion features while effectively suppressing speckle noise.
Drug-target interaction prediction using ensemble learning and dimensionality reduction.
Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong
2017-10-01
Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction.
Crawford, D C; Bell, D S; Bamber, J C
1993-01-01
A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.
NASA Astrophysics Data System (ADS)
Song, Bowen; Zhang, Guopeng; Wang, Huafeng; Zhu, Wei; Liang, Zhengrong
2013-02-01
Various types of features, e.g., geometric, texture, and projection features, have been introduced for polyp detection and differentiation via computer-aided detection and diagnosis (CAD) for computed tomography colonography (CTC). Although these features together cover more of the information in the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computation of CAD. In this paper, we propose a new dimension reduction method that combines hierarchical clustering and principal component analysis (PCA) for the false positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group, we achieved an area under the receiver operating characteristic curve of 0.905, as high as that of the original feature set, while the computation time was reduced by 70% and the feature set size by 77%. It can be concluded that the proposed method captures the most important information in the feature set and that classification accuracy is not affected by the dimension reduction. The results are promising, and further investigations, such as automatic threshold setting, are worthwhile and in progress.
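The grouping-then-PCA idea can be sketched as follows; this is an assumption-laden toy version (synthetic features, arbitrary cluster count), not the authors' CAD code:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.random((300, 40))  # 300 candidate detections x 40 features (synthetic)

# Group features by similarity: cluster on absolute correlation between columns.
corr = np.abs(np.corrcoef(X, rowvar=False))
dist = 1.0 - corr
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
groups = fcluster(Z, t=5, criterion="maxclust")  # five feature groups (arbitrary)

# Keep a few principal components per group, then concatenate.
parts = []
for g in np.unique(groups):
    cols = X[:, groups == g]
    k = min(3, cols.shape[1])  # three components per group, as in the abstract
    parts.append(PCA(n_components=k).fit_transform(cols))
X_reduced = np.hstack(parts)
print(X.shape, "->", X_reduced.shape)
```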
Method for indexing and retrieving manufacturing-specific digital imagery based on image content
Ferrell, Regina K.; Karnowski, Thomas P.; Tobin, Jr., Kenneth W.
2004-06-15
A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector can be extracted from a manufacturing-specific digital image stored in an image database. In particular, each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance, a digital image modality and overall characteristic, a substrate/background characteristic, and an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector can be indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree can be retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of the query image. More particularly, the retrieving step can include two data reductions, the first performed based upon a query vector extracted from a query image. Subsequently, a user can select relevant images resulting from the first data reduction. From the selection, a prototype vector can be calculated, from which a second-level data reduction can be performed. The second-level data reduction can result in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-01-01
Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is limited for more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such a feature ranking method. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has the dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate the empirical prediction performance loss of using the same classifier in SVC feature ranking and final classification relative to the optimal choices. PMID:25177107
Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo
2017-01-01
This chapter introduces a new method for knowledge extraction from databases, aimed at finding a discriminative set of features that is also robust for within-class classification. Our method is generic, and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). The method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search identifies the remaining features to be considered or disregarded. Two algorithms were tested on a public-domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
Online dimensionality reduction using competitive learning and Radial Basis Function network.
Tomenko, Vladimir
2011-06-01
A general-purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing of nonlinearly embedded manifolds, and handling of large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR comprises two modules. The first module learns manifolds by utilizing modified topology-representing networks and geodesic distance in data space, and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of a specific loss function and synthesis of the training algorithm for the Radial Basis Function network result in global preservation of data structures and online processing of new patterns. RBF-NDR was applied to feature extraction and visualization and compared with Principal Component Analysis (PCA), a neural network for Sammon's projection (SAMANN), and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing a wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics.
Solution of the symmetric eigenproblem AX = λBX by delayed division
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Bains, N. J. C.
1986-01-01
Delayed division is an iterative method for solving the linear eigenvalue problem AX = λBX for a limited number of small eigenvalues and their corresponding eigenvectors. The distinctive feature of the method is the reduction of the problem to an approximate triangular form by systematically dropping quadratic terms in the eigenvalue λ. The report describes the pivoting strategy in the reduction and the method for preserving symmetry in submatrices at each reduction step. Along with the approximate triangular reduction, the report extends some techniques used in the method of inverse subspace iteration. Examples are included for problems of varying complexity.
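For comparison, a standard shift-invert route to a few small eigenpairs of AX = λBX is sketched below; note this is ordinary sparse shift-invert iteration via SciPy, not the delayed-division algorithm of the report, and the matrices are illustrative:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 500
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # symmetric stiffness-like matrix
B = diags([1.0], [0], shape=(n, n), format="csc")                     # SPD mass-like matrix

# sigma=0 targets the eigenvalues nearest zero, i.e., the smallest ones here.
vals, vecs = eigsh(A, k=4, M=B, sigma=0.0)
print("four smallest eigenvalues:", np.sort(vals))
```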
Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images
Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung
2013-01-01
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. The method is novel in three ways. First, we compare the accuracies of detecting head movements based on features of the EEG signals in the frequency and time domains and on motion features of images captured by the frontal viewing camera. Second, the features of the EEG signals in the frequency domain and the motion features captured by the camera are selected as the optimal ones, with dimension reduction and feature selection performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy of detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, smaller than that of other methods. PMID:23669713
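A hedged sketch of the fusion-and-classification stage, with synthetic stand-ins for the EEG frequency features and camera motion features; LDA supplies the dimension reduction and an SVM detects head movement, loosely following the abstract:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 400
eeg_freq = rng.random((n, 32))   # EEG band-power features (synthetic stand-in)
motion = rng.random((n, 8))      # camera motion features (synthetic stand-in)
y = rng.integers(0, 2, n)        # 1 = head movement, 0 = clean segment

X = np.hstack([eeg_freq, motion])            # combined feature vector
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# LDA projects a two-class problem onto a single discriminant axis.
lda = LinearDiscriminantAnalysis(n_components=1).fit(Xtr, ytr)
svm = SVC().fit(lda.transform(Xtr), ytr)
print("error rate:", 1.0 - svm.score(lda.transform(Xte), yte))
```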
2015-01-01
Background: Investigations into novel biomarkers using omics techniques generate large amounts of data. Due to their size and numbers of attributes, these data are suitable for analysis with machine learning methods. A key component of typical machine learning pipelines for omics data is feature selection, which is used to reduce the raw high-dimensional data into a tractable number of features. Feature selection needs to balance the objective of using as few features as possible, while maintaining high predictive power. This balance is crucial when the goal of data analysis is the identification of highly accurate but small panels of biomarkers with potential clinical utility. In this paper we propose a heuristic for the selection of very small feature subsets, via an iterative feature elimination process that is guided by rule-based machine learning, called RGIFE (Rule-guided Iterative Feature Elimination). We use this heuristic to identify putative biomarkers of osteoarthritis (OA), articular cartilage degradation and synovial inflammation, using both proteomic and transcriptomic datasets. Results and discussion: Our RGIFE heuristic improved the classification accuracies achieved on all datasets compared with using no feature selection, and performed well in a comparison with other feature selection methods. Using this method the datasets were reduced to a smaller number of genes or proteins, including those known to be relevant to OA, cartilage degradation and joint inflammation. The results show the RGIFE feature reduction method to be suitable for analysing both proteomic and transcriptomic data. Methods that generate large 'omics' datasets are increasingly being used in the area of rheumatology. Conclusions: Feature reduction methods are advantageous for the analysis of omics data in the field of rheumatology, as the applications of such techniques are likely to result in improvements in diagnosis, treatment and drug discovery. PMID:25923811
Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Wismüller, Axel
2015-01-01
Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subjected to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns. PMID:25710875
Yang, Jie; Yin, Yingying; Zhang, Zuping; Long, Jun; Dong, Jian; Zhang, Yuqun; Xu, Zhi; Li, Lei; Liu, Jie; Yuan, Yonggui
2018-02-05
Major depressive disorder (MDD) is characterized by dysregulation of distributed structural and functional networks. It is now recognized that structural and functional networks are related at multiple temporal scales. The recent emergence of multimodal fusion methods has made it possible to comprehensively and systematically investigate brain networks and thereby provide essential information for influencing disease diagnosis and prognosis. However, such investigations are hampered by the inconsistent dimensionality features between structural and functional networks. Thus, a semi-multimodal fusion hierarchical feature reduction framework is proposed. Feature reduction is a vital procedure in classification that can be used to eliminate irrelevant and redundant information and thereby improve the accuracy of disease diagnosis. Our proposed framework primarily consists of two steps. The first step considers the connection distances in both structural and functional networks between MDD and healthy control (HC) groups. By adding a constraint based on sparsity regularization, the second step fully utilizes the inter-relationship between the two modalities. However, in contrast to conventional multi-modality multi-task methods, the structural networks were considered to play only a subsidiary role in feature reduction and were not included in the following classification. The proposed method achieved a classification accuracy, specificity, sensitivity, and area under the curve of 84.91%, 88.6%, 81.29%, and 0.91, respectively. Moreover, the frontal-limbic system contributed the most to disease diagnosis. Importantly, by taking full advantage of the complementary information from multimodal neuroimaging data, the selected consensus connections may be highly reliable biomarkers of MDD.
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang
2017-12-01
Color is one of the most stable attributes of vehicles and is often used as a valuable cue in important applications. Complex environmental factors, such as illumination, weather, and noise, cause great diversity in the visual characteristics of vehicle color, making vehicle color recognition in complex environments a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels, and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over state-of-the-art methods.
Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel
2014-01-01
Objective: While dimension reduction has been previously explored in computer-aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure the strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials: We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data are subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small, diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subjected to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results: Of the feature vectors investigated, the best performance was observed with the Minkowski functional ’perimeter’, while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10), while comparable performance was achieved with the exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). Conclusions: The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
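The strict train/test separation the authors emphasize can be illustrated as below: the reduction is fit on the training fold only, and transform() serves as the out-of-sample extension for test points. The data and the choice of kernel PCA are illustrative assumptions, not the study's setup:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.random((120, 50))       # topological feature vectors (synthetic stand-in)
y = rng.integers(0, 2, 120)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit on the training fold only; transform() is the out-of-sample extension.
reducer = KernelPCA(n_components=5, kernel="rbf").fit(Xtr)
Ztr, Zte = reducer.transform(Xtr), reducer.transform(Xte)
print(Ztr.shape, Zte.shape)  # test points never influenced the learned mapping
```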
An ℓ2,1 norm regularized multi-kernel learning for false positive reduction in lung nodule CAD.
Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar
2017-03-01
The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule computer-aided detection (CAD). We describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1 regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights, accelerated in the style of FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1 norm multi-kernel learning algorithm accurately fuses the complementary and heterogeneous feature sets and automatically prunes irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature.
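The proximal step underlying this kind of ℓ2,1-regularized optimization is group soft-thresholding. Below is a generic NumPy sketch of that operator (notation and test values are ours, not the paper's):

```python
import numpy as np

def prox_l21(V, lam):
    """Row-wise shrinkage: each row v -> max(0, 1 - lam/||v||) * v."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * V

W = np.array([[3.0, 4.0], [0.3, 0.4], [0.0, 0.0]])
print(prox_l21(W, lam=1.0))
# the first row shrinks toward zero; the second row is zeroed out entirely
```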
Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam
2012-01-15
Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude more numerous than the available samples. Computational tractability is a key challenge for feature selection algorithms handling very high-dimensional datasets beyond a hundred thousand features, such as datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher-resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, allowing clinicians to make more informed decisions with regard to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of multiclass predictive classification accuracy over different cancer subtypes, speedup in execution, and scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance over that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns in sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to retain the LBP patterns suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, discrete cosine transform (DCT), or principal component analysis (PCA).
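A minimal sketch of KW-style ranking of LBP histogram bins, using scipy.stats.kruskal on synthetic data with invented bin counts and class labels; the cutoff of 20 bins is arbitrary:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(5)
X = rng.random((100, 59))          # e.g., 59-bin LBP histograms (synthetic)
y = rng.integers(0, 5, 100)        # five subject identities (synthetic)

# Rank each LBP bin by the Kruskal-Wallis H statistic across classes.
stats = []
for j in range(X.shape[1]):
    groups = [X[y == c, j] for c in np.unique(y)]
    h, _ = kruskal(*groups)
    stats.append(h)
top = np.argsort(stats)[::-1][:20]  # keep the 20 most discriminative bins
print("selected bins:", top)
```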
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shaobu; Lu, Shuai; Zhou, Ning
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving a reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal ‘basis’ of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves better reduction ratios and smaller response errors than traditional coherency aggregation methods.
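The feature-extraction step of DEAR can be sketched roughly as an SVD of measured responses; generator count, sample length, and the 95% energy cutoff below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n_gen, n_samples = 30, 600
Y = rng.standard_normal((n_gen, n_samples))   # post-disturbance rotor-angle deviations (synthetic)

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95) + 1  # modes capturing 95% of energy
features = Vt[:k]             # dominant temporal modes shared across generators
loadings = U[:, :k] * s[:k]   # how strongly each generator expresses each mode
print("modes kept:", k, "loadings shape:", loadings.shape)
```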
Bearing diagnostics: A method based on differential geometry
NASA Astrophysics Data System (ADS)
Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng
2016-12-01
The structures around bearings are complex, and the working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics, which make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with advantages in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are then extracted to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information, and a Gaussian mixture model is utilized to calculate confidence values, which directly reflect the health status. Case studies on Lorenz signals and bearing vibration datasets demonstrate the effectiveness of the proposed methods.
Chen, Yifei; Sun, Yuxing; Han, Bing-Qing
2015-01-01
Protein interaction article classification is a text classification task in the biological domain that determines which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of the features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency, and one potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between items of context information that takes word co-occurrences and phrase chunks around the features into account. We then introduce this similarity of context information into the importance measure of the features, in place of document and term frequency, yielding new context-similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context-similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding
Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping
2015-01-01
Fault diagnosis is essentially a kind of pattern recognition. Measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper, a novel machinery fault diagnosis approach based on statistical locally linear embedding (S-LLE), an extension of LLE that exploits fault class label information, is proposed. The approach first extracts high-dimensional feature vectors from vibration signals through time-domain, frequency-domain, and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space with the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA, and LLE. Finally, pattern classification and fault diagnosis are carried out easily and rapidly in the reduced feature space. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach markedly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
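As a rough stand-in for S-LLE (which is not in common libraries), the sketch below runs plain unsupervised LLE from scikit-learn on synthetic vibration features; the supervised, label-exploiting step of S-LLE is omitted:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
X = rng.random((240, 36))       # time/frequency/EMD features per vibration segment (synthetic)
y = rng.integers(0, 4, 240)     # four fault classes (synthetic)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3)
Z = lle.fit_transform(X)        # embed into a 3-D manifold coordinate space
clf = KNeighborsClassifier(n_neighbors=5).fit(Z, y)
print("training accuracy in the embedded space:", clf.score(Z, y))
```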
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng
An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new feature reduction method combining feature selection with feature extraction is proposed. The feature selection step used eight feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, and the dimension of the feature space was reduced to 12. Classification of the Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than that of LDA and BP-ANN. Finally, classification of the Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.
Voting strategy for artifact reduction in digital breast tomosynthesis.
Wu, Tao; Moore, Richard H; Kopans, Daniel B
2006-07-01
Artifacts are observed in digital breast tomosynthesis (DBT) reconstructions due to the small number of projections and the narrow angular range typically employed in tomosynthesis imaging. In this work, we investigate the reconstruction artifacts caused by high-attenuation features in the breast and develop several artifact reduction methods based on a "voting strategy." The voting strategy identifies the projection(s) that would introduce artifacts to a voxel and rejects them when reconstructing that voxel. Four approaches to the voting strategy were compared: projection segmentation, maximum contribution deduction, one-step classification, and iterative classification. The projection segmentation method, based on segmentation of high-attenuation features from the projections, effectively reduces artifacts caused by metal and large calcifications that can be reliably detected and segmented from projections. The other three methods are based on the observation that contributions from artifact-inducing projections are higher in value than those from normal projections; they attempt to identify the projection(s) that would cause artifacts by comparing contributions from different projections. Among the three, iterative classification provides the best artifact reduction; however, it can generate many false positive classifications that degrade image quality. The maximum contribution deduction and one-step classification methods both reduce artifacts from small calcifications well, although artifact reduction is slightly better with one-step classification. The combination of one-step classification and projection segmentation removes artifacts from both large and small calcifications.
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs), which are hard to diagnose with the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistic regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross-validation and back substitution were applied, yielding accuracy rates of 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model, respectively. Overall, using curvelet-based textural features after dimensionality reduction together with clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
NASA Astrophysics Data System (ADS)
Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina
2014-03-01
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. By using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk, as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), ANOVA-based, and original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From these results, it can be inferred that by using a sparse representation of the originally extracted multiparametric, high-dimensional data, we can condense the information into a few coefficients with high predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
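A hedged approximation of this setup, using scikit-learn's DictionaryLearning in place of K-SVD proper (the dictionary-update rules differ) with K=4 atoms and at most L=2 nonzero coefficients, on synthetic features:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(8)
X = rng.random((150, 30))   # kinetic/textural/morphologic features (synthetic stand-in)

# K=4 atoms with at most L=2 nonzero coefficients, mirroring the abstract's setting.
dl = DictionaryLearning(n_components=4, transform_algorithm="omp",
                        transform_n_nonzero_coefs=2, random_state=0)
codes = dl.fit_transform(X)  # sparse codes usable as low-dimensional features
print("sparse codes:", codes.shape, "nonzeros per sample ~", (codes != 0).sum(1).mean())
```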
Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high performance results include, AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497
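A minimal sketch of producing such a low-dimensional mapping for visual inspection, using t-SNE on synthetic 81-D feature vectors; this mirrors only the mapping step, not the CADx evaluation:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(10)
X = rng.random((300, 81))     # 81-D lesion feature vectors (synthetic stand-in)
Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(Z.shape)                # each lesion now has 2-D coordinates for plotting
```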
Torija, Antonio J; Ruiz, Diego P
2015-02-01
The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which makes LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are (i) correlation-based feature-subset selection (CFS) and (ii) a wrapper for feature-subset selection (WFS); the data-reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)).
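A minimal sketch of the wrapper-plus-GPR scheme, under stated assumptions (scikit-learn on simulated urban descriptors; the greedy SequentialFeatureSelector stands in for the paper's WFS wrapper):

```python
# Hedged sketch: wrapper-style feature selection followed by
# Gaussian-process regression of a noise level. The urban acoustic
# variables are simulated, not the study's data.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 25))                 # candidate urban descriptors
y = X[:, :5] @ rng.normal(size=5) + 65 + rng.normal(scale=1.0, size=200)

# A cheap surrogate estimator inside the wrapper keeps the search fast.
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5)
X_sel = sfs.fit_transform(X, y)

gpr = GaussianProcessRegressor().fit(X_sel, y)
print("in-sample MAE [dB(A)]:", mean_absolute_error(y, gpr.predict(X_sel)))
```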
NASA Astrophysics Data System (ADS)
Dmitriev, Yegor V.; Kozoderov, Vladimir V.; Sokolov, Anton A.
2016-04-01
Collecting and updating forest inventory data play an important part in forest management. The data can be obtained directly using accurate but low-efficiency ground-based methods, as well as from remote sensing measurements. We present applications of airborne hyperspectral remote sensing for the retrieval of such important inventory parameters as forest species and age composition. The hyperspectral images of the test region were obtained from an airplane equipped with a Russian-made lightweight airborne video-spectrometer covering the visible and near-infrared spectral range, together with a high-resolution photo camera on the same gyro-stabilized platform. The quality of the thematic processing depends on many factors, such as the atmospheric conditions, the characteristics of the measuring instruments, and the correction and preprocessing methods. The construction of the classifier, together with methods for reducing the feature space, also plays an important role. The performance of different spectral classification methods is analyzed for the problem of hyperspectral remote sensing of soil and vegetation. For the reduction of the feature space we used the previously proposed stable feature selection method. The results of classifying hyperspectral airborne images using the multiclass Support Vector Machine method with a Gaussian kernel and the parametric Bayesian classifier based on the Gaussian mixture model, together with their comparative analysis, are demonstrated.
Flexible conformable hydrophobized surfaces for turbulent flow drag reduction
NASA Astrophysics Data System (ADS)
Brennan, Joseph C.; Geraldi, Nicasio R.; Morris, Robert H.; Fairhurst, David J.; McHale, Glen; Newton, Michael I.
2015-05-01
In recent years, extensive work has focused on using superhydrophobic surfaces for drag reduction applications. Superhydrophobic surfaces retain a gas layer, called a plastron, when submerged underwater in the Cassie-Baxter state, with water in contact with the tops of the surface roughness features. In this state the plastron allows slip to occur across the surface, which results in a drag reduction. In this work we report flexible and relatively large-area superhydrophobic surfaces produced using two different methods: large roughness features were created by electrodeposition on copper meshes, and small roughness features were created by embedding carbon nanoparticles (soot) into polydimethylsiloxane (PDMS). Both samples were made into cylinders with a diameter under 12 mm. To characterize the samples, scanning electron microscope (SEM) images and confocal microscope images were taken. The confocal microscope images were taken with each sample submerged in water to show the extent of the plastron. The hydrophobized electrodeposited copper mesh cylinders showed drag reductions of up to 32% when comparing the superhydrophobic state with a wetted-out state. The soot-covered cylinders achieved a 30% drag reduction when comparing the superhydrophobic state to a plain cylinder. These results were obtained for turbulent flows with Reynolds numbers from 10,000 to 32,500. PMID:25975704
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on the degree of visual similarity between the feature vectors of an incoming image being considered for entry into the database and the feature vectors associated with the most similar of the stored images. Based on this visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
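As an illustration only, a similarity gate of this kind might look as follows (cosine similarity and the 0.95 threshold are hypothetical choices, not the patent's specification):

```python
# Illustrative sketch: score an incoming image's feature vector against
# its nearest stored neighbour and use that visual-similarity value to
# decide whether to store it. Not the patent's exact procedure.
import numpy as np

def should_store(incoming, database, threshold=0.95):
    """Return False when the incoming feature vector is nearly
    redundant with an already-stored one (cosine similarity)."""
    db = np.asarray(database)
    sims = db @ incoming / (np.linalg.norm(db, axis=1) * np.linalg.norm(incoming))
    return sims.max() < threshold

rng = np.random.default_rng(2)
database = rng.normal(size=(100, 64))           # stored feature vectors
print(should_store(rng.normal(size=64), database))
```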
A regularized approach for geodesic-based semisupervised multimanifold learning.
Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun
2014-05-01
Geodesic distance, as an essential measurement of data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of the original data points with feature vectors built from geodesic distances, and a new semisupervised dimension reduction method for the feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
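A sketch of the geodesic-distance computation such methods build on, assuming scikit-learn and SciPy (the semisupervised graph construction and the regularized mapping of the paper are omitted):

```python
# Sketch: approximate manifold geodesics by shortest paths over a
# k-nearest-neighbour graph; each sample is then represented by its
# vector of geodesic distances to all points, as in the paper's
# feature-replacement step.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 10))

knn = kneighbors_graph(X, n_neighbors=8, mode="distance")
G = shortest_path(knn, method="D", directed=False)   # geodesic distance matrix

features = G                     # one geodesic-distance feature vector per row
print(features.shape)
```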
Shi, Lei; Wan, Youchuan; Gao, Xianjun
2018-01-01
In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
An indigenous method for closed reduction of pediatric mandibular parasymphysis fracture.
Kumar, Naresh; Singh, Akhilesh Kumar; Pandey, Arun; Verma, Vishal
2015-01-01
Mandibular fractures in children are very rare compared to adults, owing to the protected anatomic features of the child and less exposure to road traffic accidents. Management is complicated by the inherently dynamic nature and instability of the mixed dentition and by fear of surgery. Conservative management can be done with the help of acrylic cap splints along with circum-mandibular wiring, intermaxillary fixation with eyelet wires or arch wires, or open reduction and internal fixation with bio-resorbable plates. The different methods have various pros and cons, and the choice of anesthesia is sometimes crucial. This case report describes a new method of closed reduction with an 18-gauge needle used as an arch bar, performed under local anaesthesia. PMID:27390498
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
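A hedged sketch of supervised dimension selection followed by 1-bit quantization (a simple class-separability score stands in for the paper's importance-sorting algorithm):

```python
# Sketch: rank dimensions of a high-dimensional FV/VLAD-style vector by
# between-class over within-class variance, keep the top ones, then
# binarize by sign. The exact importance sorting of the paper is not
# reproduced here.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2048))               # stand-in FV/VLAD vectors
y = rng.integers(0, 10, size=500)

means = np.array([X[y == c].mean(axis=0) for c in range(10)])
score = means.var(axis=0) / (X.var(axis=0) + 1e-12)

keep = np.argsort(score)[::-1][:256]            # select 256 dimensions
X_bin = (X[:, keep] > 0).astype(np.uint8)       # 1-bit quantization
print(X_bin.shape, X_bin.dtype)
```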
Selective reduction of carboxylic acids to aldehydes with hydrosilane via photoredox catalysis.
Zhang, Muliang; Li, Nan; Tao, Xingyu; Ruzi, Rehanguli; Yu, Shouyun; Zhu, Chengjian
2017-09-12
The direct reduction of carboxylic acids to aldehydes with hydrosilane was achieved through visible light photoredox catalysis. The combination of both single electron transfer and hydrogen atom transfer steps offers a novel and convenient approach to selective reduction of carboxylic acids to aldehydes. The method also features mild conditions, high yields, broad substrate scope, and good functional group tolerance, such as alkyne, ester, ketone, amide and amine groups.
Liang, Jianwen; Wei, Denghu; Lin, Ning; Zhu, Youngchun; Li, Xiaona; Zhang, Jingjing; Fan, Long; Qian, Yitai
2014-07-04
Honeycomb porous silicon (hp-Si) has been synthesized by a low temperature (200 °C) magnesiothermic reduction of Na2SiO3·9H2O. This process can be regarded as a general synthesis method for other silicide materials. Significantly, hp-Si features excellent electrochemical properties after graphene coating.
NASA Technical Reports Server (NTRS)
Dasarathy, B. V.
1976-01-01
An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
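A minimal sketch of the hill-and-valley idea, assuming NumPy/SciPy (prominence-based peak counting is an illustrative stand-in for the paper's significance measure):

```python
# Sketch: a feature whose 1-D histogram shows well-separated modes
# (hills with a valley between them) is more useful for unsupervised
# clustering than a unimodal one.
import numpy as np
from scipy.signal import find_peaks

def mode_count(values, bins=32):
    hist, _ = np.histogram(values, bins=bins)
    peaks, _ = find_peaks(hist, prominence=hist.max() * 0.1)
    return len(peaks)

rng = np.random.default_rng(5)
bimodal = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
unimodal = rng.normal(0, 1, 1000)
print(mode_count(bimodal), mode_count(unimodal))   # e.g. 2 vs 1
```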
A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.
Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian
2017-01-01
Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.
2013-03-01
The curved planar reformation (CPR) method re-samples vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstations to facilitate radiologists' visual assessment of coronary diseases, but it has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of the pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed nine features to form a new feature space for FP classification. With IRB approval, CTPA scans of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as the reference standard for the UM and PIOPED sets, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted with the CROP method are useful for FP reduction.
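The path-tracing step can be illustrated with SciPy's Dijkstra routine on a toy centerline graph (the adjacency matrix below is illustrative, not a real CTPA vessel tree):

```python
# Hedged sketch: treat vessel-centerline voxels as graph nodes and
# trace the lowest-cost path from the pulmonary-trunk node to a PE
# candidate with Dijkstra's algorithm.
import numpy as np
from scipy.sparse.csgraph import dijkstra

# 5 centerline nodes; entry (i, j) is the local traversal cost.
adj = np.array([[0, 2, 9, 0, 0],
                [0, 0, 0, 3, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 0, 4],
                [0, 0, 0, 0, 0]], dtype=float)

dist, pred = dijkstra(adj, directed=True, indices=0, return_predecessors=True)

# Backtrack the optimal path from the trunk (node 0) to candidate node 4.
path, node = [], 4
while node != -9999:                 # -9999 marks "no predecessor"
    path.append(node)
    node = pred[node]
print(path[::-1], "cost:", dist[4])  # [0, 1, 3, 4] cost: 9.0
```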
Spectral Data Reduction via Wavelet Decomposition
NASA Technical Reports Server (NTRS)
Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)
2002-01-01
The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the new larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition could be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, thereby preserving the peaks and valleys found in typical spectra. Compared with the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as a maximum likelihood method.
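A minimal sketch of wavelet-based spectral reduction, assuming the PyWavelets package (the multilevel DWT approximation stands in for the paper's full decomposition scheme):

```python
# Sketch: a multilevel DWT of each pixel's spectrum keeps the
# low-frequency approximation, shrinking the band count while
# preserving the broad peaks and valleys of the signature.
import numpy as np
import pywt

rng = np.random.default_rng(6)
spectra = rng.normal(size=(1000, 224))          # 224-band hyperspectral pixels

level = 3                                        # 224 bands -> ~30 coefficients
reduced = np.array([pywt.wavedec(s, "db2", level=level)[0] for s in spectra])
print(spectra.shape, "->", reduced.shape)
```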
NASA Astrophysics Data System (ADS)
Tamimi, E.; Ebadi, H.; Kiani, A.
2017-09-01
Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using other features can improve accuracy. However, adding features increases the presence probability of dependent features, which leads to a reduction in accuracy. In addition, some parameters must be determined for Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type. An optimization algorithm is an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as salt-and-pepper results and high computational time on high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence of the image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, the Kappa coefficient of the proposed method was 6% higher than that of RF classification. The processing time of the proposed method was relatively low because the unit of analysis is the image object. These results show the superiority of the proposed method in terms of time and accuracy.
3800 Years of Quantitative Precipitation Reconstruction from the Northwest Yucatan Peninsula
Carrillo-Bastos, Alicia; Islebe, Gerald A.; Torrescano-Valle, Nuria
2013-01-01
Precipitation over the last 3800 years has been reconstructed using modern pollen calibration and precipitation data. A transfer function was then performed via the linear method of partial least squares. By calculating precipitation anomalies, it is estimated that precipitation deficits were greater than surpluses, reaching 21% and <9%, respectively. The period from 50 BC to 800 AD was the driest of the record. The drought related to the abandonment of the Maya Preclassic period featured a 21% reduction in precipitation, while the drought of the Maya collapse (800 to 860 AD) featured a reduction of 18%. The Medieval Climatic Anomaly was a period of positive phases (3.8–7.6%). The Little Ice Age was a period of climatic variability, with reductions in precipitation but without deficits. PMID:24391940
Effects of band selection on endmember extraction for forestry applications
NASA Astrophysics Data System (ADS)
Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis
2014-10-01
In spectral unmixing theory, data reduction techniques play an important role, as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches to dimensionality reduction. Feature extraction techniques reduce the dimensionality of the hyperspectral data by applying transforms to the data. Feature selection techniques retain the physical meaning of the data by selecting a set of bands from the input hyperspectral dataset that mainly contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of the existing state-of-the-art dimensionality reduction methods set criteria on the spectral information derived from the whole wavelength range in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations at specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc., determine the wavelengths where tree species, diseases, etc., can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc., are considered the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones. More precisely, it explores whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. The experiments comprise the application of well-known signal subspace estimation and endmember extraction methods to hyperspectral imagery of a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using selected bands.
How well does multiple OCR error correction generalize?
NASA Astrophysics Data System (ADS)
Lund, William B.; Ringger, Eric K.; Walker, Daniel D.
2013-12-01
As the digitization of historical documents, such as newspapers, becomes more common, the archive patron's need for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-10-21
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. The high-dimensional data are then visualized in a 2-dimensional space. Tests on UCI datasets show that FSS-t-SNE can effectively improve classification accuracy. An experiment was performed with a large-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347
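A hedged sketch of the FSS-t-SNE idea, assuming scikit-learn (a simple Fisher-ratio score stands in for the paper's feature subset score criterion):

```python
# Sketch: score each sensor feature, keep a high-scoring subset, and
# embed only that subset with t-SNE for 2-D visualization. The paper's
# exact subset-score criterion is not reproduced.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(11)
X = rng.normal(size=(400, 40))                  # multi-sensor features
y = rng.integers(0, 4, size=400)                # malfunction classes
X[:, :6] += y[:, None]                          # make 6 features informative

means = np.array([X[y == c].mean(axis=0) for c in range(4)])
fisher = means.var(axis=0) / (X.var(axis=0) + 1e-12)
subset = np.argsort(fisher)[::-1][:10]          # drop irrelevant features

emb = TSNE(n_components=2, random_state=0).fit_transform(X[:, subset])
print(emb.shape)                                # (400, 2) map for plotting
```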
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments, and cyber security. However, the classification accuracy of most existing methods is insufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine, and accurate classification. Feature reduction is an important step in the classification process. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved. PMID:27893761
Fang, Chunying; Li, Haifeng; Ma, Lin; Zhang, Mancai
2017-01-01
Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, while automatic evaluation of speech intelligibility is difficult because the speech is usually nonstationary and mutational. In this paper, we develop novel feature extraction and reduction methods and describe a multigranularity combined feature scheme that is optimized by a hierarchical visual method. A novel method of generating the feature set based on the S-transform and chaotic analysis is proposed. The set comprises basic acoustic features (BAFS, 430), local spectral characteristics (MSCC, 84, Mel S-transform cepstrum coefficients), and chaotic features (12). Finally, radar charts and the F-score are used to optimize the features by hierarchical visual fusion. The feature set could be optimized from 526 to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features, classified with a support vector machine (SVM), achieve the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method thus proves to be effective and reliable for pathological speech intelligibility evaluation.
Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas
2017-12-01
In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation can be regulated to reduce the side effects of continuous stimulation and the PD exacerbation caused by untimely stimulation. The progressive nature of PD also necessitates dynamic detection schemes that can track its nonlinearities. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection, taking into account the demands for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM), is proposed and provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination of discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method that combines high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histograms from the DDBTC bitmap and its maximum and minimum quantizers. In contrast, the high-level features from the CNN effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to evaluate various datasets. As documented in the experimental results, the proposed schemes achieve superior performance compared with state-of-the-art methods with either low- or high-level features in terms of retrieval rate. Thus, the method can be a strong candidate for various image retrieval related applications.
Genetic algorithm for the optimization of features and neural networks in ECG signals classification
NASA Astrophysics Data System (ADS)
Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu
2017-01-01
Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
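A minimal sketch of the WPD feature-extraction stage, assuming PyWavelets (the statistics and the GA-driven reduction that follow are simplified):

```python
# Sketch: decompose an ECG beat with a wavelet packet transform and
# take simple statistics of each terminal node's coefficients as the
# feature set, before any GA-driven dimension reduction.
import numpy as np
import pywt

rng = np.random.default_rng(7)
beat = rng.normal(size=256)                     # stand-in ECG segment

wp = pywt.WaveletPacket(beat, wavelet="db4", maxlevel=3)
nodes = wp.get_level(3, order="natural")
features = np.array([[n.data.mean(), n.data.std(), np.abs(n.data).max()]
                     for n in nodes]).ravel()
print(features.shape)                           # 8 nodes x 3 stats = 24
```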
Feature Selection for Classification of Polar Regions Using a Fuzzy Expert System
NASA Technical Reports Server (NTRS)
Penaloza, Mauel A.; Welch, Ronald M.
1996-01-01
Labeling, feature selection, and the choice of classifier are critical elements for the classification of scenes and for image understanding. This study examines several methods for feature selection in polar regions, including the use of a fuzzy logic-based expert system for further refinement of a set of selected features. Six Advanced Very High Resolution Radiometer (AVHRR) Local Area Coverage (LAC) arctic scenes are classified into nine classes: water, snow/ice, ice cloud, land, thin stratus, stratus over water, cumulus over water, textured snow over water, and snow-covered mountains. Sixty-seven spectral and textural features are computed and analyzed by the feature selection algorithms. The divergence, histogram analysis, and discriminant analysis approaches are intercompared for their effectiveness in feature selection. The fuzzy expert system method is used not only to determine the effectiveness of each approach in classifying polar scenes, but also to further reduce the features to a more optimal set. For each selection method, features are ranked from best to worst, and the best half of the features are selected. Rules using these selected features are then defined. The results of running the fuzzy expert system with these rules show that the divergence method produces the best feature set: not only does it yield the highest classification accuracy, it also has the lowest computational requirements. A reduction of the feature set produced by the divergence method using the fuzzy expert system results in an overall classification accuracy of over 95%. However, this increase in accuracy comes at a high computational cost.
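A minimal sketch of divergence-based feature ranking for two classes, using the Gaussian form of the divergence criterion (the study's multi-class formulation may differ):

```python
# Sketch: rank features by the symmetric divergence between per-class
# Gaussian fits, then keep the best half, as in the feature selection
# step described above.
import numpy as np

def divergence(a, b):
    ma, va = a.mean(), a.var() + 1e-12
    mb, vb = b.mean(), b.var() + 1e-12
    return 0.5 * (va / vb + vb / va - 2) + 0.5 * (ma - mb) ** 2 * (1/va + 1/vb)

rng = np.random.default_rng(8)
X0 = rng.normal(0, 1, size=(200, 67))           # class A, 67 features
X1 = rng.normal(0.5, 1, size=(200, 67))         # class B

scores = np.array([divergence(X0[:, j], X1[:, j]) for j in range(67)])
best_half = np.argsort(scores)[::-1][:33]       # keep the best half
print(best_half[:5])
```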
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, approaches to feature selection or feature dimensionality reduction should be considered to improve the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different classifier structures. They are evaluated by comparison with baseline methods using sparse representation of features or no feature selection. The statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated on the test patterns in each approach, demonstrates some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.
Chen, Xi; Kopsaftopoulos, Fotis; Wu, Qi; Ren, He; Chang, Fu-Kuo
2018-04-29
In this work, a data-driven approach for identifying the flight state of a self-sensing wing structure with an embedded multi-functional sensing network is proposed. The flight state is characterized by the structural vibration signals recorded from a series of wind tunnel experiments under varying angles of attack and airspeeds. A large feature pool is created by extracting potential features from the signals covering the time domain, the frequency domain as well as the information domain. Special emphasis is given to feature selection in which a novel filter method is developed based on the combination of a modified distance evaluation algorithm and a variance inflation factor. Machine learning algorithms are then employed to establish the mapping relationship from the feature space to the practical state space. Results from two case studies demonstrate the high identification accuracy and the effectiveness of the model complexity reduction via the proposed method, thus providing new perspectives of self-awareness towards the next generation of intelligent air vehicles.
Lysozyme activity and nitroblue-tetrazolium reduction in leukaemic cells
Catovsky, D.; Galton, D. A. G.
1973-01-01
The cytochemical methods for lysozyme and nitroblue-tetrazolium reduction have been used to study the blast cells of acute myeloid leukaemia. Both proved useful in characterizing the cases with predominant monocytic differentiation. The demonstration of lysozyme activity helped to define two main groups: (a) with predominantly lysozyme-negative cells (myeloblastic-promyelocytic), and (b) with considerable numbers of positive cells (monoblastic-monocytic). In addition this test was also of value in the differentiation of other leukaemic disorders. Reduction of nitroblue-tetrazolium was also a feature of monocytic differentiation. The combination of these two methods with those for myeloperoxidase and non-specific esterase activity contributes to the cytological characterization of acute myeloid leukaemia. PMID:4511938
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the possibility of OCT in detecting such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that distinguish healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing. In this paper, an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images was used. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field calculated by the local gradient methodology to increase the quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Considering the fact that malignant tissue grows anisotropically, some principal grooves may be observed on dermoscopic images, which suggests the possible existence of principal directions on OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy patients. The problem of distinguishing melanoma from nevi is addressed in this work owing to the large quantity of experimental data (143 OCT images, including tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select the regions of interest automatically.
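Two of the texture descriptors can be sketched as follows, assuming scikit-image and NumPy (the speckle filtering and complex directional field steps are omitted):

```python
# Sketch: GLCM-based Haralick statistics via scikit-image, and a
# box-counting estimate of fractal dimension on a binarized patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(9)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in B-scan patch

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi/2], levels=256)
haralick = [graycoprops(glcm, p).mean()
            for p in ("contrast", "correlation", "energy", "homogeneity")]

def box_counting_dim(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        crop = binary[:binary.shape[0]//s*s, :binary.shape[1]//s*s]
        boxes = crop.reshape(-1, s, crop.shape[1]//s, s).any(axis=(1, 3))
        counts.append(boxes.sum())               # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1/np.array(sizes)), np.log(counts), 1)
    return slope

print(haralick, box_counting_dim(patch > 128))
```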
A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.
Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho
2014-10-01
Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These feature selection methods have had one shortcoming thus far: they only consider the problem where the feature-to-class relationship is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such miRNAs are ranked low by traditional feature selection methods and are usually removed. Given the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct cancer classifiers. Then, we chose the Chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs for cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy than using only the high-ranking miRNAs from traditional feature selection methods. Our results demonstrate that the m:n feature subset reveals the positive effect of low-ranking miRNAs in cancer classification.
Low-Dimensional Feature Representation for Instrument Identification
NASA Astrophysics Data System (ADS)
Ihara, Mizuki; Maeda, Shin-Ichi; Ikeda, Kazushi; Ishii, Shin
For monophonic music instrument identification, various feature extraction and selection methods have been proposed. One of the issues in instrument identification is that the same spectrum is not always observed even for the same instrument, due to differences in recording conditions. Therefore, it is important to find non-redundant, instrument-specific features that maintain the information essential for high-quality instrument identification, in order to apply them to various instrumental music analyses. As such a dimensionality reduction method, the authors propose the use of linear projection methods: local Fisher discriminant analysis (LFDA) and LFDA combined with principal component analysis (PCA). After experimentally confirming that raw power spectra are in fact good for instrument classification, the authors reduced the feature dimensionality by LFDA or by PCA followed by LFDA (PCA-LFDA). The reduced features achieved reasonably high identification performance, comparable to or higher than that of the raw power spectra and that achieved by other existing studies. These results demonstrate that LFDA and PCA-LFDA can successfully extract low-dimensional instrument features that maintain the characteristic information of the instruments.
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well-tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least-squares cross-validation for kernel density estimation. We propose a new prediction model with a well-tuned KPCA and a Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model on 9 case studies. Then, we compare its performance (in terms of test-set Area Under the ROC Curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM), and the Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques that apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts the classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
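A hedged sketch of the overall pipeline shape, assuming scikit-learn (grid-searched gamma stands in for the paper's data-driven bandwidth criterion, and SVC for the LS-SVM):

```python
# Sketch: tune the RBF bandwidth of a Kernel PCA preprocessing step by
# cross-validated downstream AUC, then classify the reduced data.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(10)
X = rng.normal(size=(120, 500))                 # few samples, many genes
y = rng.integers(0, 2, size=120)

pipe = Pipeline([("kpca", KernelPCA(n_components=10, kernel="rbf")),
                 ("svm", SVC(probability=True))])
grid = GridSearchCV(pipe, {"kpca__gamma": [1e-4, 1e-3, 1e-2]},
                    scoring="roc_auc", cv=3).fit(X, y)
print(grid.best_params_, grid.best_score_)
```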
Emotion computing using Word Mover's Distance features based on Ren_CECps.
Ren, Fuji; Liu, Ning
2018-01-01
In this paper, we propose an emotion separated method (SeTF·IDF) to assign the emotion labels of sentences with different values, which gives a better visual effect compared with values represented by TF·IDF in the visualization of the multi-label Chinese emotional corpus Ren_CECps. Inspired by the substantial improvement in the visualization map produced by the changed distances among sentences, we are, to our knowledge, the first group to utilize the Word Mover's Distance (WMD) algorithm as a form of feature representation in Chinese text emotion classification. Our experiments show that in both the 80%/20% and 50%/50% train/test splits of Ren_CECps, WMD features obtain the best F1 scores and show a greater improvement than feature vectors of the same dimension obtained by the dimension-reduced TF·IDF method. Comparative experiments on an English corpus also show the effectiveness of WMD features in the cross-language field.
Effective Feature Selection for Classification of Promoter Sequences.
K, Kouser; P G, Lavanya; Rangarajan, Lalitha; K, Acharya Kshitish
2016-01-01
Exploring novel computational methods in making sense of biological data has not only been a necessity, but also productive. A part of this trend is the search for more efficient in silico methods/tools for analysis of promoters, which are parts of DNA sequences that are involved in regulation of expression of genes into other functional molecules. Promoter regions vary greatly in their function based on the sequence of nucleotides and the arrangement of protein-binding short-regions called motifs. In fact, the regulatory nature of the promoters seems to be largely driven by the selective presence and/or the arrangement of these motifs. Here, we explore computational classification of promoter sequences based on the pattern of motif distributions, as such classification can pave a new way of functional analysis of promoters and to discover the functionally crucial motifs. We make use of Position Specific Motif Matrix (PSMM) features for exploring the possibility of accurately classifying promoter sequences using some of the popular classification techniques. The classification results on the complete feature set are low, perhaps due to the huge number of features. We propose two ways of reducing features. Our test results show improvement in the classification output after the reduction of features. The results also show that decision trees outperform SVM (Support Vector Machine), KNN (K Nearest Neighbor) and ensemble classifier LibD3C, particularly with reduced features. The proposed feature selection methods outperform some of the popular feature transformation methods such as PCA and SVD. Also, the methods proposed are as accurate as MRMR (feature selection method) but much faster than MRMR. Such methods could be useful to categorize new promoters and explore regulatory mechanisms of gene expressions in complex eukaryotic species.
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
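As a toy illustration (not MCNP or PARTISN code), weight-window lower bounds can be formed inversely proportional to a deterministic adjoint flux; the constants below are arbitrary:

```python
# Sketch: given an adjoint flux on a mesh, form mesh-based weight-window
# lower bounds inversely proportional to the adjoint, normalized so the
# source cell sits at an assumed survival weight of 0.5.
import numpy as np

adjoint_flux = np.array([5.0, 1.0, 0.2, 0.05, 0.01])  # toy 1-D mesh
source_cell = 0

ww_lower = 0.5 * adjoint_flux[source_cell] / adjoint_flux
print(ww_lower)   # deep cells get low bounds -> particles are split there
```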
Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadros, William Roshan; Owen, Steven James
2010-04-01
We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user-specified threshold value then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and those of the original CAD model is maintained in order to decode the attributes and boundary conditions applied to the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.
Woldegebriel, Michael; Derks, Eduard
2017-01-17
In this work, a novel probabilistic untargeted feature detection algorithm for liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS) using an artificial neural network (ANN) is presented. The feature detection process is approached as a pattern recognition problem, and thus an ANN was utilized as an efficient feature recognition tool. Unlike most existing feature detection algorithms, with this approach any suspected chromatographic profile (i.e., shape of a peak) can easily be incorporated by training the network, avoiding the need to perform computationally expensive regression with specific mathematical models. In addition, with this method we have shown that the high-resolution raw data can be fully utilized without applying any arbitrary thresholds or data reduction, thereby improving the sensitivity of the method for compound identification purposes. Furthermore, as opposed to existing deterministic (binary) approaches, this method estimates the probability of a feature being present or absent at a given point of interest, giving all data points a chance to be propagated down the data analysis pipeline, weighted by their probability. The algorithm was tested with data sets generated from spiked samples in forensic and food safety contexts and has shown promising results by detecting features for all compounds in a computationally reasonable time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu
Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors' ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different families of dimension reduction methods are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral data set is used. Experimental results show that the proposed approach using the GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.
Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant
2013-01-01
Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.
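One plausible way to realize the idea, approximating the map between the original feature space and the kernel PCA embedding and scoring each variable by its contribution, is sketched below. This is an illustrative linear approximation in the spirit of the abstract, not the authors' VINK algorithm; the data and parameters are assumed.

```python
# Illustrative sketch: approximate the feature-to-KPCA-embedding map linearly
# and rank variables by coefficient magnitude. Not the authors' exact VINK.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.random((150, 40))                    # assumed histomorphometric features

kpca = KernelPCA(n_components=5, kernel="rbf")
Z = kpca.fit_transform(X)                    # nonlinear embedding

# Least-squares approximation Z ~ Xc @ B, with centered features Xc.
Xc = X - X.mean(axis=0)
B, *_ = np.linalg.lstsq(Xc, Z, rcond=None)

importance = np.linalg.norm(B, axis=1)       # one score per original variable
top = np.argsort(importance)[::-1][:10]
print("ten most important variables:", top)
```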
Wavelet median denoising of ultrasound images
NASA Astrophysics Data System (ADS)
Macey, Katherine E.; Page, Wyatt H.
2002-05-01
Ultrasound images are contaminated with both additive and multiplicative noise, which is modeled by Gaussian and speckle noise respectively. Distinguishing small features such as fallopian tubes in the female genital tract in the noisy environment is problematic. A new method for noise reduction, Wavelet Median Denoising, is presented. Wavelet Median Denoising consists of performing a standard noise reduction technique, median filtering, in the wavelet domain. The new method is tested on 126 images, comprised of 9 original images each with 14 levels of Gaussian or speckle noise. Results for both separable and non-separable wavelets are evaluated, relative to soft-thresholding in the wavelet domain, using the signal-to-noise ratio and subjective assessment. The performance of Wavelet Median Denoising is comparable to that of soft-thresholding. Both methods are more successful in removing Gaussian noise than speckle noise. Wavelet Median Denoising outperforms soft-thresholding for a larger number of cases of speckle noise reduction than of Gaussian noise reduction. Noise reduction is more successful using non-separable wavelets than separable wavelets. When both methods are applied to ultrasound images obtained from a phantom of the female genital tract a small improvement is seen; however, a substantial improvement is required prior to clinical use.
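The method's core step, median filtering applied to the wavelet detail coefficients rather than to the image itself, can be sketched as below; the wavelet, decomposition level, and kernel size are assumed rather than the authors' exact configuration.

```python
# Sketch of wavelet median denoising: decompose, median-filter the detail
# subbands, reconstruct. Wavelet, level, and kernel size are assumed here.
import numpy as np
import pywt
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
image = rng.random((128, 128))  # stand-in for a noisy ultrasound image

coeffs = pywt.wavedec2(image, wavelet="db2", level=3)
approx, details = coeffs[0], coeffs[1:]

# Median-filter each detail subband; leave the approximation untouched.
filtered = [tuple(median_filter(band, size=3) for band in level)
            for level in details]

denoised = pywt.waverec2([approx] + filtered, wavelet="db2")
print(denoised.shape)
```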
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis
2016-11-01
Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration, and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
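The DEIM building block underlying POD-DEIM, greedy selection of interpolation indices from a POD basis of nonlinear-term snapshots, can be sketched as follows. This is the standard algorithm under assumed inputs; the nested multi-basis extension of the article is not reproduced.

```python
# Sketch of the classic DEIM index-selection algorithm: given an orthonormal
# basis U (columns are POD modes of nonlinear-term snapshots), greedily pick
# interpolation indices. The nested/multi-basis extension is not shown.
import numpy as np

def deim_indices(U):
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate the next mode at the already-selected indices ...
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        # ... and pick the location of the largest residual.
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

rng = np.random.default_rng(0)
snapshots = rng.random((500, 60))            # assumed nonlinear-term snapshots
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
print(deim_indices(U[:, :10]))
```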
NASA Astrophysics Data System (ADS)
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
Houshyarifar, Vahid; Chehel Amirani, Mehdi
2016-08-12
In this paper we present a method to predict Sudden Cardiac Arrest (SCA) using higher order spectral (HOS) and linear time-domain features extracted from the heart rate variability (HRV) signal. Predicting the occurrence of SCA is important in order to reduce the probability of Sudden Cardiac Death (SCD). This work attempts to predict SCA five minutes before its onset. The method consists of four steps: pre-processing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and the HRV signal is extracted. In the second step, bispectrum features of the HRV signal and time-domain features are obtained: six features are extracted from the bispectrum and two from the time domain. In the next step, these features are reduced to one feature by the linear discriminant analysis (LDA) technique. Finally, KNN and support vector machine-based classifiers are used to classify the HRV signals. We used two databases: the MIT/BIH Sudden Cardiac Death (SCD) Database and the PhysioBank Normal Sinus Rhythm (NSR) Database. In this work we achieved prediction of SCD occurrence six minutes before SCA with an accuracy over 91%.
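The last two stages, reducing the eight HRV-derived features to a single LDA component and classifying it, can be sketched with standard tools as below. The feature matrix is an assumed stand-in; QRS detection and bispectrum estimation are not shown.

```python
# Sketch of the reduction/classification stages: 8 HRV-derived features
# -> 1 LDA component -> SVM. Upstream QRS/bispectrum steps are omitted.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 8))            # assumed: 6 bispectrum + 2 time-domain features
y = rng.integers(0, 2, size=120)    # 1 = pre-SCA segment, 0 = normal sinus rhythm

model = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```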
Tahir, Fahima; Fahiem, Muhammad Abuzar
2014-01-01
The quality of pharmaceutical products plays an important role in the pharmaceutical industry as well as in our lives, and the usage of defective tablets can be harmful for patients. In this research we propose a nondestructive method to identify defective and nondefective tablets using their surface morphology. Three different environmental factors, temperature, humidity, and moisture, are analyzed to evaluate the performance of the proposed method. Multiple textural features are extracted from the surfaces of the defective and nondefective tablets: gray level co-occurrence matrix, run length matrix, histogram, autoregressive model, and HAAR wavelet features, for a total of 281 textural features per image. We performed an analysis on all 281 features, the top 15 features, and the top 2 features; the top 15 features are selected using three different feature reduction techniques: chi-square, gain ratio, and relief-F. We used three different classifiers, support vector machine, K-nearest neighbors, and naïve Bayes, to calculate the accuracies of the proposed method in two experiments, namely the leave-one-out cross-validation technique and train/test models. We tested each classifier against all selected features and then compared their results. The experiments showed that in most cases SVM performed better than the other two classifiers.
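The selection-and-evaluation loop described above (rank the 281 texture features, keep the top 15, evaluate with leave-one-out cross-validation) can be sketched as below; the data shapes are assumed stand-ins for the authors' tablet images.

```python
# Sketch: chi-square top-15 feature selection with leave-one-out evaluation.
# Feature matrix and labels are assumed; chi2 requires non-negative features.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 281))            # 281 assumed texture features per tablet image
y = rng.integers(0, 2, size=60)      # 1 = defective, 0 = nondefective

model = make_pipeline(SelectKBest(chi2, k=15), SVC(kernel="linear"))
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print("LOOCV accuracy:", acc)
```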
Chen, Xi; Wu, Qi; Ren, He; Chang, Fu-Kuo
2018-01-01
In this work, a data-driven approach for identifying the flight state of a self-sensing wing structure with an embedded multi-functional sensing network is proposed. The flight state is characterized by the structural vibration signals recorded from a series of wind tunnel experiments under varying angles of attack and airspeeds. A large feature pool is created by extracting potential features from the signals covering the time domain, the frequency domain as well as the information domain. Special emphasis is given to feature selection in which a novel filter method is developed based on the combination of a modified distance evaluation algorithm and a variance inflation factor. Machine learning algorithms are then employed to establish the mapping relationship from the feature space to the practical state space. Results from two case studies demonstrate the high identification accuracy and the effectiveness of the model complexity reduction via the proposed method, thus providing new perspectives of self-awareness towards the next generation of intelligent air vehicles. PMID:29710832
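The variance-inflation-factor half of the proposed filter can be sketched as below: each candidate feature is regressed on the others, and features with VIF = 1/(1 - R^2) above a cutoff are treated as redundant. The modified distance evaluation half and all data are assumed away in this illustration.

```python
# Sketch of variance-inflation-factor screening: large VIF values flag
# features that are nearly linear combinations of the others.
import numpy as np
from sklearn.linear_model import LinearRegression

def vif(X):
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
        out.append(1.0 / max(1.0 - r2, 1e-12))
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.random((200, 12))                          # assumed vibration-signal features
X[:, 0] = X[:, 1] * 2 + rng.normal(0, 0.01, 200)   # deliberately collinear pair
print(np.round(vif(X), 1))                         # large values flag redundancy
```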
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space, and the orthogonal complement of whose null space, are chosen among the ranges of generalized controllability and observability matrices. The reduced order models match various combinations (chosen by the designer) of four types of parameters of the full order system associated with (1) low frequency response, (2) high frequency response, (3) low frequency power spectral density, and (4) high frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, has extreme flexibility to embrace combinations of existing methods, and offers some new features.
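A minimal sketch of model reduction by oblique (Petrov-Galerkin) projection follows: with a trial basis V and a test basis W, the reduced matrices are formed as shown. The Krylov-style bases here are generic stand-ins; the paper's specific parameter-matching choices of V and W are not reproduced.

```python
# Sketch of oblique-projection model reduction (Petrov-Galerkin):
# x ~ V xr, with the residual forced orthogonal to range(W).
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # assumed stable full model
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Generic stand-in bases built from controllability/observability-style blocks.
V = np.column_stack([np.linalg.matrix_power(A, k) @ B for k in range(r)])
W = np.column_stack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(r)])

M = np.linalg.inv(W.T @ V)           # makes P = V @ M @ W.T an oblique projector
Ar = M @ W.T @ A @ V
Br = M @ W.T @ B
Cr = C @ V
print(Ar.shape, Br.shape, Cr.shape)  # (5, 5) (5, 1) (1, 5)
```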
[Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].
Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao
2014-05-01
Hyperspectral data is characterized by the combination of image and spectrum, and given its large data volume, dimension reduction is the main research direction; band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection, and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.
Voltage-induced reduction of graphene oxide
NASA Astrophysics Data System (ADS)
Faucett, Austin C.
Graphene Oxide (GO) is being widely researched as a precursor for the mass production of graphene, and as a versatile material in its own right for flexible electronics, chemical sensors, and energy harvesting applications. Reduction of GO, an electrically insulating material, into reduced graphene oxide (rGO) restores electrical conductivity via removal of oxygen-containing functional groups. Here, a reduction method using an applied electrical bias, known as voltage-induced reduction, is explored. Voltage-induced reduction can be performed under ambient conditions and avoids the use of hazardous chemicals or high temperatures common with standard methods, but little is known about the reduction mechanisms and the quality of rGO produced with this method. This work performs extensive structural and electrical characterization of voltage-reduced GO (V-rGO) and shows that it is competitive with standard methods. Beyond its potential use as a facile and eco-friendly processing approach, V-rGO reduction also offers record high-resolution patterning capabilities. In this work, the spatial resolution limits of voltage-induced reduction, performed using a conductive atomic force microscope probe, are explored. It is shown that arbitrary V-rGO conductive features can be patterned into insulating GO with nanoscale resolution. The localization of voltage-induced reduction to length scales < 10 nm allows studies of reduction reaction kinetics, using electrical current obtained in-situ, with statistical robustness. Methods for patterning V-rGO nanoribbons are then developed. After presenting sub-10nm patterning of V-rGO nanoribbons in GO single sheets and films, the performance of V-rGO nanoribbon field effect transistors (FETs) are demonstrated. Preliminary measurements show an increase in electrical current on/off ratios as compared to large-area rGO FETs, indicating transport gap modulation that is possibly due to quantum confinement effects.
Comparison of spike-sorting algorithms for future hardware implementation.
Gibson, Sarah; Judy, Jack W; Markovic, Dejan
2008-01-01
Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust over noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
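The chosen detector, the nonlinear energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1] thresholded at a multiple of its mean, can be sketched as below; the scaling constant and the test signal are assumed, not those of the cited evaluation.

```python
# Sketch of spike detection with the nonlinear energy operator (NEO):
# psi[n] = x[n]^2 - x[n-1]*x[n+1], threshold = C * mean(psi).
import numpy as np

def neo_detect(x, c=8.0):
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    threshold = c * psi.mean()            # assumed scaling constant
    return np.flatnonzero(psi > threshold), psi

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2000)
x[500], x[1500] = 12.0, -11.0             # two artificial "spikes"
idx, _ = neo_detect(x)
print("samples above threshold:", idx)
```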
New Cogging Torque Reduction Methods for Permanent Magnet Machine
NASA Astrophysics Data System (ADS)
Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.
2017-08-01
Permanent magnet motors (PMs), especially the permanent magnet synchronous motor (PMSM), are expanding into industrial application systems and are widely used in various applications. The key features of this machine include high power and torque density, extended speed range, high efficiency, better dynamic performance, and good flux-weakening capability. Nevertheless, high cogging torque, which may cause noise and vibration, is one of the threats to machine performance. Therefore, with the aid of 3-D finite element analysis (FEA) and simulation using JMAG Designer, this paper proposes new methods for cogging torque reduction. Based on the simulation, combining skewing with the radial pole pairing method and skewing with the axial pole pairing method reduces the cogging torque effect by up to 71.86% and 65.69%, respectively.
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.
Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa
2017-03-01
Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to obtain high efficiency. The goal of this paper is to select the 3D image features of margin sharpness and texture that can be relevant in the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary, and second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision than all 48 extracted features on similar nodule retrieval. A feature space dimensionality reduction of 83% yielded higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
Real-time image annotation by manifold-based biased Fisher discriminant analysis
NASA Astrophysics Data System (ADS)
Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming
2008-01-01
Automatic linguistic annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: (1) the Small Sample Size (3S) problem in keyword classifier/model learning; and (2) most annotation algorithms cannot be extended to real-time online usage due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, co-training based manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from all existing annotation methods, MBFDA views image annotation from a novel perspective of Eigen semantic feature (corresponding to keywords) selection. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.
Swartz, R. Andrew
2013-01-01
This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate. PMID:24191136
Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals
Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu
2012-01-01
Bearings are not only the most important element but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction technology, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has the advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signals data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017
Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition
NASA Astrophysics Data System (ADS)
Hafizhelmi Kamaru Zaman, Fadhlan
2018-03-01
Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally-applied Local Orthogonal Least Squares (LOLS) model that can be used as initial feature extraction before the application of LLE. By constructing least squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparison against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNN), also reveals the superiority of the proposed method under the SSPP constraint.
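The LLE stage itself is available off the shelf; a minimal sketch of mapping (assumed) image feature vectors to a low-dimensional global embedding is shown below. The LOLS pre-extraction step is not reproduced here.

```python
# Sketch: Locally Linear Embedding of face-image feature vectors.
# Input features are assumed; the LOLS pre-processing stage is omitted.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.random((300, 1024))    # e.g., 300 images, 32x32 pixels flattened

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=10)
Z = lle.fit_transform(X)       # global low-dimensional coordinates
print(Z.shape)                 # (300, 10)
```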
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2001-01-01
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features, randomly selected subsets of features, or features created using principal components analysis, on a wide range of domains.
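A hedged sketch of the input-decimation idea: for each class, rank features by the absolute correlation between the feature and that class's indicator, train one base classifier per subset, and combine the outputs. The data, subset size, and combiner are assumptions of this illustration, not the paper's exact procedure.

```python
# Sketch of input decimation: per-class feature subsets chosen by correlation
# with the class indicator; one base classifier per subset; averaged outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((400, 100))
y = rng.integers(0, 3, size=400)
n_keep = 20                                     # assumed subset size

models, subsets = [], []
for c in np.unique(y):
    indicator = (y == c).astype(float)
    corr = np.abs([np.corrcoef(X[:, j], indicator)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(corr)[::-1][:n_keep]      # most class-discriminative features
    subsets.append(keep)
    models.append(LogisticRegression(max_iter=1000).fit(X[:, keep], y))

# Combine the decimated base classifiers by averaging their probabilities.
proba = np.mean([m.predict_proba(X[:, s]) for m, s in zip(models, subsets)], axis=0)
print("ensemble accuracy:", (proba.argmax(axis=1) == y).mean())
```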
NASA Technical Reports Server (NTRS)
Bendura, R. J.; Renfroe, P. G.
1974-01-01
A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), the location of the flight vehicle with respect to the earth, and the camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in FORTRAN 4 language for the Control Data 6000 series digital computer.
Workshop on Jet Exhaust Noise Reduction for Tactical Aircraft - NASA Perspective
NASA Technical Reports Server (NTRS)
Huff, Dennis L.; Henderson, Brenda S.
2007-01-01
Jet noise from supersonic, high performance aircraft is a significant problem for takeoff and landing operations near air bases and aircraft carriers. As newer aircraft with higher thrust and performance are introduced, the noise tends to increase due to higher jet exhaust velocities. Jet noise has been a subject of research for over 55 years. Commercial subsonic aircraft benefit from changes to the engine cycle that reduce the exhaust velocities and result in significant noise reduction. Most of the research programs over the past few decades have concentrated on commercial aircraft. Progress has been made by introducing new engines with design features that reduce the noise. NASA has recently started a new program called "Fundamental Aeronautics" where three projects (subsonic fixed wing, subsonic rotary wing, and supersonics) address aircraft noise. For the supersonics project, a primary goal is to understand the underlying physics associated with jet noise so that improved noise prediction tools and noise reduction methods can be developed for a wide range of applications. Highlights from the supersonics project are presented including prediction methods for broadband shock noise, flow measurement methods, and noise reduction methods. Realistic expectations are presented based on past history that indicates significant jet noise reduction cannot be achieved without major changes to the engine cycle. NASA's past experience shows a few EPNdB (effective perceived noise level in decibels) can be achieved using low noise design features such as chevron nozzles. Minimal thrust loss can be expected with these nozzles (< 0.5%) and they may be retrofitted on existing engines. In the long term, it is desirable to use variable cycle engines that can be optimized for lower jet noise during takeoff operations and higher thrust for operational performance. It is also suggested that noise experts be included early in the design process for engine nozzle systems to participate in decisions that may impact the jet noise.
Automatic Speech Acquisition and Recognition for Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Ye, Sherry
2015-01-01
NASA has a widely recognized but unmet need for novel human-machine interface technologies that can facilitate communication during astronaut extravehicular activities (EVAs), when loud noises and strong reverberations inside spacesuits make communication challenging. WeVoice, Inc., has developed a multichannel signal-processing method for speech acquisition in noisy and reverberant environments that enables automatic speech recognition (ASR) technology inside spacesuits. The technology reduces noise by exploiting differences between the statistical nature of signals (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, ASR accuracy can be improved to the level at which crewmembers will find the speech interface useful. System components and features include beam forming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, and ASR decoding. Arithmetic complexity models were developed and will help designers of real-time ASR systems select proper tasks when confronted with constraints in computational resources. In Phase I of the project, WeVoice validated the technology. The company further refined the technology in Phase II and developed a prototype for testing and use by suited astronauts.
NASA Astrophysics Data System (ADS)
García Plaza, E.; Núñez López, P. J.
2018-01-01
The wavelet packet transform (WPT) method decomposes a time signal into several independent time-frequency signals called packets. This enables the temporal location of transient events occurring during the monitoring of cutting processes, which is advantageous in condition monitoring and fault diagnosis. This paper proposes the monitoring of surface roughness using a single low-cost sensor that is easily implemented in numerically controlled machine tools in order to make on-line decisions on workpiece surface finish quality. Packet feature extraction from vibration signals was applied to correlate the sensor signals with measured surface roughness. For the successful application of the WPT method, the mother wavelet, the packet decomposition level, and appropriate packet selection methods should be considered, but these are poorly understood aspects in the literature. In this novel contribution, forty mother wavelets, the optimal decomposition level, and packet reduction methods were analysed, as well as the effective frequency range providing the best packet feature extraction for monitoring surface finish. The results show that the mother wavelet biorthogonal 4.4 at decomposition level L3, with the fusion of the orthogonal vibration components (ax + ay + az), was the best option for correlating the vibration signal with surface roughness. The best packets were found in the medium-high frequency DDA (6250-9375 Hz) and high frequency ADA (9375-12500 Hz) ranges, and the feed acceleration component ay was the primary source of information. The packet reduction methods forfeited packets with features relevant to the signal, leading to poor results for the prediction of surface roughness. WPT is a robust vibration signal processing method for the monitoring of surface roughness using a single sensor without other information sources; satisfactory results were obtained at low computational cost in comparison with other processing methods.
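The packet feature extraction the study builds on can be sketched with a standard wavelet library: decompose the vibration signal to level 3 with the biorthogonal 4.4 mother wavelet and take the energy of each terminal packet as a feature. The signal and sampling rate below are assumed stand-ins.

```python
# Sketch: level-3 wavelet packet decomposition (bior4.4) of a vibration
# signal; packet energies serve as surface-roughness features.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 25000                                   # assumed sampling rate, Hz
t = np.arange(4096) / fs
ay = np.sin(2 * np.pi * 7000 * t) + 0.3 * rng.standard_normal(t.size)  # stand-in feed acceleration

wp = pywt.WaveletPacket(data=ay, wavelet="bior4.4", mode="symmetric", maxlevel=3)
nodes = wp.get_level(3, order="freq")        # 8 packets, ordered by frequency band
energies = {node.path: float(np.sum(node.data ** 2)) for node in nodes}
for path, e in energies.items():
    print(path, round(e, 2))
```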
Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Joerg
2013-01-01
Direct observation of vocal fold vibration is indispensable for a clinical diagnosis of voice disorders. Among current imaging techniques, high-speed videoendoscopy constitutes a state-of-the-art method, capturing several thousand frames per second of the vocal folds during phonation. Recently, a method was presented for extracting descriptive features from phonovibrograms, two-dimensional images containing the spatio-temporal pattern of vocal fold dynamics. The derived features are closely related to a clinically established protocol for the functional assessment of pathologic voices. The discriminative power of these features for different pathologic findings and configurations had not been assessed yet. In the current study, a collective of 220 subjects is considered for two- and multi-class problems of healthy and pathologic findings. The performance of the proposed feature set is compared to conventional feature reduction routines and was found to clearly outperform them. As such, the proposed procedure shows great potential for the diagnosis of vocal fold disorders.
Emotion computing using Word Mover’s Distance features based on Ren_CECps
2018-01-01
In this paper, we propose an emotion separation method (SeTF·IDF) to assign the emotion labels of sentences with different values, which has a better visual effect than values represented by TF·IDF in the visualization of the multi-label Chinese emotional corpus Ren_CECps. Inspired by the enormous improvement in the visualization map produced by the changed distances among the sentences, we are the first group to utilize the Word Mover's Distance (WMD) algorithm as a form of feature representation in Chinese text emotion classification. Our experiments show that in both the 80%-training/20%-testing and 50%-training/50%-testing experiments on Ren_CECps, WMD features obtain the best F1 scores and show a greater increase than same-dimension feature vectors obtained by the dimension-reduced TF·IDF method. Comparison experiments on an English corpus also show the efficiency of WMD features in the cross-language field. PMID:29624573
The Optical Gravitational Lensing Experiment
NASA Technical Reports Server (NTRS)
Udalski, A.; Szymanski, M.; Kaluzny, J.; Kubiak, M.; Mateo, Mario
1992-01-01
The technical features of the Optical Gravitational Lensing Experiment, which aims to detect a statistically significant number of microlensing events toward the Galactic bulge, are described. Clusters of galaxies observed during the 1992 season are listed and discussed, and the reduction methods are described. Future plans are addressed.
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
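The regression trick at the heart of SRDA can be sketched as below: encode each class as a response vector and solve a regularized least-squares problem for the projection, avoiding any dense eigen-decomposition. The responses, regularizer, and data are simplified stand-ins for the published algorithm.

```python
# Sketch of spectral-regression-style discriminant analysis: class-indicator
# responses fitted by ridge regression instead of eigen-decomposition.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 300, 200, 4                       # samples, bands, classes (assumed)
X = rng.random((n, d))
y = rng.integers(0, k, size=n)

# Centered class-indicator responses, one column per class.
Y = np.eye(k)[y] - np.eye(k)[y].mean(axis=0)

# Regularized least squares: (Xc^T Xc + alpha I) W = Xc^T Y.
alpha = 1.0
Xc = X - X.mean(axis=0)
W = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(d), Xc.T @ Y)

Z = Xc @ W                                  # embedded (reduced) features
print(Z.shape)                              # (300, 4)
```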
NASA Astrophysics Data System (ADS)
Ishida, Shigeki; Mori, Atsuo; Shinji, Masato
The main method of reducing the blasting noise that occurs in a tunnel under construction is to install a sound insulation door in the tunnel. However, a numerical analysis technique to accurately predict the transmission loss of the sound insulation door has not been established. In this study, we measured the blasting noise and the vibration of the sound insulation door in a tunnel during blasting and analyzed its acoustic characteristics. In addition, we reproduced the noise reduction effect of the sound insulation door by the statistical energy analysis method and confirmed that numerical simulation is possible by this procedure.
Comparative study of feature selection with ensemble learning using SOM variants
NASA Astrophysics Data System (ADS)
Filali, Ameni; Jlassi, Chiraz; Arous, Najet
2017-03-01
Ensemble learning has succeeded in improving stability and clustering accuracy, but its runtime prohibits scaling up to real-world applications. This study deals with the problem of selecting a subset of the most pertinent features for every cluster in a dataset. The proposed method is an extension of the Random Forests approach using self-organizing map (SOM) variants on unlabeled data that estimates the out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We then show that the internal estimates used to measure variable pertinence in Random Forests are also applicable to feature selection in unsupervised learning. This approach aims at dimensionality reduction, visualization, and cluster characterization at the same time. Hence, we provide empirical results on nineteen benchmark data sets indicating that RFS can lead to significant improvements in clustering accuracy over several state-of-the-art unsupervised methods, with a very limited subset of features. The approach shows promise for very broad domains.
A practical salient region feature based 3D multi-modality registration method for medical images
NASA Astrophysics Data System (ADS)
Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang
2006-03-01
We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt scale, translation and rotation invariance properties of those intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences and their sub-pixel accurate refinement with outliers elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
Vision-based localization of the center of mass of large space debris via statistical shape analysis
NASA Astrophysics Data System (ADS)
Biondi, G.; Mauro, S.; Pastorelli, S.
2017-08-01
The current overpopulation of artificial objects orbiting the Earth has increased the interest of the space agencies on planning missions for de-orbiting the largest inoperative satellites. Since this kind of operations involves the capture of the debris, the accurate knowledge of the position of their center of mass is a fundamental safety requirement. As ground observations are not sufficient to reach the required accuracy level, this information should be acquired in situ just before any contact between the chaser and the target. Some estimation methods in the literature rely on the usage of stereo cameras for tracking several features of the target surface. The actual positions of these features are estimated together with the location of the center of mass by state observers. The principal drawback of these methods is related to possible sudden disappearances of one or more features from the field of view of the cameras. An alternative method based on 3D Kinematic registration is presented in this paper. The method, which does not suffer of the mentioned drawback, considers a preliminary reduction of the inaccuracies in detecting features by the usage of statistical shape analysis.
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.
Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng
2018-01-01
In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently explore the complementary properties among different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited, and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
Spoof Detection for Finger-Vein Recognition System Using NIR Camera.
Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung
2017-10-01
Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
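The decision chain described here (deep features, PCA for dimensionality reduction, SVM for the live-versus-attack decision) can be sketched as below. The CNN feature vectors are assumed to be precomputed; the network itself is not shown.

```python
# Sketch of the PAD decision stage: CNN feature vectors -> PCA -> SVM.
# Features are assumed precomputed; the CNN itself is omitted.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
F = rng.random((500, 4096))          # assumed CNN features for finger-vein images
y = rng.integers(0, 2, size=500)     # 1 = live, 0 = presentation attack

F_tr, F_te, y_tr, y_te = train_test_split(F, y, test_size=0.3, random_state=0)
model = make_pipeline(PCA(n_components=100), SVC(kernel="rbf"))
model.fit(F_tr, y_tr)
print("test accuracy:", model.score(F_te, y_te))
```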
Adaptive sequential Bayesian classification using Page's test
NASA Astrophysics Data System (ADS)
Lynch, Robert S., Jr.; Willett, Peter K.
2002-03-01
In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
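Page's test reduces to a one-sided CUSUM of log-likelihood ratios: the statistic is reflected at zero and a declaration fires when it crosses an upper threshold set by the acceptable mean time between false alerts. Below is a generic numeric sketch with assumed Gaussian hypotheses, not the Mean-Field Bayesian Data Reduction Algorithm itself.

```python
# Sketch of Page's test: one-sided CUSUM of log-likelihood ratios with a
# zero lower reflecting barrier and an upper decision threshold.
import numpy as np
from scipy.stats import norm

def page_test(x, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0):
    """Return the sample index at which the CUSUM crosses h, or None."""
    s = 0.0
    for n, xn in enumerate(x):
        llr = norm.logpdf(xn, mu1, sigma) - norm.logpdf(xn, mu0, sigma)
        s = max(0.0, s + llr)        # lower log-threshold held at zero
        if s >= h:                   # upper threshold set by false-alert rate
            return n
    return None

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])  # change at n=200
print("declared at sample:", page_test(x))
```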
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large memory usage, and most attribute selection algorithms suffer from input dimension limits and information storage problems. These problems are eliminated by means of the developed feature reduction software, which uses a new modified selection mechanism with the addition of middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
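The roulette wheel selection mechanism the software supports can be sketched in a few lines: each candidate reduct is drawn with probability proportional to its fitness. The population and fitness values here are illustrative placeholders.

```python
# Sketch of roulette-wheel (fitness-proportionate) selection for a GA-based
# feature reduction loop. Fitness values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
population = [f"reduct_{i}" for i in range(6)]
fitness = np.array([0.91, 0.85, 0.60, 0.40, 0.30, 0.10])  # e.g., classification accuracy

probs = fitness / fitness.sum()            # selection probability per candidate
parents = rng.choice(population, size=4, replace=True, p=probs)
print("selected parents:", list(parents))
```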
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are addressed by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region of the population. The hybrid software is constructed to reduce the input attributes of systems with large numbers of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. Genetic-algorithm-based soft computing methods are also prone to locking onto local solutions, a problem that the developed software eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection offers advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172
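For illustration, here is a minimal sketch of the two operators named in the abstract, roulette-wheel selection and an order-based crossover in the spirit of linear order crossover; the permutation encoding, population size, and fitness values are hypothetical stand-ins rather than the paper's implementation.

```python
import random

def roulette_select(population, fitness):
    """Roulette-wheel selection: pick an individual with probability
    proportional to its (positive) fitness."""
    r = random.uniform(0, sum(fitness))
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return individual
    return population[-1]

def order_crossover(p1, p2):
    """Order-based recombination: keep a slice of p1 and fill the remaining
    positions with the missing genes in the order they appear in p2."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hold = p1[a:b]
    rest = [g for g in p2 if g not in hold]
    return rest[:a] + hold + rest[a:]

random.seed(0)
pop = [random.sample(range(12), 12) for _ in range(20)]  # 12 urological attributes
fit = [random.random() for _ in pop]                     # hypothetical fitness
child = order_crossover(roulette_select(pop, fit), roulette_select(pop, fit))
print(child)
```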
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Petrick, Nicholas; Chan, Heang-Ping; Paquerault, Sophie; Helvie, Mark A.; Hadjiiski, Lubomir M.
2001-07-01
We used the correspondence of detected structures on two views of the same breast for false-positive (FP) reduction in computerized detection of mammographic masses. For each initially detected object on one view, we considered all possible pairings with objects on the other view that fell within a radial band defined by the nipple-to-object distances. We designed a 'correspondence classifier' to classify these pairs as either the same mass (a TP-TP pair) or a mismatch (a TP-FP, FP-TP or FP-FP pair). For each pair, similarity measures of morphological and texture features were derived and used as input features in the correspondence classifier. Two-view mammograms from 94 cases were used as a preliminary data set. Initial detection provided 6.3 FPs/image at 96% sensitivity. Further FP reduction in single view resulted in 1.9 FPs/image at 80% sensitivity and 1.1 FPs/image at 70% sensitivity. By combining single-view detection with the correspondence classifier, detection accuracy improved to 1.5 FPs/image at 80% sensitivity and 0.7 FPs/image at 70% sensitivity. Our preliminary results indicate that the correspondence of geometric, morphological, and textural features of a mass on two different views provides valuable additional information for reducing FPs.
Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.
Yang, Yimin; Wu, Q M Jonathan
2016-11-01
The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification, and offers competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-node architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally work for only one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform supporting unsupervised/supervised and compressed/sparse representation learning; and 2) on ten image datasets and 16 classification datasets, the proposed ML-ELM with subnetwork nodes performs competitively with, or much better than, other conventional feature learning methods.
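For background, a minimal sketch of a basic single-hidden-layer ELM, not the paper's multilayer subnetwork-node architecture: input weights are drawn at random and fixed, and only the output weights are solved in closed form; all sizes and data are illustrative.

```python
import numpy as np

def elm_fit(X, T, n_hidden=200, reg=1e-3, seed=0):
    """Fit a single-hidden-layer ELM: random fixed input weights, closed-form
    (ridge-regularized least-squares) solution for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
T = np.eye(3)[rng.integers(0, 3, size=300)]      # one-hot class targets
W, b, beta = elm_fit(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```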
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks, and much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, the combination of network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network fires asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, from which simpler analytic descriptions are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines firing rate heterogeneity dynamics in various settings.
System Complexity Reduction via Feature Selection
ERIC Educational Resources Information Center
Deng, Houtao
2011-01-01
This dissertation transforms a set of system complexity reduction problems to feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree…
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1-regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data and therefore has the potential to reduce the dimensionality of ultrasound image features. However, in clinical practice, collecting labeled instances is usually expensive and time-consuming, while unlabeled or undetermined instances are relatively easy to acquire, which makes semi-supervised learning very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method that has been shown to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm and apply it to reduce the feature dimensionality of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
Assessing clutter reduction in parallel coordinates using image processing techniques
NASA Astrophysics Data System (ADS)
Alhamaydh, Heba; Alzoubi, Hussein; Almasaeid, Hisham
2018-01-01
Information visualization has emerged as an important research field for multidimensional data and correlation analysis in recent years. Parallel coordinates (PCs) are one of the popular techniques for visualizing high-dimensional data. A problem with the PC technique is that it suffers from crowding, a form of clutter that hides important data and obscures information. Earlier research has sought to reduce clutter without loss of data content. We introduce the use of image processing techniques as an approach for assessing the performance of clutter reduction techniques in PCs. We use histogram analysis as our first measure, where the mean feature of the color histograms of the possible alternative orderings of coordinates for the PC images is calculated and compared. The second measure is the contrast feature extracted from the texture of PC images based on gray-level co-occurrence matrices. The results show that the best PC image is the one with the minimal mean value of the color histogram feature and the maximal contrast value of the texture feature. In addition to its simplicity, the proposed assessment method has the advantage of objectively assessing alternative orderings of PC visualizations.
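A minimal sketch of the two measures, under the assumption that a rendered PC plot is available as a grayscale uint8 image (the paper scores color histograms; grayscale is a simplification here), using scikit-image's GLCM utilities (named `graycomatrix`/`graycoprops` in recent versions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def clutter_scores(img):
    """Score a grayscale PC rendering: histogram mean (lower = less clutter
    by the paper's criterion) and GLCM contrast (higher = preferred)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    hist_mean = (hist * np.arange(256)).sum() / hist.sum()
    glcm = graycomatrix(img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    return hist_mean, contrast

# Stand-in for one rendered axis ordering; repeat over all orderings and
# keep the one with minimal histogram mean and maximal contrast.
img = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(np.uint8)
print(clutter_scores(img))
```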
Reduction experiment of iron scale by adding waste plastics.
Zhang, Chongmin; Chen, Shuwen; Miao, Xincheng; Yuan, Hao
2009-01-01
The special features of waste plastics in China are that they are huge in total amount, varied in type, and dispersed in deposition. Therefore, it is necessary to explore new ways, suited to the Chinese situation, of disposing of waste plastics as metallurgical raw materials more effectively and flexibly. Owing to its high iron content and low impurity level, iron scale is an ideal raw material for producing pure iron powder. One method of producing pure iron powder is the Hoganas method, by which, after one or more reduction stages, iron scale can be reduced to pure iron powder. Combining the utilization of waste plastics with iron powder production, a series of reduction experiments was arranged and investigated, with the aim of exploiting both the thermal and chemical energy contained in waste plastics, improving the reducing conditions of iron scale, and thereby developing a new metallurgical route for disposing of waste plastics. The results show that, under these experimental conditions, the thermal decomposition of waste plastics increases the porosity of the reduction system. Moreover, better thermodynamic and kinetic conditions for the reduction of scale can be reached. As a result, the reduction rate is increased.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
Purpose: The development of high-dimensional imaging systems (e.g., volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high-dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200 ms and 600 ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracies of 0.86 mm and 0.89 mm for the 200 ms and 600 ms lookahead lengths, respectively, compared to 0.95 mm and 1.04 mm from the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low-dimensional manifold. The fixed-point iterative approach works well practically for the pre-image recovery. Our approach is particularly suitable for managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
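Conceptually, the pipeline can be sketched as below. scikit-learn's KernelPCA recovers pre-images via a learned ridge-regression inverse map rather than the paper's fixed-point iteration, so that step is a stand-in; the surface data and the trivial last-value predictor are also illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
surfaces = rng.normal(size=(200, 500))   # stand-in: vectorized surface states

# Kernel PCA to a low-dimensional feature manifold; the learned inverse map
# (ridge-based) substitutes for the paper's fixed-point pre-image iteration.
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
z = kpca.fit_transform(surfaces)         # 1-D feature trajectory

z_pred = z[-1:]                          # placeholder predictor in feature space
surface_pred = kpca.inverse_transform(z_pred)  # pre-image in the state space
print(surface_pred.shape)                # (1, 500)
```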
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a significant challenge to extract optimal features that improve classification while simultaneously decreasing feature dimensionality. Kernel Marginal Fisher Analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To directly extract nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.
The JCMT Gould Belt Survey: a quantitative comparison between SCUBA-2 data reduction methods
NASA Astrophysics Data System (ADS)
Mairs, S.; Johnstone, D.; Kirk, H.; Graves, S.; Buckle, J.; Beaulieu, S. F.; Berry, D. S.; Broekhoven-Fiene, H.; Currie, M. J.; Fich, M.; Hatchell, J.; Jenness, T.; Mottram, J. C.; Nutter, D.; Pattle, K.; Pineda, J. E.; Salji, C.; di Francesco, J.; Hogerheijde, M. R.; Ward-Thompson, D.; JCMT Gould Belt survey Team
2015-12-01
Performing ground-based submillimetre observations is a difficult task as the measurements are subject to absorption and emission from water vapour in the Earth's atmosphere and time variation in weather and instrument stability. Removing these features and other artefacts from the data is a vital process which affects the characteristics of the recovered astronomical structure we seek to study. In this paper, we explore two data reduction methods for data taken with the Submillimetre Common-User Bolometer Array-2 (SCUBA-2) at the James Clerk Maxwell Telescope (JCMT). The JCMT Legacy Reduction 1 (JCMT LR1) and The Gould Belt Legacy Survey Legacy Release 1 (GBS LR1) reduction both use the same software (STARLINK) but differ in their choice of data reduction parameters. We find that the JCMT LR1 reduction is suitable for determining whether or not compact emission is present in a given region and the GBS LR1 reduction is tuned in a robust way to uncover more extended emission, which better serves more in-depth physical analyses of star-forming regions. Using the GBS LR1 method, we find that compact sources are recovered well, even at a peak brightness of only three times the noise, whereas the reconstruction of larger objects requires much care when drawing boundaries around the expected astronomical signal in the data reduction process. Incorrect boundaries can lead to false structure identification or it can cause structure to be missed. In the JCMT LR1 reduction, the extent of the true structure of objects larger than a point source is never fully recovered.
Awais, Muhammad; Badruddin, Nasreen; Drieberg, Micheal
2017-08-31
Driver drowsiness is a major cause of fatal accidents, injury, and property damage, and has become an area of substantial research attention in recent years. The present study proposes a method to detect drowsiness in drivers which integrates features of electrocardiography (ECG) and electroencephalography (EEG) to improve detection performance. The study measures differences between the alert and drowsy states from physiological data collected from 22 healthy subjects in a driving simulator-based study. A monotonous driving environment is used to induce drowsiness in the participants. Various time and frequency domain features were extracted from the EEG, including time domain statistical descriptors, complexity measures, and power spectral measures. Features extracted from the ECG signal included heart rate (HR) and heart rate variability (HRV), including low frequency (LF), high frequency (HF) and the LF/HF ratio. Furthermore, a subjective sleepiness scale was also assessed to study its relationship with drowsiness. We used paired t-tests to select only statistically significant features (p < 0.05) that can differentiate between the alert and drowsy states effectively. Significant features of both modalities (EEG and ECG) are then combined to investigate the improvement in performance using a support vector machine (SVM) classifier. The other main contribution of this paper is the study of channel reduction and its impact on detection performance. The proposed method demonstrated that combining EEG and ECG improves the system's performance in discriminating between alert and drowsy states, compared with using either alone. Our channel reduction analysis revealed that an acceptable level of accuracy (80%) could be achieved by combining just two electrodes (one EEG and one ECG), indicating the feasibility of a system with improved wearability compared with existing systems involving many electrodes. Overall, our results demonstrate that the proposed method can be a viable solution for a practical driver drowsiness system that is both accurate and comfortable to wear. PMID:28858220
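A minimal sketch of the selection-and-fusion step, assuming hypothetical per-subject feature matrices for the alert and drowsy states: a paired t-test keeps only features with p < 0.05, and an SVM is trained on the fused set.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
alert = rng.normal(0.0, 1.0, size=(22, 30))   # per-subject features, alert
drowsy = rng.normal(0.8, 1.0, size=(22, 30))  # per-subject features, drowsy

_, p = ttest_rel(alert, drowsy, axis=0)       # paired test per feature
keep = p < 0.05                               # significant features only

X = np.vstack([alert[:, keep], drowsy[:, keep]])  # fused EEG+ECG feature set
y = np.array([0] * 22 + [1] * 22)                 # 0 = alert, 1 = drowsy
clf = SVC(kernel="rbf").fit(X, y)
print(keep.sum(), clf.score(X, y))
```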
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Padé approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Padé approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
Reduction of background noise induced by wind tunnel jet exit vanes
NASA Technical Reports Server (NTRS)
Martin, R. M.; Brooks, T. F.; Hoad, D. R.
1985-01-01
The NASA-Langley 4 x 7 m wind tunnel develops low frequency flow pulsations at certain velocity ranges during open throat mode operation, affecting the aerodynamics of the flow and degrading the resulting model test data. Triangular vanes attached to the trailing edge of flat steel rails, mounted 10 cm from the inside of the jet exit walls, have been used to reduce this effect; attention is presently given to methods used to reduce the inherent noise generation of the vanes while retaining their pulsation reduction features.
Bai, Juan; Xiao, Xue; Xue, Yuan-Yuan; Jiang, Jia-Xing; Zeng, Jing-Hui; Li, Xi-Fei; Chen, Yu
2018-06-13
Rationally designing and manipulating the composition and morphology of precious-metal-based bimetallic nanostructures can markedly enhance their electrocatalytic performance, including selectivity, activity, and durability. We herein report the synthesis of bimetallic PtRh alloy nanodendrites (ANDs) with tunable composition by a facile complex-reduction synthetic method under hydrothermal conditions. The structural/morphologic features, formation mechanism, and electrocatalytic performance of the PtRh ANDs are investigated thoroughly by various physical characterization and electrochemical methods. The preformed Rh crystal nuclei effectively catalyze the reduction of the Pt2+ precursor, resulting in PtRh alloy formation through catalytic growth and atom interdiffusion. The deposition of Pt atoms distinctly interferes with the deposition of Rh atoms on the Rh crystal nuclei, resulting in the dendritic morphology of the PtRh ANDs. For the ethanol oxidation reaction (EOR), the PtRh ANDs display electrocatalytic activity co-dependent on chemical composition and solution pH. Because of the alloy effect and their particular morphologic features, Pt1Rh1 ANDs with optimized composition exhibit better reactivity and stability for the EOR than a commercial Pt nanocrystal electrocatalyst.
Reduction of streamflow monitoring networks by a reference point approach
NASA Astrophysics Data System (ADS)
Cetinkaya, Cem P.; Harmancioglu, Nilgun B.
2014-05-01
Adoption of an integrated approach to water management strongly forces policy and decision-makers to focus on hydrometric monitoring systems as well. Existing hydrometric networks need to be assessed and revised against the requirements on water quantity data to support integrated management. One of the questions that a network assessment study should resolve is whether a current monitoring system can be consolidated in view of the increased expenditures in time, money and effort imposed on the monitoring activity. Within the last decade, governmental monitoring agencies in Turkey have foreseen an audit on all their basin networks in view of prevailing economic pressures. In particular, they question how they can decide whether monitoring should be continued or terminated at a particular site in a network. The presented study is initiated to address this question by examining the applicability of a method called “reference point approach” (RPA) for network assessment and reduction purposes. The main objective of the study is to develop an easily applicable and flexible network reduction methodology, focusing mainly on the assessment of the “performance” of existing streamflow monitoring networks in view of variable operational purposes. The methodology is applied to 13 hydrometric stations in the Gediz Basin, along the Aegean coast of Turkey. The results have shown that the simplicity of the method, in contrast to more complicated computational techniques, is an asset that facilitates the involvement of decision makers in application of the methodology for a more interactive assessment procedure between the monitoring agency and the network designer. The method permits ranking of hydrometric stations with regard to multiple objectives of monitoring and the desired attributes of the basin network. Another distinctive feature of the approach is that it also assists decision making in cases with limited data and metadata. These features of the RPA approach highlight its advantages over the existing network assessment and reduction methods.
van der Ster, Björn J P; Bennis, Frank C; Delhaas, Tammo; Westerhof, Berend E; Stok, Wim J; van Lieshout, Johannes J
2017-01-01
Introduction: In the initial phase of hypovolemic shock, mean blood pressure (BP) is maintained by sympathetically mediated vasoconstriction, rendering BP monitoring insensitive for early detection of blood loss. Late detection can result in reduced tissue oxygenation and eventually cellular death. We hypothesized that a machine learning algorithm that interprets currently used and new hemodynamic parameters could facilitate the detection of impending hypovolemic shock. Method: In 42 (27 female) young [mean (sd): 24 (4) years], healthy subjects, central blood volume (CBV) was progressively reduced by application of -50 mmHg lower body negative pressure until the onset of pre-syncope. A support vector machine was trained to classify samples into normovolemia (class 0), initial phase of CBV reduction (class 1), or advanced CBV reduction (class 2). Nine models making use of different features were computed to compare the sensitivity and specificity of different non-invasively derived hemodynamic signals. Model features included: volumetric hemodynamic parameters (stroke volume and cardiac output), BP curve dynamics, near-infrared spectroscopy determined cortical brain oxygenation, end-tidal carbon dioxide pressure, thoracic bio-impedance, and middle cerebral artery transcranial Doppler (TCD) blood flow velocity. Model performance was tested by quantifying the predictions with three methods: sensitivity and specificity, absolute error, and quantification of the log odds ratio of class 2 vs. class 0 probability estimates. Results: The combination with maximal sensitivity and specificity for classes 1 and 2 was found for the model comprising volumetric features (class 1: 0.73-0.98 and class 2: 0.56-0.96). The overall lowest model error was found for the models comprising TCD curve hemodynamics. Using probability estimates, the best combination of sensitivity for class 1 (0.67) and specificity (0.87) was found for the model that contained the TCD cerebral blood flow velocity derived pulse height. The best combination for class 2 was found for the model with the volumetric features (0.72 and 0.91). Conclusion: The most sensitive models for the detection of advanced CBV reduction comprised features from volumetric parameters and from cerebral blood flow velocity hemodynamics. In a validated model of hemorrhage in humans, these parameters provide the best indication of the progression of central hypovolemia.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer-aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution; they are also trained only to reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, and do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that improve classification performance when compared to single-objective and other multi-objective approaches (i.e., scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives, MRE and mean classification error (MCE), for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for Pareto-optimal solutions, using evolutionary multi-objective optimization, results in more discriminative features.
Functional feature embedded space mapping of fMRI data.
Hu, Jin; Tian, Jie; Yang, Lei
2006-01-01
We have proposed a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. The nonlinear dimension reduction technique Isomap is applied, for the first time, to the high-dimensional features obtained from the frequency domain of the fMRI data. Finally, the presence of activated time series is identified by a clustering method in which the information-theoretic minimum description length (MDL) criterion is used to estimate the number of clusters. The feasibility of our algorithm is demonstrated on real human experiments. Although we focus on analyzing periodic fMRI data, the approach can be extended to analyze non-periodic (event-related) fMRI data by replacing the Fourier analysis with a wavelet analysis.
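A minimal sketch of the idea under stated assumptions: each (hypothetical) voxel time series is summarized by the magnitudes of a few low-order Fourier coefficients, Isomap embeds these features, and a clustering step follows (plain k-means with a fixed cluster count here, whereas the paper selects the count with an MDL criterion).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
ts = rng.normal(size=(1000, 128))          # stand-in voxel time series

spec = np.fft.rfft(ts, axis=1)
features = np.abs(spec[:, 1:11])           # low-order Fourier magnitudes

embedded = Isomap(n_neighbors=10, n_components=2).fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(embedded)  # fixed count here
```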
Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah
2017-02-01
Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. The Fukunaga-Koontz Transform (FKT), a supervised band reduction technique, can meet this requirement. FKT achieves feature selection by transforming the data into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors most relevant to the target class, the dimensionality of hyperspectral data can be reduced, which presents significant advantages for near-real-time target detection applications. The nonlinear properties of the data can be captured by a kernel approach, which provides better target features. We therefore propose constructing a kernel FKT (KFKT) to perform target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral data sets, and the results are reported.
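To make the transform concrete, here is a minimal sketch of the linear FKT; the paper's kernel variant applies a kernel mapping first. The target and background matrices are random stand-ins for hyperspectral pixel spectra.

```python
import numpy as np

def fkt_basis(target, background, k=5):
    """Linear Fukunaga-Koontz basis: whiten the summed class covariances so
    the two classes share eigenvectors with complementary eigenvalues, then
    keep the k most target-dominant directions."""
    s1 = np.cov(target, rowvar=False)
    s2 = np.cov(background, rowvar=False)
    lam, phi = np.linalg.eigh(s1 + s2)
    good = lam > 1e-10                        # guard against rank deficiency
    P = phi[:, good] / np.sqrt(lam[good])     # whitening transform
    mu, v = np.linalg.eigh(P.T @ s1 @ P)      # eigenvalues in [0, 1]
    order = np.argsort(mu)[::-1]              # most target-dominant first
    return P @ v[:, order[:k]]

rng = np.random.default_rng(0)
B = fkt_basis(rng.normal(size=(100, 50)), rng.normal(size=(400, 50)))
reduced = rng.normal(size=(10, 50)) @ B       # band-reduced pixel spectra
```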
Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps
NASA Astrophysics Data System (ADS)
Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong
2018-02-01
Texture features have played an ever-increasing role in computer-aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used for false positive reduction in CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, their performance for CADx has lagged behind, primarily because features are more similar among different polyp types. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified and a confirmed diagnosis through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin and our adaptive method. We compute a set of texture features from the scaled images, including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance, measured by the area under the receiver operating characteristic curve, by up to 4.6%.
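A minimal sketch of the adaptive binning, assuming pooled Hounsfield-unit values from the ROI dataset are available; bin edges are taken at histogram quantiles so each gray level holds roughly the same number of voxels.

```python
import numpy as np

def adaptive_bin_edges(pooled_voxels, n_levels):
    """Interior bin edges at histogram quantiles, so each of the n_levels
    gray levels holds roughly the same number of voxels."""
    qs = np.linspace(0, 1, n_levels + 1)[1:-1]
    return np.quantile(pooled_voxels, qs)

def requantize(roi, edges):
    return np.digitize(roi, edges)            # gray levels 0 .. n_levels-1

rng = np.random.default_rng(0)
pooled = rng.normal(40, 300, size=100_000)    # stand-in pooled HU values
edges = adaptive_bin_edges(pooled, n_levels=32)
roi = rng.normal(40, 300, size=(32, 32, 16))
roi32 = requantize(roi, edges)                # input to GLCM/GLRLM features
```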
A comparative study of new and current methods for dental micro-CT image denoising
Lashgari, Mojtaba; Qin, Jie; Swain, Michael
2016-01-01
Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms to dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using Gaussian and median filters as the current filtering approaches, and the block-matching and three-dimensional (BM3D) method and the total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNR_edge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features, and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNR_tex-free) and preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which preservation of texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details, in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
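For orientation, a quick sketch comparing one current filter (median) with the total variation method using generic implementations; the filter sizes and weights are illustrative, not the paper's tuning, and BM3D would come from a third-party package (e.g., `bm3d`) evaluated the same way.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
slice_ = np.clip(rng.normal(0.5, 0.1, size=(256, 256)), 0, 1)  # stand-in slice

den_median = median_filter(slice_, size=3)            # a "current" filter
den_tv = denoise_tv_chambolle(slice_, weight=0.08)    # total variation method
# Each result would then be scored with CNR, EPI, and blurring indexes.
```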
Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography
NASA Astrophysics Data System (ADS)
Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon
2017-12-01
Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; while the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features using both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in context of FBP reconstruction motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.
A DFT-Based Method of Feature Extraction for Palmprint Recognition
NASA Astrophysics Data System (ADS)
Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru
Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A
2018-01-01
Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage, however, is the long review time, which is laborious because continuous manual intervention is necessary. In order to reduce the burden on the clinician, in this paper an automatic bleeding detection method for WCE video is proposed based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining local block features of the three color planes of the RGB color space, an index value is defined. A color histogram extracted from those index values provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimensionality. For bleeding zone detection, blocks are classified using the extracted local features, which incurs no additional computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images collected from a publicly available database, very satisfactory bleeding frame and zone detection performance is achieved in comparison to some existing methods. In bleeding frame detection, the accuracy, sensitivity, and specificity obtained from the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in bleeding zone detection, a precision of 95.75% is achieved. The proposed method offers not only low feature dimensionality but also highly satisfactory bleeding detection performance, and can effectively detect bleeding frames and zones in continuous WCE video data.
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed which are more competitive than classical LDA in terms of both classification accuracy and computational cost/time. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
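A minimal sketch of the baseline projection step compared in the paper, classical LDA followed by a linear discriminant classifier; the EMG feature matrix and movement labels are random stand-ins.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))           # stand-in multi-feature EMG vectors
y = rng.integers(0, 8, size=600)         # 8 upper-limb movement classes

lda = LinearDiscriminantAnalysis(n_components=7)   # at most n_classes - 1
X_proj = lda.fit_transform(X, y)                   # projected features
clf = LinearDiscriminantAnalysis().fit(X_proj, y)  # linear discriminant classifier
print(X_proj.shape, clf.score(X_proj, y))
```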
Free-jet acoustic investigation of high-radius-ratio coannular plug nozzles
NASA Technical Reports Server (NTRS)
Knott, P. R.; Janardan, B. A.; Majjigi, R. K.; Bhutiani, P. K.; Vogt, P. G.
1984-01-01
The experimental and analytical results of a scale-model simulated-flight acoustic exploratory investigation of high-radius-ratio coannular plug nozzles with inverted velocity and temperature profiles are summarized. Six coannular plug nozzle configurations and a baseline convergent conical nozzle were tested for simulated flight acoustic evaluation. The nozzles were tested over a range of conditions typical of a Variable Cycle Engine for application to advanced high speed aircraft. It was found that in simulated flight, the high-radius-ratio coannular plug nozzles maintain the jet noise and shock noise reduction features previously observed in static testing. The presence of nozzle bypass struts will not significantly affect the acoustic noise reduction features of a General Electric type nozzle design. A unique coannular plug nozzle flight acoustic spectral prediction method was identified and found to predict the measured results quite well. Special laser velocimeter and acoustic measurements were performed which have given new insights into the jet and shock noise reduction mechanisms of coannular plug nozzles, with regard to identifying further beneficial research efforts.
Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2017-03-01
Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described by the literature, one common issue is the interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach, commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal difference to noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps, which were not implemented on the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.
Guo, Qingbin; He, Yi; Sun, Tonghua; Wang, Yalin; Jia, Jinping
2014-07-15
A method combining Na2SO3-assisted electrochemical reduction and direct electrochemical reduction using Fe(II)(EDTA) solution was proposed to simultaneously remove NOx and SO2 from flue gas. Activated carbon was used as a catalyst to accelerate the process. This new system features (a) direct conversion of NOx and SO2 to harmless N2 and SO4(2-); (b) fast regeneration of Fe(II)(EDTA); (c) minimal use of chemical reagents; and (d) recovery of the reduction by-product (Na2SO4). The Fe(II)(EDTA) solution was continuously recycled and reused during the entire process, and no harmful waste was generated. Approximately 99% of NOx and 98% of SO2 were removed under the optimal conditions. A stability test showed that the system operation was reliable.
A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations
Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia
2015-01-01
Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
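A minimal sketch of the PLSR mapping step, assuming seed translation pairs are available as aligned feature matrices (random stand-ins here): the regression maps source-language term vectors into the target feature space, and candidates are ranked by cosine similarity.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
src = rng.normal(size=(300, 120))   # source-language term features (seed pairs)
tgt = rng.normal(size=(300, 150))   # aligned target-language term features

pls = PLSRegression(n_components=30).fit(src, tgt)
mapped = pls.predict(src[:1])       # map one source term into the target space

candidates = rng.normal(size=(1000, 150))
cos = (candidates @ mapped.T).ravel() / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(mapped))
print(np.argsort(cos)[::-1][:5])    # indices of the top-5 translation candidates
```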
NASA Astrophysics Data System (ADS)
Wang, Dong
2016-03-01
Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal processing based methods mainly require expertise to interpret gear fault signatures, which is usually not easy for ordinary users. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identifying 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments demonstrates that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees, and the naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracy among these statistical models. Additionally, the selection of the number of new significant features and the parameter selection of K-nearest neighbors are thoroughly investigated.
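A minimal sketch of the feature pipeline, with PyWavelets' `db4` standing in for the paper's Daubechies 44 wavelet (very high-order Daubechies filters may not be available in every PyWavelets build) and illustrative statistics per wavelet-packet node:

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def wp_stat_features(signal, wavelet="db4", level=3):
    """Statistical features from each terminal wavelet-packet node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = node.data
        feats += [c.mean(), c.std(), np.abs(c).max(), (c ** 2).sum()]
    return np.array(feats)

rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 1024))   # stand-in vibration segments
X = np.array([wp_stat_features(s) for s in signals])
y = rng.integers(0, 5, size=100)         # 5 gear crack levels
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.score(X, y))
```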
NASA Astrophysics Data System (ADS)
Lesniak, J. M.; Hupse, R.; Blanc, R.; Karssemeijer, N.; Székely, G.
2012-08-01
False positive (FP) marks represent an obstacle to effective use of computer-aided detection (CADe) of breast masses in mammography. Typically, the problem can be approached either by developing more discriminative features or by employing different classifier designs. In this paper, the use of support vector machine (SVM) classification for FP reduction in CADe is investigated, presenting a systematic quantitative evaluation against neural networks, k-nearest neighbor classification, linear discriminant analysis and random forests. A large database of 2516 film mammography examinations and 73 input features was used to train the classifiers and evaluate their performance on correctly diagnosed exams as well as false negatives. Further, classifier robustness was investigated using varying training data and feature sets as input. The evaluation was based on the mean exam sensitivity at 0.05-1 FPs on normals on the free-response receiver operating characteristic (FROC) curve, incorporated into a tenfold cross-validation framework. It was found that SVM classification using a Gaussian kernel offered significantly increased detection performance (P = 0.0002) compared to the reference methods. With varying training data and input features, SVMs showed improved exploitation of large feature sets. It is concluded that with the SVM-based CADe a significant reduction of FPs is possible, outperforming other state-of-the-art approaches for breast mass CADe.
Thermally tailored gradient topography surface on elastomeric thin films.
Roy, Sudeshna; Bhandaru, Nandini; Das, Ritopa; Harikrishnan, G; Mukherjee, Rabibrata
2014-05-14
We report a simple method for creating a nanopatterned surface with continuous variation in feature height on an elastomeric thin film. The technique is based on imprinting the surface of a film of thermo-curable elastomer (Sylgard 184) in which continuous variation in cross-linking density has been introduced by means of differential heating. This results in a variation of viscoelasticity across the length of the surface, and the film exhibits differential partial relaxation after being imprinted with a flexible stamp and subjected to an externally applied stress for a transient duration. A perfect negative replica of the stamp pattern is initially created over the entire film surface as long as the external force remains active. After the external force is withdrawn, partial relaxation of the applied stresses is manifested as a reduction in the amplitude of the imprinted features. Due to the spatial viscoelasticity gradient, the extent of stress-relaxation-induced feature height reduction varies across the length of the film (L), resulting in a surface with a gradient topography with progressively varying feature heights (h_F). The steepness of the gradient can be controlled by varying the temperature gradient as well as the duration of precuring of the film prior to imprinting. The method has also been utilized for fabricating wettability gradient surfaces using a high-aspect-ratio biomimetic stamp. The use of a flexible stamp allows the technique to be extended to creating gradient topographies on nonplanar surfaces as well. We also show that gradient surfaces with regular structures can be used in combinatorial studies related to pattern-directed dewetting.
American fuel cell market development
NASA Astrophysics Data System (ADS)
Gillis, E. A.
1992-01-01
Over the past three decades several attempts have been made to introduce fuel cells into commercial markets. Prospective users recognized the attractive features of fuel cells; however, they were unwilling to pay a premium for features other than the easily calculated fuel cost savings, and there was no accepted method for a user to calculate and accrue the economic value of the other features. The situation is changing. The Clean Air Act signed into law by President Bush on November 15, 1990, mandates a nationwide reduction in SO2, NOx and ozone emissions. This law affects specific utilities for SO2 reduction, and specific regions of the country for NOx and ozone reductions, with the latter affecting the utility, industrial and transportation sectors in these regions. The Act does not direct how the reductions are to be achieved, but it specifically establishes a trading market for emission allowances whereby an organization that reduces emissions below its target can sell its unused allowance to another organization. In addition to the Clean Air Act, other environmental issues are emerging, such as controls on CO2 emissions, possible expansion of the list of controlled emissions, mandated use of alternative fuels in specific transportation districts and restrictions on electrical transmission systems. All of these so-called 'environmental externalities' are now recognized as having a real cost that can be quantified and factored into calculations to determine the relative economic standing of various technologies. This in turn justifies a premium price for fuel cells; hence the renewed interest in the technology by the utility and transportation market segments.
PCA based feature reduction to improve the accuracy of decision tree c4.5 classification
NASA Astrophysics Data System (ADS)
Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.
2018-03-01
Splitting attributes is a major process in Decision Tree C4.5 classification. However, this process does not have a significant impact on the establishment of the decision tree in terms of removing irrelevant features. Over-fitting, which results from noisy data and irrelevant features, is a major problem in the decision tree classification process. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data into low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction, to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for classification. From experiments conducted using the cervical cancer data set from the UCI repository, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework robustly enhances classification accuracy, with an accuracy rate of 90.70%.
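A minimal sketch of the PCA-then-tree idea described above. scikit-learn has no C4.5 implementation, so an entropy-criterion DecisionTreeClassifier stands in for it, and the bundled WDBC data stands in for the UCI cervical cancer set; both substitutions are assumptions for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset

# Entropy-based tree as a rough stand-in for C4.5.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
baseline = cross_val_score(tree, X, y, cv=10).mean()

reduced = make_pipeline(StandardScaler(), PCA(n_components=10),
                        DecisionTreeClassifier(criterion="entropy", random_state=0))
print(f"raw features: {baseline:.3f}")
print(f"PCA-reduced : {cross_val_score(reduced, X, y, cv=10).mean():.3f}")
```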
Alternative method for variable aspect ratio vias using a vortex mask
NASA Astrophysics Data System (ADS)
Schepis, Anthony R.; Levinson, Zac; Burbine, Andrew; Smith, Bruce W.
2014-03-01
Historically, IC (integrated circuit) device scaling has bridged the gap between technology nodes. Device size reduction is enabled by increased pattern density, enhancing functionality and effectively reducing the cost per chip. Exemplifying this trend are aggressive reductions in memory cell sizes that have resulted in systems with diminishing area between bit/word lines. This poses an even greater challenge in the patterning of contact-level features, which are inherently difficult to resolve because of their relatively small area and complex aerial image. To accommodate these trends, semiconductor device design has shifted toward the implementation of elliptical contact features. This empowers designers to maximize the use of free device space, preserving contact area while effectively reducing the via dimension along a single axis. It is therefore critical to provide methods that enhance the resolving capacity of varying aspect ratio vias for implementation in electronic design systems. Vortex masks, characterized by their helically induced propagation of light and consequent dark core, afford great potential for the patterning of such features when coupled with a high-resolution negative tone resist system. This study investigates the integration of a vortex mask in a 193nm immersion (193i) lithography system and qualifies its ability to augment aspect ratio through feature density using aerial image vector simulation. It was found that vortex-fabricated vias provide a distinct resolution advantage over traditionally patterned contact features employing a 6% attenuated phase shift mask (APM). 1:1 features were resolvable at 110nm pitch with a 38nm critical dimension (CD) and 110nm depth of focus (DOF) at 10% exposure latitude (EL). Furthermore, iterative source-mask optimization was executed as a means to augment aspect ratio. By employing mask asymmetries and directionally biased sources, aspect ratios ranging between 1:1 and 2:1 were achievable; however, this range is ultimately dictated by the pitch employed.
Mesial temporal lobe epilepsy lateralization using SPHARM-based features of hippocampus and SVM
NASA Astrophysics Data System (ADS)
Esmaeilzadeh, Mohammad; Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh
2012-02-01
This paper improves the lateralization (identification of the epileptogenic hippocampus) accuracy in Mesial Temporal Lobe Epilepsy (mTLE). In patients with this kind of epilepsy, usually one of the brain's hippocampi is the focus of the epileptic seizures, and resection of the seizure focus is the ultimate treatment to control or reduce the seizures. Moreover, the epileptogenic hippocampus is prone to shrinkage and deformation; therefore, shape analysis of the hippocampus is advantageous in the preoperative assessment for lateralization. The method utilized for shape analysis is Spherical Harmonics (SPHARM). In this method, the shape of interest is decomposed using a set of basis functions, and the obtained coefficients of the expansion are the features describing the shape. To perform shape comparison and analysis, some pre- and post-processing steps such as alignment of different subjects' hippocampi and reduction of the feature-space dimension are required. To this end, the first-order ellipsoid is used for alignment. For dimension reduction, we propose to keep only the SPHARM coefficients with maximum conformity to the hippocampus shape. Then, using these coefficients of normal and epileptic subjects along with 3D invariants, specific lateralization indices are proposed. Consequently, the 1536 SPHARM coefficients of each subject are summarized into 3 indices, where for each index a negative (positive) value shows that the left (right) hippocampus is deformed (diseased). Employing these indices, the best achieved lateralization accuracies for clustering and classification algorithms are 85% and 92%, respectively. This is a significant improvement compared to the conventional volumetric method.
Intra-operative adjustment of standard planes in C-arm CT image data.
Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana
2016-03-01
With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. Exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm necessitates a time-consuming manual adjustment. In this article, we present an automatic plane adjustment using the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (the standard setting of the device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we have taken the first steps toward a fully automatic assistance system for the assessment of C-arm CT images.
NASA Astrophysics Data System (ADS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-04-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. To solve this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running state. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract the intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated into the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification.
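A minimal sketch of the dimension-reduction step, assuming placeholder mixed-domain feature vectors. scikit-learn ships plain (unsupervised) LTSA; the discriminant semi-supervised variant described above is not available off the shelf, so this only illustrates the underlying manifold-learning machinery.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 60))     # placeholder mixed-domain feature set
y = rng.integers(0, 4, size=300)       # placeholder running-state labels

# LTSA variant of locally linear embedding as the fusion/extraction stage.
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=5, method="ltsa")
X_low = ltsa.fit_transform(X)          # fused low-dimensional features
clf = KNeighborsClassifier().fit(X_low, y)   # downstream state recognizer
```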
Diffusive tunneling for alleviating Knudsen-layer reactivity reduction under hydrodynamic mix
NASA Astrophysics Data System (ADS)
Tang, Xianzhu; McDevitt, Chris; Guo, Zehua
2017-10-01
Hydrodynamic mix will produce small features of intermixed deuterium-tritium fuel and inert pusher materials. The geometrical characteristics of the mix features have a large impact on Knudsen-layer yield reduction. We consider two features: one is a planar structure, and the other is fuel segmented into cells by inert pusher material, which can be represented by a spherical DT bubble enclosed by a pusher shell. The truly 3D fuel feature, the spherical bubble, has the largest degree of yield reduction, due to fast ions being lost in all directions. The planar fuel structure, which can be regarded as a 1D feature, has a modest potential for yield degradation. While increasing yield reduction with increasing Knudsen number of the fuel region is straightforwardly anticipated, we also show, by a combination of direct simulation and a simple model, that once the pusher material is stretched sufficiently thin by hydrodynamic mix, the fast fuel ions diffusively tunnel through it with minimal energy loss, so the Knudsen-layer yield reduction is alleviated. This yield recovery can occur in a chunk-mixed plasma, well before the far more stringent, asymptotic limit of an atomically homogenized fuel and pusher assembly. Work supported by the LANL LDRD program.
NASA Astrophysics Data System (ADS)
Zhang, Peng; Peng, Jing; Sims, S. Richard F.
2005-05-01
In ATR applications, each feature is a convolution of an image with a filter. It is important to use the most discriminant features to produce compact representations. We propose two novel subspace methods for dimension reduction to address limitations associated with the Fukunaga-Koontz Transform (FKT). The first method, Scatter-FKT, assumes that the target is more homogeneous, while clutter can be anything other than the target and can appear anywhere. Thus, instead of estimating a clutter covariance matrix, Scatter-FKT computes a clutter scatter matrix that measures the spread of clutter from the target mean. We choose dimensions along which the difference in variation between target and clutter is most pronounced. When the target follows a Gaussian distribution, Scatter-FKT can be viewed as a generalization of FKT. The second method, Optimal Bayesian Subspace (OBS), is derived from the optimal Bayesian classifier. It selects dimensions such that the minimum Bayes error rate can be achieved. When both target and clutter follow Gaussian distributions, OBS computes optimal subspace representations. We compare our methods against FKT using character images as well as IR data.
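A minimal numpy sketch of the classical FKT that the two proposed methods extend: whiten the summed class covariances, then eigendecompose one class's covariance in the whitened space; eigenvalues near 1 capture target energy and near 0 capture clutter energy. The target and clutter matrices are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.standard_normal((500, 20))          # placeholder target features
clutter = 2.0 * rng.standard_normal((800, 20))   # placeholder clutter features

C1 = np.cov(target, rowvar=False)
C2 = np.cov(clutter, rowvar=False)
evals, evecs = np.linalg.eigh(C1 + C2)
P = evecs @ np.diag(evals ** -0.5) @ evecs.T      # whitening: P (C1+C2) P = I
lam, V = np.linalg.eigh(P @ C1 @ P)               # shared eigenvectors of both classes

# Keep directions whose eigenvalues are farthest from 0.5: these are the
# most discriminative between target (near 1) and clutter (near 0).
order = np.argsort(np.abs(lam - 0.5))[::-1]
subspace = (P @ V)[:, order[:5]]
features = target @ subspace                      # compact representation
```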
Breast Cancer Detection with Reduced Feature Set.
Mert, Ahmet; Kılıç, Niyazi; Bilgili, Erdem; Akan, Aydin
2015-01-01
This paper explores the feature reduction properties of independent component analysis (ICA) in a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The comparison of the proposed classification using the IC with the original feature set is also tested with different validation (5/10-fold cross-validation) and partitioning (20%-40%) methods. These classifiers are evaluated on how effectively they categorize tumors as benign or malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values, including the area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system while reducing computational complexity.
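A minimal sketch of the reduction described above, using scikit-learn's bundled copy of the WDBC data: the 30-feature set is compressed to a single independent component before classification, and a k-NN model is scored on both representations. The choice of k-NN and 10-fold validation mirrors the abstract; exact settings are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
models = [
    ("30 features", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("1 IC", make_pipeline(StandardScaler(),
                           FastICA(n_components=1, random_state=0),
                           KNeighborsClassifier())),
]
for name, model in models:
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: {acc:.3f}")
```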
Nazarzadeh, Kimia; Arjunan, Sridhar P; Kumar, Dinesh K; Das, Debi Prasad
2016-08-01
In this study, we analyzed accelerometer data recorded during gait analysis of Parkinson's disease patients to detect freezing of gait (FOG) episodes. The proposed method filters the recordings to reduce noise in the leg movement signals and computes wavelet coefficients to detect FOG events. A publicly available FOG database was used, and the technique was evaluated using receiver operating characteristic (ROC) analysis. Results show a higher performance of the wavelet feature in discriminating FOG events from background activity when compared with the existing technique.
Dimensionality Reduction Through Classifier Ensembles
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)
1999-01-01
In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
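A minimal sketch of input decimation in the spirit described above, under the assumption that per-class feature relevance is scored by correlation with a one-vs-rest class indicator; each ensemble member then trains on its class's top-k features and the members' probability outputs are averaged. The dataset, k, and base learner are placeholders.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
k = 24                                   # features kept per class (placeholder)

members = []
for c in np.unique(y):
    ind = (ytr == c).astype(float)       # one-vs-rest class indicator
    corr = np.abs([np.corrcoef(Xtr[:, j], ind)[0, 1] if Xtr[:, j].std() > 0 else 0.0
                   for j in range(Xtr.shape[1])])
    keep = np.argsort(corr)[-k:]         # class-specific feature subset
    clf = LogisticRegression(max_iter=2000).fit(Xtr[:, keep], ytr)
    members.append((keep, clf))

# Average the members' class-probability outputs for the ensemble decision.
proba = np.mean([m.predict_proba(Xte[:, keep]) for keep, m in members], axis=0)
print("ensemble accuracy:", (proba.argmax(axis=1) == yte).mean())
```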
Pessi, Jenni; Lassila, Ilkka; Meriläinen, Antti; Räikkönen, Heikki; Hæggström, Edward; Yliruusi, Jouko
2016-08-01
We introduce a robust, stable, and reproducible method to produce nanoparticles based on expansion of supercritical solutions using carbon dioxide as a solvent. The method, controlled expansion of supercritical solution (CESS), uses controlled mass transfer, flow, pressure reduction, and particle collection in dry ice. CESS offers control over the crystallization process as the pressure in the system is reduced according to a specific profile. Particle formation takes place before the exit nozzle, and condensation is the main mechanism for postnucleation particle growth. A two-step gradient pressure reduction is used to prevent Mach disk formation and particle growth by coagulation. Controlled particle growth keeps the production process stable. With CESS, we produced piroxicam nanoparticles at 60 mg/h, featuring a narrow size distribution (176 ± 53 nm). Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Studies of reaction geometry in oxidation and reduction of the alkaline silver electrode
NASA Technical Reports Server (NTRS)
Butler, E. A.; Blackham, A. U.
1971-01-01
Two methods of surface area estimation for sintered silver electrodes have given roughness factors of 58 and 81. One method is based on constant current oxidation, the other on potentiostatic oxidation. Examination of both wire and sintered silver electrodes via scanning electron microscopy at various stages of oxidation has shown that the important structural features are mounds of oxide. In potentiostatic oxidations these appear to form on instantaneously nucleated sites, while in constant current oxidations progressive nucleation is indicated.
Spectrometer Baseline Control Via Spatial Filtering
NASA Technical Reports Server (NTRS)
Burleigh, M. R.; Richey, C. R.; Rinehart, S. A.; Quijada, M. A.; Wollack, E. J.
2016-01-01
An absorptive half-moon aperture mask is experimentally explored as a broad-bandwidth means of eliminating spurious spectral features arising from reprocessed radiation in an infrared Fourier transform spectrometer. In the presence of the spatial filter, an order of magnitude improvement in the fidelity of the spectrometer baseline is observed. The method is readily accommodated within the context of commonly employed instrument configurations and leads to a factor of two reduction in optical throughput. A detailed discussion of the underlying mechanism and limitations of the method is provided.
NASA Astrophysics Data System (ADS)
Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry
2017-08-01
This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages, namely feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction and ANMBP is used for classification. Feature extraction yields feature vectors, and feature reduction was tested with 100 energy values per feature and with 10 energy values per feature. Brain images are classified as normal, Alzheimer, glioma, or carcinoma. Based on simulation results, 10 energy values per feature can be used to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP for classification of brain cancer.
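A minimal sketch of the feature-extraction stage, assuming placeholder image data: a 2-D wavelet decomposition followed by per-subband energy values. scikit-learn's MLPClassifier stands in for the ANMBP network, which is not available off the shelf.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_energies(image, wavelet="db4", level=3):
    """Energy of each subband of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    bands = [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]
    return np.array([np.sum(np.square(b)) for b in bands])

rng = np.random.default_rng(0)
images = rng.standard_normal((80, 64, 64))  # placeholder brain images
labels = rng.integers(0, 4, size=80)        # normal/Alzheimer/glioma/carcinoma
X = np.vstack([wavelet_energies(im) for im in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, labels)
```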
Perestroika and Change in Soviet Weapons Acquisition
1990-06-01
attempted to transfer features of defense industry (mainly managers and methods) to the civilian sector to improve productivity and output. On assuming...to the defense complex absorbed defense managers' attention, diverted defense production capacity, and redirected new investment to food processing...equipment drew on additional management and production capacity and required more diversion of investment. Defense procurement reductions arising from
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we examine the validity of OCT in identifying tissue changes using skin cancer texture analysis based on Haralick texture features, fractal dimension, the Markov random field method and complex directional features from different tissues. The described features have been used to detect specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-section OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in the OCT images. The Haralick texture features contrast, correlation, energy and homogeneity have been calculated in various directions. A box-counting method is performed to evaluate the fractal dimension of the skin probes. A Markov random field has been used to enhance the quality of the classification. Additionally, we used the complex directional field calculated by the local gradient methodology to further increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information for discriminating tumor from healthy tissue. The experimental data set contains 488 OCT images with normal skin and tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired from our laboratory SD-OCT setup based on a broadband light source, delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.
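A minimal sketch of the Haralick texture computation named above: gray-level co-occurrence matrices in several directions, reduced to contrast, correlation, energy, and homogeneity. The input B-scan is a random placeholder; scikit-image's graycomatrix/graycoprops are assumed available.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
bscan = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder OCT B-scan

# Co-occurrence matrices at two offsets and four directions.
glcm = graycomatrix(bscan, distances=[1, 2],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "correlation",
                                   "energy", "homogeneity")])
print(features.shape)   # 4 properties x 2 distances x 4 angles = 32 values
```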
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
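A compact statement of the central equivalence, with notation assumed here for illustration rather than drawn from the paper itself:

```latex
% Single-spike information (the quantity MID maximizes over the filter
% matrix K), written over the projected stimulus K'x:
\[
  I_{\mathrm{ss}}(K) = \int p(K^{\top}\mathbf{x} \mid \mathrm{spike})
  \, \log_2 \frac{p(K^{\top}\mathbf{x} \mid \mathrm{spike})}{p(K^{\top}\mathbf{x})}
  \, \mathrm{d}(K^{\top}\mathbf{x}).
\]
% Up to constants independent of K, its empirical estimate behaves as a
% normalized Poisson log-likelihood ratio,
\[
  \hat{I}_{\mathrm{ss}}(K) \approx \frac{1}{n_{\mathrm{sp}} \ln 2}
  \bigl( \log L_{\mathrm{LNP}}(K) - \log L_{0} \bigr),
\]
% where n_sp is the spike count, L_LNP is the likelihood of the spike
% train under a linear-nonlinear-Poisson model with filters K, and L_0 is
% the likelihood under a constant-rate Poisson model. Maximizing I_ss
% over K is thus maximum-likelihood LNP estimation, which is why the
% equivalence breaks down when spiking is not well described as Poisson.
```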
Lewandowski, Zdzisław
2015-09-01
The project aimed at answering the following two questions: to what extent does a change in the size, height or width of selected facial features influence the assessment of likeness between an original female composite portrait and a modified one, and how does the sex of the person judging the images affect the perception of likeness of facial features? The first stage of the project consisted of creating an image of an averaged female face. Then the basic facial features (eyes, nose and mouth) were cut out of the averaged face and each of these features was transformed in three ways: its size was changed by reduction or enlargement, its height was modified through reduction or enlargement, and its width was altered through widening or narrowing. In each of the six feature alteration methods, the intensity of modification reached up to 20% of the original size, with changes every 2%. The features altered in this way were again placed onto the original faces and retouched. The third stage consisted of an assessment, performed by judges of both sexes, of the extent of likeness between the averaged composite portrait (without any changes) and the modified portraits. The results indicate that there are significant differences in the assessed likeness of the portraits with modified features to the original ones. The images with changes in the size and height of the nose received the lowest scores on the likeness scale, which indicates that these changes were perceived by the subjects as the most important. The photos with changes in lip vermillion thickness (lip height), lip width, and the height and width of the eye slit, in turn, received high likeness scores in spite of large changes, which signifies that these modifications were perceived as less important compared to the other features investigated.
NASA Astrophysics Data System (ADS)
Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.
2015-06-01
Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest, insufficient sample density at important features, or both. A new adaptive sampling technique is presented that directs sample collection in proportion to local information content, adequately capturing short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined overall uncertainty level while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
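A minimal 1-D sketch of curvature-driven adaptive sampling, under the assumption that new samples are placed where a second-difference estimate of local curvature is largest; the test signal and iteration budget are placeholders, and the space-filling and uncertainty terms of the full method are omitted.

```python
import numpy as np

f = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.sin(25 * np.pi * x) * (x > 0.6)

x = np.linspace(0.0, 1.0, 12)          # coarse uniform seed samples
for _ in range(60):
    y = f(x)
    curv = np.abs(np.diff(y, 2))       # curvature proxy at interior samples
    i = np.argmax(curv) + 1            # interior sample with sharpest bend
    # Bisect the wider of the two intervals adjacent to that sample.
    left, right = x[i] - x[i - 1], x[i + 1] - x[i]
    new = x[i] - left / 2 if left > right else x[i] + right / 2
    x = np.sort(np.append(x, new))
# Sample density now tracks the short-period feature near x > 0.6.
print(np.histogram(x, bins=5, range=(0, 1))[0])
```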
Age and gender classification in the wild with unsupervised feature learning
NASA Astrophysics Data System (ADS)
Wan, Lihong; Huo, Hong; Fang, Tao
2017-03-01
Inspired by unsupervised feature learning (UFL) within the self-taught learning framework, we propose a method based on UFL, convolution representation, and part-based dimensionality reduction to handle facial age and gender classification, which are two challenging problems under unconstrained circumstances. First, UFL is introduced to learn selective receptive fields (filters) automatically by applying whitening transformation and spherical k-means on random patches collected from unlabeled data. The learning process is fast and has no hyperparameters to tune. Then, the input image is convolved with these filters to obtain filtering responses on which local contrast normalization is applied. Average pooling and feature concatenation are then used to form global face representation. Finally, linear discriminant analysis with part-based strategy is presented to reduce the dimensions of the global representation and to improve classification performances further. Experiments on three challenging databases, namely, Labeled faces in the wild, Gallagher group photos, and Adience, demonstrate the effectiveness of the proposed method relative to that of state-of-the-art approaches.
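A minimal sketch of the unsupervised filter-learning step described above: whiten random patches, cluster them (plain k-means on unit-norm patches stands in for spherical k-means), and convolve an image with the learned filters. All data, patch sizes, and counts are placeholders.

```python
import numpy as np
from scipy.signal import correlate2d
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = rng.standard_normal((20, 32, 32))   # placeholder unlabeled face crops
patches = np.array([im[i:i + 6, j:j + 6].ravel()
                    for im in images
                    for i, j in zip(rng.integers(0, 26, 50),
                                    rng.integers(0, 26, 50))])

patches -= patches.mean(axis=0)              # ZCA whitening of 6x6 patches
cov = np.cov(patches, rowvar=False)
w, V = np.linalg.eigh(cov)
white = patches @ (V @ np.diag(1.0 / np.sqrt(w + 1e-2)) @ V.T)
white /= np.linalg.norm(white, axis=1, keepdims=True) + 1e-12

filters = KMeans(n_clusters=16, n_init=10,
                 random_state=0).fit(white).cluster_centers_
responses = np.stack([correlate2d(images[0], f.reshape(6, 6), mode="valid")
                      for f in filters])     # filtering responses, one image
print(responses.shape)                       # (16, 27, 27) feature maps
```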
Elhaj, Fatin A; Salim, Naomie; Harris, Arief R; Swee, Tan Tian; Ahmed, Taqwa
2016-04-01
Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, and an electrocardiogram (ECG) is the non-invasive method used to detect arrhythmias or heart abnormalities. Due to the presence of noise, the non-stationary nature of the ECG signal (i.e. the changing morphology of the ECG signal with respect to time) and the irregularity of the heartbeat, physicians face difficulties in the diagnosis of arrhythmias. Computer-aided analysis of ECG results assists physicians in detecting cardiovascular diseases. The development of many existing arrhythmia systems has depended on findings from linear experiments on ECG data, which achieve high performance on noise-free data. However, nonlinear methods characterize the ECG signal more effectively, extract hidden information in the signal, and achieve good performance under noisy conditions. This paper investigates the representation ability of linear and nonlinear features and proposes a combination of such features in order to improve the classification of ECG data. In this study, five types of beat classes of arrhythmia as recommended by the Association for the Advancement of Medical Instrumentation are analyzed: non-ectopic beats (N), supra-ventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F) and unclassifiable and paced beats (U). The characterization ability of nonlinear features, such as high-order statistics and cumulants, and nonlinear feature reduction methods, such as independent component analysis, is combined with linear features, namely, the principal component analysis of discrete wavelet transform coefficients. The features are tested for their ability to differentiate different classes of data using different classifiers, namely, support vector machine and neural network methods with tenfold cross-validation. Our proposed method is able to classify the N, S, V, F and U arrhythmia classes with high accuracy (98.91%) using a combined support vector machine and radial basis function method. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
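A minimal sketch of combining linear and nonlinear beat features as described above: PCA of discrete wavelet transform coefficients plus higher-order statistics, classified with an RBF-kernel SVM. The heartbeat segments and labels are placeholders, and the cumulant features are reduced here to skewness and kurtosis for brevity.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
beats = rng.standard_normal((400, 180))   # placeholder ECG beat windows
labels = rng.integers(0, 5, size=400)     # N, S, V, F, U classes

dwt = np.vstack([np.hstack(pywt.wavedec(b, "db4", level=4)) for b in beats])
linear = PCA(n_components=12).fit_transform(dwt)    # linear features
nonlinear = np.column_stack([skew(beats, axis=1),   # higher-order statistics
                             kurtosis(beats, axis=1)])
X = np.hstack([linear, nonlinear])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
```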
Fast detection of vascular plaque in optical coherence tomography images using a reduced feature set
NASA Astrophysics Data System (ADS)
Prakash, Ammu; Ocana Macias, Mariano; Hewko, Mark; Sowa, Michael; Sherif, Sherif
2018-03-01
Vascular plaque can be detected in optical coherence tomography (OCT) images by using the full set of 26 Haralick textural features and a standard K-means clustering algorithm. However, the use of the full set of 26 textural features is computationally expensive and may not be feasible for real-time implementation. In this work, we identified a reduced set of 3 textural features that characterize vascular plaque and used a generalized Fuzzy C-means clustering algorithm. Our work involves three steps: 1) the reduction of the full set of 26 textural features to a reduced set of 3 textural features by using a genetic algorithm (GA) optimization method; 2) the implementation of an unsupervised generalized clustering algorithm (Fuzzy C-means) on the reduced feature space; and 3) the validation of our results using histology and actual photographic images of vascular plaque. Our results show an excellent match with histology and actual photographic images of vascular tissue. Therefore, our results could provide an efficient pre-clinical tool for the detection of vascular plaque in real-time OCT imaging.
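A minimal numpy sketch of Fuzzy C-means clustering applied to a reduced 3-feature space. The feature matrix is a placeholder for the GA-selected textural features, and the fuzzifier m=2 is a common default rather than a value from the paper.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-means: returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                              # random fuzzy memberships
    for _ in range(iters):
        w = u ** m
        centers = (w @ X) / w.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                 # standard FCM update
        u /= u.sum(axis=0)
    return centers, u

X = np.random.default_rng(1).standard_normal((500, 3))  # placeholder 3 features
centers, u = fuzzy_cmeans(X, c=2)
labels = u.argmax(axis=0)     # hard assignment: plaque vs. healthy tissue
```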
Clinical Analysis of Midfacial Fractures
Yamamoto, Kazuhiko; Matsusue, Yumiko; Horita, Satoshi; Murakami, Kazuhiro; Sugiura, Tsutomu; Kirita, Tadaaki
2014-01-01
Purpose: To analyze the features of midfacial fractures. Methods: Data from 320 patients treated for midfacial fractures during the past 10 years were retrospectively analyzed. Results: The patients were 192 males and 128 females. Their ages ranged from 1 to 96 years, with an average of 42.1. Injuries most frequently occurred in traffic accidents (168 patients), followed by falls (78), assaults (31) and sports (25). The pattern of the fractures was classified as zygoma in 159 patients, alveolus in 60, multiple sites in 54, maxilla in 45 and nasal bone in 2. The facial injury severity scale ranged from 1 to 12, with an average of 1.52. Injuries to other sites of the body were found in 90 patients. Fractures of multiple sites showed a higher facial injury severity scale and were associated with injuries to other sites of the body at a higher rate. Observation was most frequently chosen, in 153 patients, followed by open reduction and internal fixation in 72, intramaxillary fixation in 43 and transcutaneous reduction in 26. Conclusions: Midfacial fractures showed a variety of features in terms of site, severity and associated injuries. Understanding these features is important for managing these patients properly. PMID:24757396
Difference in EUV photoresist design towards reduction of LWR and LCDU
NASA Astrophysics Data System (ADS)
Jiang, Jing; De Simone, Danilo; Vandenberghe, Geert
2017-03-01
Pattern fidelity in EUV lithography is crucial for high-resolution features, since small variations can affect device performance and even cause short or open circuits. Dense lines (1D features) and contact holes are the most common features for the active, metal and contact layers; therefore, line width roughness (LWR) and local critical dimension uniformity (LCDU) are important indexes to monitor. Both LWR and LCDU are greatly influenced by photon and acid shot noise. In addition, LWR is also affected by resist mechanical properties, such as pattern collapse. In this study, we examined the influence of different chemically amplified resist components, such as polymer, PAG and quencher, in both type and concentration, in order to understand the relative influence of deprotection, acid diffusion and base neutralization on pattern fidelity. However, conventional methods that approach higher resolution or lower LWR/LCDU by sacrificing dose are not sustainable. In order to continue to improve resist performance, a new component, a metal salt sensitizer, is introduced into the resist system. This metal salt is able to achieve a 30% dose reduction by increasing EUV absorption while maintaining LWR. We believe metal sensitizers may offer a new way to address the RLS trade-off.
NASA Astrophysics Data System (ADS)
Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku
2012-03-01
This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to obtain a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on the volume ratio and the eigenvector of the Hessian, which are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
Uterine Fibroid Embolization for Symptomatic Fibroids: Study at a Teaching Hospital in Kenya
Mutai, John Kiprop; Vinayak, Sudhir; Stones, William; Hacking, Nigel; Mariara, Charles
2015-01-01
Objective: Characterization of magnetic resonance imaging (MRI) features in women undergoing uterine fibroid embolization (UFE) and identification of clinical correlates in an African population. Materials and Methods: Patients with symptomatic fibroids who were selected to undergo UFE at the hospital formed the study population. The baseline MRI features, baseline symptom score, short-term imaging outcome, and mid-term symptom scores were analyzed for interval changes. Potential associations between short-term imaging features and mid-term symptom scores were also assessed. Results: UFE resulted in statistically significant reductions (P < 0.001) in dominant fibroid volume, uterine volume, and symptom severity scores of 43.7%, 40.1%, and 37.8%, respectively. Also, 59% of respondents had more than 10 fibroids. The predominant location of the dominant fibroid was intramural. No statistically significant association was found between clinical and radiological outcomes. Conclusion: The response of uterine fibroids to embolization in the African population is not different from the findings reported in other studies from the West. The presence of multiple and large fibroids in this study is consistent with the case mix described in other studies of African-American populations. Patient counseling should emphasize the independence of volume reduction and symptom improvement. Though volume changes are of relevance for the radiologist in understanding the evolution of the condition and identifying potential technical treatment failures, they should not be the main basis for evaluating treatment success. PMID:25883858
Zheng, Weili; Ackley, Elena S; Martínez-Ramón, Manel; Posse, Stefan
2013-02-01
In previous works, boosting aggregation of classifier outputs from discrete brain areas has been demonstrated to reduce dimensionality and improve the robustness and accuracy of functional magnetic resonance imaging (fMRI) classification. However, dimensionality reduction and classification of mixed activation patterns of multiple classes remain challenging. In the present study, the goals were (a) to reduce dimensionality by combining feature reduction at the voxel level and backward elimination of optimally aggregated classifiers at the region level, (b) to compare region selection for spatially aggregated classification using boosting and partial least squares regression methods and (c) to resolve mixed activation patterns using probabilistic prediction of individual tasks. Brain activation maps from interleaved visual, motor, auditory and cognitive tasks were segmented into 144 functional regions. Feature selection reduced the number of feature voxels by more than 50%, leaving 95 regions. The two aggregation approaches further reduced the number of regions to 30, resulting in more than 75% reduction of classification time and misclassification rates of less than 3%. Boosting and partial least squares (PLS) were compared to select the most discriminative and the most task correlated regions, respectively. Successful task prediction in mixed activation patterns was feasible within the first block of task activation in real-time fMRI experiments. This methodology is suitable for sparsifying activation patterns in real-time fMRI and for neurofeedback from distributed networks of brain activation. Copyright © 2013 Elsevier Inc. All rights reserved.
An optimization method for defects reduction in fiber laser keyhole welding
NASA Astrophysics Data System (ADS)
Ai, Yuewei; Jiang, Ping; Shao, Xinyu; Wang, Chunming; Li, Peigen; Mi, Gaoyang; Liu, Yang; Liu, Wei
2016-01-01
Laser welding has been widely used in the automotive, power, chemical, nuclear and aerospace industries. The quality of welded joints is closely related to the existing defects, which are primarily determined by the welding process parameters. This paper proposes a defect optimization method that takes the formation mechanism of welding defects and weld geometric features into consideration. The analysis of the welding defect formation mechanism aims to investigate the relationship between welding defects and process parameters, and weld features are considered in order to identify the optimal process parameters for the desired welded joints with minimum defects. An improved back-propagation neural network, which models nonlinear problems well, is adopted to establish the mathematical model, and the obtained model is solved by a genetic algorithm. The proposed method is validated by macroweld profile, microstructure and microhardness in confirmation tests. The results show that the proposed method is effective at reducing welding defects and obtaining high-quality joints in fiber laser keyhole welding in practical production.
Boon, K H; Khalil-Hani, M; Malarvili, M B
2018-01-01
This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals compared to existing methods, and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of pre-processing, HRV feature extraction, and support vector machine (SVM) model stages. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain, and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. Then, these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most of the previous works. This accuracy rate is achieved even with the HRV signal length being reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, the sensitivity rate, which is considered more important than other performance metrics in this paper, can be improved with the trade-off of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
Ghosh, Tonmoy; Wahid, Khan A.
2018-01-01
Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage, however, is the long reviewing time, which is very laborious because continuous manual intervention is necessary. In order to reduce the burden on the clinician, in this paper, an automatic bleeding detection method for WCE video is proposed based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to the capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining the local block features of three different color planes of the RGB color space, an index value is defined. A color histogram, which is extracted from those index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the already-extracted local features, which incurs no additional computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images collected from a publicly available database, very satisfactory bleeding frame and zone detection performance is achieved in comparison to that obtained by some existing methods. In the case of bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in the case of bleeding zone detection, a precision of 95.75% is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and it can effectively detect bleeding frames and zones in continuous WCE video data. PMID:29468094
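A minimal sketch of block-statistics color indexing in the spirit of the method above, assuming a placeholder RGB frame: each pixel's 3x3 block mean per channel is quantized, the three quantized values are fused into one index, and the histogram of indices is reduced with PCA. The bit depth and block size are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA

def chobs_like_histogram(rgb, bits=4):
    means = uniform_filter(rgb.astype(float), size=(3, 3, 1))  # 3x3 block means
    q = (means / 256.0 * (1 << bits)).astype(int).clip(0, (1 << bits) - 1)
    index = (q[..., 0] << (2 * bits)) | (q[..., 1] << bits) | q[..., 2]
    return np.bincount(index.ravel(), minlength=1 << (3 * bits))

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(50, 120, 120, 3))  # placeholder WCE frames
H = np.vstack([chobs_like_histogram(f) for f in frames])
X = PCA(n_components=16).fit_transform(H)              # reduced feature vectors
print(X.shape)
```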
An extended algebraic reconstruction technique (E-ART) for dual spectral CT.
Zhao, Yunsong; Zhao, Xing; Zhang, Peng
2015-03-01
Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. However, it is generally difficult to reconstruct images from the polychromatic projections acquired by DSCT because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system problem and then extends the classic ART method to solve the nonlinear system. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for different X-ray spectra. Another feature of the proposed method is its high degree of parallelism, which means that the method is suitable for acceleration on GPUs (graphics processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High-quality images are reconstructed with the proposed method from the polychromatic projections of DSCT. The reconstructed images are still satisfactory even if there are certain errors in the estimated X-ray spectra.
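A minimal sketch of the classic (linear) ART/Kaczmarz iteration that E-ART extends to the nonlinear dual-spectral case; the system matrix and projections here are random placeholders rather than a real CT geometry.

```python
import numpy as np

def art(A, p, sweeps=200, relax=0.5):
    """Kaczmarz-style ART: one row-wise correction per ray, repeated."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            r = p[i] - A[i] @ x            # residual for this ray
            x += relax * r / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 100))     # placeholder ray/pixel intersection weights
x_true = rng.random(100)
x_rec = art(A, A @ x_true)     # consistent, noise-free projections
print(np.linalg.norm(x_rec - x_true))   # reconstruction error
```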
Variable cross-section windings for efficiency improvement of electric machines
NASA Astrophysics Data System (ADS)
Grachev, P. Yu; Bazarov, A. A.; Tabachinskiy, A. S.
2018-02-01
Implementation of energy-saving technologies in industry is impossible without improving the efficiency of electric machines. The article considers ways of improving the efficiency and reducing the mass and dimensions of electric machines with electronic control. Features of compact winding designs for stators and armatures are described, and the influence of compact windings on thermal and electrical processes is given. The finite element method was used in the computer simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de
2015-10-15
This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.
Berisha, Visar; Wang, Shuai; LaCross, Amy; Liss, Julie
2015-01-01
Changes in some lexical features of language have been associated with the onset and progression of Alzheimer's disease. Here we describe a method to extract key features from discourse transcripts, which we evaluated on non-scripted news conferences from President Ronald Reagan, who was diagnosed with Alzheimer's disease in 1994, and President George Herbert Walker Bush, who has no known diagnosis of Alzheimer's disease. Key word counts previously associated with cognitive decline in Alzheimer's disease were extracted and regression analyses were conducted. President Reagan showed a significant reduction in the number of unique words over time and a significant increase in conversational fillers and non-specific nouns over time. There was no significant trend in these features for President Bush.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which it has an asymptotically equivalent convergence point of the estimated parameters, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive the necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zha, Yikun; Wei, Jingsong; Gan, Fuxi
2013-09-01
Maskless laser direct writing lithography has been applied in the fabrication of optical elements and electro-optical devices. With the development of the technology, the feature size of these elements and devices is required to be reduced to the nanoscale. Increasing the numerical aperture of the converging lens and shortening the laser wavelength are good methods to obtain a small spot and reduce the feature size to the nanoscale, but they also reduce the depth of focus. The reduced depth of focus leads to difficulties in focusing and tracking servo control during high-speed laser direct writing lithography. In this work, the combination of diffractive optical elements and nonlinear absorption inorganic resist thin films not only extends the depth of focus, but also reduces the feature size of the lithographic marks to the nanoscale. By using a five-zone annular phase-only binary pupil filter as the diffractive optical element and AgInSbTe as the nonlinear absorption inorganic resist thin film, the depth of focus is extended to 7.39 times that of the focused spot, while the lithographic feature size is reduced to 54.6 nm. The ill effect of the sidelobe on the lithography is also eliminated by the nonlinear reverse saturable absorption and the phase-change threshold lithographic characteristics.
Evaluation of space shuttle main engine fluid dynamic frequency response characteristics
NASA Technical Reports Server (NTRS)
Gardner, T. G.
1980-01-01
In order to determine the POGO stability characteristics of the space shuttle main engine (SSME) liquid oxygen (LOX) system, the fluid dynamic frequency response functions between elements in the SSME LOX system were evaluated, both analytically and experimentally. For the experimental data evaluation, a software package was written for the Hewlett-Packard 5451C Fourier analyzer. The POGO analysis software is documented and consists of five separate segments. Each segment is stored on the 5451C disc as an individual program and performs its own unique function. Two separate data reduction methods, signal calibration, coherence- or pulser-signal-based frequency response function blanking, and automatic plotting features are included in the program. The 5451C allows variable parameter transfer from program to program. This feature is used to advantage and requires only minimal user interaction during the data reduction process. Experimental results are included and compared with the analytical predictions in order to adjust the general model and arrive at a realistic simulation of the POGO characteristics.
Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley
2016-09-01
Outlier detection is necessary for automated data analysis, with applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, each individual outlier detection method scores each data point for "outlierness" in this new feature space. Our model then uses dynamically trained parameters to weight the scores of each method, producing a final outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time-series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. This user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
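A minimal sketch of combining several outlier detectors into one weighted score, in the spirit of the mixture-of-experts model above. Equal weights stand in for the dynamically trained ones, four detectors stand in for the paper's five, and the feature matrix is a placeholder for extracted light-curve features.

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((300, 8)),
               rng.standard_normal((5, 8)) * 6])   # 5 injected outliers

def zscore(s):
    return (s - s.mean()) / s.std()

# Higher combined score = more anomalous (each expert's sign is flipped
# so that larger means more outlying, then standardized).
scores = np.vstack([
    zscore(-IsolationForest(random_state=0).fit(X).score_samples(X)),
    zscore(-OneClassSVM(nu=0.05).fit(X).score_samples(X)),
    zscore(-EllipticEnvelope(random_state=0).fit(X).decision_function(X)),
    zscore(-LocalOutlierFactor().fit(X).negative_outlier_factor_),
])
weights = np.full(len(scores), 1.0 / len(scores))  # placeholder expert weights
combined = weights @ scores
print(np.argsort(combined)[-5:])   # indices of the most anomalous points
```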
Park, Sang Cheol; Chapman, Brian E; Zheng, Bin
2011-06-01
This study developed a computer-aided detection (CAD) scheme for pulmonary embolism (PE) detection and investigated several approaches to improve CAD performance. In the study, 20 computed tomography examinations with various lung diseases were selected, which included 44 verified PE lesions. The proposed CAD scheme consists of five basic steps: 1) lung segmentation; 2) PE candidate extraction using an intensity mask and tobogganing region growing; 3) PE candidate feature extraction; 4) false-positive (FP) reduction using an artificial neural network (ANN); and 5) a multifeature-based k-nearest neighbor method for positive/negative classification. In this study, we also investigated the following additional methods to improve CAD performance: 1) grouping 2-D detected features into a single 3-D object; 2) selecting features with a genetic algorithm (GA); and 3) limiting the number of suspicious lesions allowed to be cued in one examination. The results showed that 1) the CAD scheme using tobogganing, an ANN, and the grouping method achieved a maximum detection sensitivity of 79.2%; 2) the maximum scoring method achieved superior performance over the other score fusion methods; 3) the GA was able to delete "redundant" features and further improve CAD performance; and 4) limiting the maximum number of cued lesions per examination reduced the FP rate by a factor of 5.3. Combining these approaches, the CAD scheme achieved 63.2% detection sensitivity with 18.4 FP lesions per examination. The study suggested that the performance of CAD schemes for PE detection depends on many factors, including 1) optimizing the 2-D region grouping and scoring methods; 2) selecting the optimal feature set; and 3) limiting the number of cued lesions allowed per examination.
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the first two and shows how they can work together. A PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is calculated separately for each dimension by means of the plug-in method. The classification qualities of the reduced and full-structure PNN are compared. Furthermore, we evaluate the performance of the PNN when global sensitivity analysis (GSA) and common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross-validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that LSA can be used as an alternative PNN reduction approach.
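A minimal sketch of the underlying PNN with a product Cauchy kernel is given below; the smoothing parameters are fixed by hand here, whereas the paper computes them per dimension with the plug-in method, and the sensitivity analysis itself is not reproduced.

```python
import numpy as np

def cauchy_product_kernel(x, pattern, h):
    """Product kernel: a one-dimensional Cauchy function per dimension,
    with a separate smoothing parameter h[d] for each dimension."""
    u = (x - pattern) / h
    return np.prod(1.0 / (np.pi * h * (1.0 + u ** 2)))

def pnn_classify(x, patterns, labels, h):
    """Probabilistic neural network: average kernel response per class."""
    classes = np.unique(labels)
    scores = [np.mean([cauchy_product_kernel(x, p, h)
                       for p in patterns[labels == c]]) for c in classes]
    return classes[int(np.argmax(scores))]

# Toy example with assumed smoothing parameters
X = np.array([[0.0, 0.0], [0.1, -0.1], [2.0, 2.1], [1.9, 2.0]])
y = np.array([0, 0, 1, 1])
h = np.array([0.5, 0.5])
print(pnn_classify(np.array([0.05, 0.0]), X, y, h))  # -> 0
```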
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new dual look-up-table hardware architecture that performs the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
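On the software side, the reduced-resolution DCT feature extraction amounts to keeping only the leading DCT coefficients of each signal window, as sketched below; the coefficient count and window length are assumptions, and the dual look-up-table hardware trick is not modeled.

```python
import numpy as np
from scipy.fft import dct

def dct_features(window, n_coeffs=8):
    """Keep only the first n_coeffs DCT coefficients of a signal window:
    a reduced-resolution transform acting as a low-dimensional feature
    vector (the coefficient count is an assumed parameter)."""
    return dct(np.asarray(window, dtype=float), norm='ortho')[:n_coeffs]

rng = np.random.default_rng(1)
signal = rng.standard_normal(256)     # stand-in for one ECoG channel window
print(dct_features(signal).shape)     # -> (8,)
```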
A Modified Sparse Representation Method for Facial Expression Recognition.
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly adopting a dictionary built from the samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, using a self-built database as well as Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classification information, which makes it possible to improve computing speed and achieve satisfactory recognition results.
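A minimal sketch of the sparse-representation classification idea is shown below, using scikit-learn's plain OMP as a stand-in for the paper's stagewise variant (stOMP) and a random dictionary in place of one trained with LC-K-SVD.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(x, D, labels, n_nonzero=10):
    """Sparse-representation classification: code the probe x over the
    dictionary D (columns = training atoms), then pick the class whose
    atoms give the smallest reconstruction residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(D, x)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 40))          # 40 random atoms for 2 classes
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
y = np.repeat([0, 1], 20)
probe = D[:, 3] + 0.01 * rng.standard_normal(64)
print(src_classify(probe, D, y, n_nonzero=5))  # -> 0
```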
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation, and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed: the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class-conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is also shown experimentally that for the proposed methods, the classes with high weights have improvements in class-conditional probability of error estimates, as expected.
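The discrete KL expansion amounts to projecting the six-band pixels onto the leading eigenvectors of their covariance matrix; the sketch below (with simulated pixel data in place of TM imagery) also reports the cumulative-eigenvalue fraction behind the three-dimensional-feature-space finding.

```python
import numpy as np

def kl_transform(pixels, k=3):
    """Discrete Karhunen-Loeve expansion: project multi-band pixels onto
    the k eigenvectors of the covariance matrix with largest eigenvalues."""
    X = pixels - pixels.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(evals)[::-1]               # descending eigenvalues
    W = evecs[:, order[:k]]
    explained = evals[order[:k]].sum() / evals.sum()
    return X @ W, explained

rng = np.random.default_rng(2)
bands = rng.random((1000, 6))                     # simulated 6-band pixels
features, frac = kl_transform(bands, k=3)
print(features.shape, round(frac, 3))             # (1000, 3), eigenvalue fraction
```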
Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken
2014-03-01
We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game, which are closely related to iterated function systems. The goal of the algorithm was to create a human-readable representation of high dimensional patient data capable of detecting unrevealed subclusters of patients within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate feature selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification), and finally a visual representation of the top classification models is returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning, as the top-performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix), including a public lung cancer dataset that comes with the included Butterfly R package. In the included R script, a univariate feature selection method is used for the dimension reduction step, but in the future we wish to use a more powerful multivariate feature reduction method based on neural networks (Kriesel, 2007). A script written in R (designed to run in RStudio) that implements this algorithm accompanies this article and is available at http://butterflygeraci.codeplex.com/. For details on the R package or for help installing the software, refer to the accompanying document, Supporting Material and Appendix.
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
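As a simplified stand-in for the mesh Laplace-Beltrami eigenanalysis, the sketch below builds the eigenbasis of an ordinary graph Laplacian over an assumed ring-shaped sensor adjacency and uses it for spatial analysis, truncation, and synthesis of multichannel data.

```python
import numpy as np

def laplacian_basis(adjacency):
    """Eigenbasis of the (unweighted) graph Laplacian built from a sensor
    adjacency matrix -- a simplified stand-in for the mesh Laplace-Beltrami
    operator used by SPHARA."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    _, basis = np.linalg.eigh(L)        # columns ordered by spatial frequency
    return basis

def project_and_truncate(eeg, basis, k):
    """Spatial analysis/synthesis keeping only the k smoothest components."""
    coeffs = basis.T @ eeg              # eeg: (n_sensors, n_timepoints)
    coeffs[k:] = 0.0                    # dimensionality reduction / denoising
    return basis @ coeffs

n = 8                                   # assumed ring of 8 sensors
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
B = laplacian_basis(A)
eeg = np.random.default_rng(3).standard_normal((n, 100))
print(project_and_truncate(eeg, B, k=4).shape)   # -> (8, 100)
```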
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions, but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then the mapped images are smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images produced by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequencies of 0.5, 0.7, and 1.0 × the Nyquist frequency, as well as a ramp filter, were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR), and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and yielded larger noise reduction or better image feature preservation (i.e., smaller EGGR) than the conventional method.
Combined data mining/NIR spectroscopy for purity assessment of lime juice
NASA Astrophysics Data System (ADS)
Shafiee, Sahameh; Minaei, Saeid
2018-06-01
This paper reports a data mining study of the NIR spectra of lime juice samples to determine their purity (natural or synthetic). NIR spectra of 72 pure and synthetic lime juice samples were recorded in reflectance mode. Sample outliers were removed using PCA. Different data mining techniques for feature selection (a genetic algorithm (GA)) and classification (including the radial basis function (RBF) network, support vector machine (SVM), and random forest (RF) tree) were employed. Based on the results, SVM proved to be the most accurate classifier, as it achieved the highest accuracy (97%) using the raw spectrum information. The classifier accuracy dropped to 93% when the feature vector selected by the GA search method was applied as the classifier input. It can be concluded that some relevant features which produce good performance with the SVM classifier are removed by feature selection. Also, reduced spectra using PCA do not show acceptable performance (total accuracy of 66% with RBFNN), which indicates that dimensional reduction methods such as PCA do not always lead to more accurate results. These findings demonstrate the potential of combining data mining with near-infrared spectroscopy for monitoring lime juice quality in terms of natural or synthetic nature.
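The raw-spectrum versus PCA-reduced comparison can be sketched with scikit-learn as below; the data are random stand-ins for the 72 recorded spectra, and the GA feature selection step is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in data: 72 spectra x 500 wavelengths, binary purity label
rng = np.random.default_rng(4)
X = rng.random((72, 500))
y = rng.integers(0, 2, 72)

raw_svm = make_pipeline(StandardScaler(), SVC())
pca_svm = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
print(cross_val_score(raw_svm, X, y, cv=5).mean())   # SVM on raw spectra
print(cross_val_score(pca_svm, X, y, cv=5).mean())   # SVM on PCA-reduced spectra
```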
Feature reduction and payload location with WAM steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.; Lubenko, Ivans
2009-02-01
WAM steganalysis is a feature-based classifier for detecting LSB matching steganography, presented in 2006 by Goljan et al. and demonstrated to be sensitive even to small payloads. This paper makes three contributions to the development of the WAM method. First, we benchmark some variants of WAM on a number of sets of cover images, and we are able to quantify the significance of differences in results between different machine learning algorithms based on WAM features. It turns out that, like many of its competitors, WAM is not effective in certain types of cover, and furthermore it is hard to predict which types of cover are suitable for WAM steganalysis. Second, we demonstrate that only a few of the features used in WAM steganalysis do almost all of the work, so that a simplified WAM steganalyser can be constructed in exchange for a little less detection power. Finally, we demonstrate how the WAM method can be extended to provide forensic tools to identify the location (and potentially the content) of LSB matching payload, given a number of stego images with payload placed in the same locations. Although easily evaded, this is a plausible situation if the same stego key is mistakenly re-used for embedding in multiple images.
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods that can display at once, in 2 or 3 dimensions, the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
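The projection idea can be illustrated with classical MDS on a precomputed tree-to-tree distance matrix; the CCA + SGD method favored by the authors is not available in scikit-learn, so MDS serves here as one of the compared baselines, and the distance matrix is a random symmetric stand-in.

```python
import numpy as np
from sklearn.manifold import MDS

# Stand-in for a precomputed tree-to-tree distance matrix
rng = np.random.default_rng(5)
n_trees = 50
D = rng.random((n_trees, n_trees))
D = (D + D.T) / 2                     # symmetrize
np.fill_diagonal(D, 0.0)

# 2-D and 3-D embeddings of the tree landscape
for dim in (2, 3):
    coords = MDS(n_components=dim, dissimilarity='precomputed',
                 random_state=0).fit_transform(D)
    print(dim, coords.shape)
```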
Kolchinsky, A; Lourenço, A; Li, L; Rocha, L M
2013-01-01
Drug-drug interaction (DDI) is a major cause of morbidity and mortality. DDI research includes the study of different aspects of drug interactions, from in vitro pharmacology, which deals with drug interaction mechanisms, to pharmaco-epidemiology, which investigates the effects of DDI on drug efficacy and adverse drug reactions. Biomedical literature mining can aid both kinds of approaches by extracting relevant DDI signals from either the published literature or large clinical databases. However, though drug interaction is an ideal area for translational research, the inclusion of literature mining methodologies in DDI workflows is still very preliminary. One area that can benefit from literature mining is the automatic identification of a large number of potential DDIs, whose pharmacological mechanisms and clinical significance can then be studied via in vitro pharmacology and in populo pharmaco-epidemiology. We implemented a set of classifiers for identifying published articles relevant to experimental pharmacokinetic DDI evidence. These documents are important for identifying causal mechanisms behind putative drug-drug interactions, an important step in the extraction of large numbers of potential DDIs. We evaluated the performance of several linear classifiers on PubMed abstracts, under different feature transformation and dimensionality reduction methods. In addition, we investigated the performance benefits of including various publicly available named entity recognition features, as well as a set of internally developed pharmacokinetic dictionaries. We found that several classifiers performed well in distinguishing relevant and irrelevant abstracts. We found that the combination of unigram and bigram textual features gave better performance than unigram features alone, and also that normalization transforms that adjusted for feature frequency and document length improved classification. For some classifiers, such as linear discriminant analysis (LDA), proper dimensionality reduction had a large impact on performance. Finally, the inclusion of NER features and dictionaries was found not to help classification.
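The best-performing feature setup, unigrams plus bigrams with frequency and document-length normalization feeding a linear classifier, can be sketched as follows; the corpus is a toy stand-in for the PubMed abstracts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in corpus; the study used PubMed abstracts
docs = ["midazolam clearance was reduced by ketoconazole coadministration",
        "we describe the crystal structure of a bacterial enzyme",
        "plasma AUC of the probe drug increased twofold with the inhibitor",
        "population dynamics of seabirds in the north atlantic"]
labels = [1, 0, 1, 0]                  # 1 = relevant pharmacokinetic DDI evidence

# Unigrams + bigrams with TF-IDF frequency/length normalization
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
                    LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["ketoconazole increased midazolam AUC"]))
```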
NASA Astrophysics Data System (ADS)
Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko
2012-12-01
A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and the nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large vocabulary continuous speech recognition task. When the additive noise is absent, the dereverberation method based on GSS with MFT using only two microphones achieves relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieves a relative word error reduction rate of 12.8% compared to conventional CMN combined with a GSS-based additive noise reduction method. We also analyze the factors affecting the compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (the number of microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
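The core GSS operation can be sketched per frame as below, with power SS recovered at gamma = 2; the gamma, alpha, and beta values are illustrative, and the blind multi-channel estimate of the late-reverberation spectrum is assumed to be given.

```python
import numpy as np

def generalized_spectral_subtraction(mag_x, mag_n, gamma=1.0, alpha=1.0, beta=0.01):
    """Per-frame GSS on magnitude spectra:
    |S|^gamma = |X|^gamma - alpha * |N|^gamma, floored at beta * |N|^gamma
    to avoid negative values; gamma = 2 recovers power spectral subtraction.

    mag_x: observed magnitude spectrum
    mag_n: estimated noise / late-reverberation magnitude (in the paper this
           is estimated blindly with a multi-channel LMS algorithm)
    """
    sub = mag_x ** gamma - alpha * mag_n ** gamma
    floor = beta * mag_n ** gamma
    return np.maximum(sub, floor) ** (1.0 / gamma)

frame = np.abs(np.fft.rfft(np.random.default_rng(6).standard_normal(512)))
noise = 0.3 * np.ones_like(frame)      # assumed noise estimate
print(generalized_spectral_subtraction(frame, noise, gamma=1.0).shape)
```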
Parsing Flowcharts and Series-Parallel Graphs
1978-11-01
descriptions of the graph. This possible multiplicity is undesirable in most practical applications, a fact that makes particularly useful reduction...to parse TT networks, some of the features that make this parsing method useful in other cases are more naturally introduced in the context of this...as Figure 4.5 shows. This multiplicity is due to the associativity of consecutive Two Terminal Series and Two Terminal Parallel compositions. In spite
Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort.
Bernarding, Corinna; Strauss, Daniel J; Hannemann, Ronny; Seidler, Harald; Corona-Strauss, Farah I
2017-06-01
In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method is a translation of our past findings, gained from evoked electroencephalographic activity, to oscillatory EEG activity. To test this technique, electroencephalographic data were recorded from experienced hearing aid users with moderate hearing loss while they wore hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort is a useful tool for mapping the exerted effort of the participants. In addition, the results indicate that a directional processing mode can reduce listening effort in multitalker listening situations.
Mohebbi, Maryam; Ghassemian, Hassan; Asl, Babak Mohammadzadeh
2011-05-01
This paper aims to propose an effective paroxysmal atrial fibrillation (PAF) predictor based on the analysis of the heart rate variability (HRV) signal. Predicting the onset of PAF with non-invasive techniques is clinically important and can be invaluable in order to avoid useless therapeutic interventions and minimize risks to the patients. The method consists of four steps: preprocessing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected in the electrocardiogram (ECG) signal and the HRV signal is extracted. In the next step, the recurrence plot (RP) of the HRV signal is obtained and six features are extracted to characterize the basic patterns of the RP. These features consist of the length of the longest diagonal segments, the average length of the diagonal lines, entropy, trapping time, the length of the longest vertical line, and the recurrence trend. In the third step, these features are reduced to three features by the linear discriminant analysis (LDA) technique. Using LDA not only reduces the number of input features, but also increases the classification accuracy by selecting the most discriminating features. Finally, a support vector machine-based classifier is used to classify the HRV signals. The performance of the proposed method in predicting PAF episodes was evaluated using the Atrial Fibrillation Prediction Database, which consists of both 30-min ECG recordings ending just prior to the onset of PAF and segments at least 45 min distant from any PAF event. The obtained sensitivity, specificity, and positive predictivity were 96.55%, 100%, and 100%, respectively.
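The feature-reduction-plus-classification stages can be sketched with scikit-learn as below; the recurrence-plot features are random stand-ins, and note that with two classes scikit-learn's LDA yields a single discriminant component, whereas the paper reports reducing six features to three.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for the six recurrence-plot features per HRV segment
rng = np.random.default_rng(7)
X = rng.random((100, 6))       # e.g. L_max, L_mean, entropy, TT, V_max, trend
y = rng.integers(0, 2, 100)    # 1 = segment preceding a PAF episode

# LDA as a supervised dimension-reduction step feeding an SVM classifier
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
clf.fit(X, y)
print(clf.score(X, y))
```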
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-reduction feature point information of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, achieving the final depth estimation while greatly reducing the computational complexity. Compared with the traditional traversal search estimation method, although the error rate of the proposed method is reduced by 0.49, the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
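The dimension-reduction and subset-division steps can be sketched as below; the 300 face models are random stand-ins, the cluster count is assumed, and the depth-estimation step (the method of Kong D) is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# Stand-in for the training 3D face models: 300 feature vectors of length 249
rng = np.random.default_rng(8)
X = rng.random((300, 249))

emb = TSNE(n_components=2, random_state=0).fit_transform(X)    # 1x249 -> 1x2
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(emb)  # subset division
print(emb.shape, np.bincount(km.labels_))                      # subset sizes
```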
Land-Use Portfolio Modeler, Version 1.0
Taketa, Richard; Hong, Makiko
2010-01-01
Natural hazards pose significant threats to the public safety and economic health of many communities throughout the world. Community leaders and decision-makers continually face the challenges of planning and allocating limited resources to invest in protecting their communities against catastrophic losses from natural-hazard events. Public efforts to assess community vulnerability and encourage loss-reduction measures through mitigation have often focused on either aggregating site-specific estimates or adopting standards based upon broad assumptions about regional risks. The site-specific method usually provided the most accurate estimates, but was prohibitively expensive, whereas regional risk assessments were often too general to be of practical use. Policy makers lacked a systematic and quantitative method for conducting a regional-scale risk assessment of natural hazards. In response, Bernknopf and others developed the portfolio model, an intermediate-scale approach to assessing natural-hazard risks and mitigation policy alternatives. The basis for the portfolio-model approach was inspired by financial portfolio theory, which prescribes a method of optimizing return on investment while reducing risk by diversifying investments in different security types. In this context, a security type represents a unique combination of features and hazard-risk level, while financial return is defined as the reduction in losses resulting from an investment in mitigation of chosen securities. Features are selected for mitigation and are modeled like investment portfolios. Earth-science and economic data for the features are combined and processed in order to analyze each of the portfolios, which are then used to evaluate the benefits of mitigating the risk in selected locations. Ultimately, the decision maker seeks to choose a portfolio representing a mitigation policy that maximizes the expected return-on-investment, while minimizing the uncertainty associated with that return-on-investment. The portfolio model, now known as the Land-Use Portfolio Model (LUPM), provided the framework for the development of the Land-Use Portfolio Modeler, Version 1.0 software (LUPM v1.0). The software provides a geographic information system (GIS)-based modeling tool for evaluating alternative risk-reduction mitigation strategies for specific natural-hazard events. The modeler uses information about a specific natural-hazard event and the features exposed to that event within the targeted study region to derive a measure of a given mitigation strategy's effectiveness. Harnessing the spatial capabilities of a GIS enables the tool to provide a rich, interactive mapping environment in which users can create, analyze, visualize, and compare different mitigation strategies.
Topology optimization of two-dimensional elastic wave barriers
NASA Astrophysics Data System (ADS)
Van hoorickx, C.; Sigmund, O.; Schevenels, M.; Lazarov, B. S.; Lombaert, G.
2016-08-01
Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted into a design domain situated between the source and the receiver to minimize wave transmission. At low frequencies, the stiffened material reflects and guides waves away from the surface. At high frequencies, destructive interference is obtained that leads to high values of the insertion loss. To handle harmonic sources at a frequency in a given range, a uniform reduction of the response over a frequency range is pursued. The minimal insertion loss over the frequency range of interest is maximized. The resulting design contains features at depth leading to a reduction of the insertion loss at the lowest frequencies and features close to the surface leading to a reduction at the highest frequencies. For broadband sources, the average insertion loss in a frequency range is optimized. This leads to designs that especially reduce the response at high frequencies. The designs optimized for the frequency averaged insertion loss are found to be sensitive to geometric imperfections. In order to obtain a robust design, a worst case approach is followed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viayan, B.; Dimitrijevic, N. M.; Rajh, T.
Titania nanotubes having diameters of 8 to 12 nm and lengths of 50-300 nm were prepared using a hydrothermal method. Further, the titania nanotubes were calcined over the temperature range 200-800 C in order to enhance their photocatalytic properties by altering their morphology. The calcined titania nanotubes were characterized using X-ray diffraction and surface area analysis, and their morphological features were studied by scanning and transmission electron microscopy. Nanotubes calcined at 400 C showed the maximum extent of photocatalytic reduction of carbon dioxide to methane, whereas samples calcined at 600 C produced maximum photocatalytic oxidation of acetaldehyde. Electron paramagnetic resonance (EPR) spectroscopy was used to interrogate the effects of nanotube structure on charge separation and trapping as a function of calcination temperature. EPR results indicated that undercoordinated titania sites are associated with the maximum CO2 reduction occurring in nanotubes calcined at 400 C. Despite the collapse of the nanotube structure to form nanorods and the concomitant loss of surface area, the enhanced charge separation associated with increased crystallinity promoted high rates of oxidation of acetaldehyde in titania materials calcined at 600 C. These results illustrate that calcination temperature allows us to tune the morphological and surface features of the titania nanostructures for particular photocatalytic reactions.
A comparative intelligibility study of single-microphone noise reduction algorithms.
Hu, Yi; Loizou, Philipos C
2007-09-01
An evaluation of the intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise, including babble, car, street, and train noise, at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model-based, and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms that were found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
A hybrid method for classifying cognitive states from fMRI data.
Parida, S; Dehuri, S; Cho, S-B; Cacha, L A; Poznanski, R R
2015-09-01
Functional magnetic resonance imaging (fMRI) makes it possible to detect brain activities in order to elucidate cognitive states. The complex nature of fMRI data requires understanding of the analyses applied to produce possible avenues for developing models of cognitive-state classification and improving brain activity prediction. While many models for the classification of fMRI data have been developed, in this paper we present a novel hybrid technique combining the best attributes of genetic algorithms (GAs) and an ensemble decision tree technique that consistently outperforms all other methods currently used for cognitive-state classification. Specifically, this paper illustrates the combined effort of a decision-tree ensemble and GAs for feature selection through an extensive simulation study and discusses the classification performance with respect to fMRI data. We show that our proposed method exhibits a significant reduction in the number of features with a clear edge in classification accuracy over an ensemble of decision trees.
NASA Astrophysics Data System (ADS)
Wutsqa, D. U.; Marwah, M.
2017-06-01
In this paper, we consider a spatial median filter operation to reduce the noise in cervical images produced by a colposcopy tool. The backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires feature extraction using the gray level co-occurrence matrix (GLCM) method; the resulting image features are used as inputs to the BPNN model. The advantage of noise reduction is evaluated by comparing the performance of BPNN models with and without the spatial median filter operation. The experimental results show that the spatial median filter operation can improve the accuracy of the BPNN model for cervical cancer classification.
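A rough end-to-end sketch of the pipeline (median filtering, GLCM feature extraction, neural-network classification) is given below with random stand-in images; the function names follow skimage >= 0.19 (older releases spell them greycomatrix/greycoprops), and scikit-learn's MLP stands in for the BPNN.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(image):
    """Median-filter an 8-bit image, then extract four GLCM texture features."""
    smoothed = median_filter(image, size=3)           # spatial median filtering
    glcm = graycomatrix(smoothed, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ('contrast', 'homogeneity', 'energy', 'correlation')])

rng = np.random.default_rng(9)
imgs = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # stand-in images
y = rng.integers(0, 2, 40)                                      # stand-in labels
X = np.array([glcm_features(im) for im in imgs])
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
print(clf.score(X, y))
```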
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Computer-aided detection of bladder mass within non-contrast-enhanced region of CT Urography (CTU)
NASA Astrophysics Data System (ADS)
Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon; Zhou, Chuan
2016-03-01
We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). We have previously developed methods for detection of bladder masses within the contrast-enhanced region of the bladder. In this study, we investigated methods for detection of bladder masses within the non-contrast-enhanced region. The bladder was first segmented using a newly developed deep-learning convolutional neural network in combination with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed and a gray-level threshold was employed to segment the bladder wall and potential masses. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify lesion candidates as a prescreening step. The lesion candidates were segmented using our autoinitialized cascaded level set (AI-CALS) segmentation method, and 27 morphological features were extracted for each candidate. Stepwise feature selection with simplex optimization and leave-one-case-out resampling were used for training and validation of a false-positive (FP) classifier. In each leave-one-case-out cycle, features were selected from the training cases and a linear discriminant analysis (LDA) classifier was designed to merge the selected features into a single score for classification of the left-out test case. A data set of 33 cases with 42 biopsy-proven lesions in the non-contrast-enhanced region was collected. During prescreening, the system obtained 83.3% sensitivity at an average of 2.4 FPs/case. After feature extraction and FP reduction by LDA, the system achieved 81.0% sensitivity at 2.0 FPs/case, and 73.8% sensitivity at 1.5 FPs/case.
Lysevych, M; Tan, H H; Karouta, F; Fu, L; Jagadish, C
2013-04-08
In this paper we report a method to overcome the limitations of gain saturation and two-photon absorption faced by developers of high-power single-mode InP-based lasers and semiconductor optical amplifiers (SOAs), including those based on wide-waveguide or slab-coupled optical waveguide laser (SCOWL) technology. The method is based on a Y-coupling design of the laser cavity. The reduction in gain saturation and two-photon absorption in the merged beam laser structures (MBLs) is obtained by reducing the intensity of the electromagnetic field in the laser cavity. Standard ridge-waveguide lasers and MBLs were fabricated, tested, and compared. Despite a slightly higher threshold current, the reduced gain saturation in MBLs results in higher output power. The MBLs also produced a single spatial mode, as well as a strongly dominating single spectral mode, which is an inherent feature of the MBL-type cavity.
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
IMMAN: free software for information theory-based chemometric analysis.
Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo
2015-05-01
The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of the Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software (http://mobiosd-hub.com/imman-soft/), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing-value processing, dataset partitioning, and browsing. Moreover, single-parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, and comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms.
Megjhani, Murad; Terilli, Kalijah; Frey, Hans-Peter; Velazquez, Angela G; Doyle, Kevin William; Connolly, Edward Sander; Roh, David Jinou; Agarwal, Sachin; Claassen, Jan; Elhadad, Noemie; Park, Soojin
2018-01-01
Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolution dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80% of the data, while 20% were set aside for validation testing. The Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, and the Hunt-Hess and Glasgow Coma Scales. An unsupervised approach using convolution dictionary learning was used to extract features from physiological time series (systolic and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset. Models were applied to the validation dataset. The performances of the best classifiers on the validation dataset are reported by feature subset. Standard grading scale (mFS): AUC 0.54. Combined demographics and grading scales (baseline features): AUC 0.63. Kernel-derived physiologic features: AUC 0.66. Combined baseline and physiologic features with redundant feature reduction: AUC 0.71 on the derivation dataset and 0.78 on the validation dataset. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that we could incorporate individual physiologic data to achieve higher classification accuracy.
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
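The ridge-regression step at the heart of IDLS can be sketched in closed form as below; the patch layout, neighbor count, and regularization strength are assumptions for illustration.

```python
import numpy as np

def local_structure_coeffs(center, neighbors, lam=0.1):
    """Ridge-regression representation of the central macro-pixel (patch)
    in terms of its neighboring macro-pixels.

    center:    flattened central patch, shape (p,)
    neighbors: flattened neighbor patches stacked as rows, shape (k, p)
    Returns the k linear representation coefficients, i.e. the solution of
    min_w ||center - N^T w||^2 + lam * ||w||^2  =>  w = (N N^T + lam I)^-1 N center
    (lam is an assumed regularization strength).
    """
    N = np.asarray(neighbors, dtype=float)
    k = N.shape[0]
    return np.linalg.solve(N @ N.T + lam * np.eye(k), N @ center)

rng = np.random.default_rng(11)
patches = rng.random((8, 9))               # eight 3x3 neighbor patches, flattened
central = 0.5 * patches[0] + 0.5 * patches[1]
print(np.round(local_structure_coeffs(central, patches), 2))
```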
Zhu, Chengzhou; Fu, Shaofang; Song, Junhua; ...
2017-02-06
In this study, self-assembled M–N-doped carbon nanotube aerogels with a single-atom catalyst feature are reported for the first time, prepared through a one-step hydrothermal route and a subsequent facile annealing treatment. By taking advantage of the porous nanostructure, 1D nanotubes, and the single-atom catalyst feature, the resulting Fe–N-doped carbon nanotube aerogels exhibit excellent oxygen reduction reaction electrocatalytic performance, even better than commercial Pt/C in alkaline solution.
A complex noise reduction method for improving visualization of SD-OCT skin biomedical images
NASA Astrophysics Data System (ADS)
Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Khramov, Alexander G.
2014-05-01
In this paper we consider an original method for solving the noise reduction problem to improve the visualization quality of SD-OCT biomedical images of skin and tumors. The principal advantages of OCT are high resolution and the possibility of in vivo analysis. We propose a two-stage algorithm: 1) processing of the raw one-dimensional SD-OCT A-scans and 2) noise removal from the resulting B(C)-scans. The general mathematical methods of SD-OCT are unstable: if the noise of the CCD is 1.6% of the dynamic range, the resulting distortions already reach 25-40% of the dynamic range. In the first stage we use resampling of the A-scans and simple linear filters to reduce the amount of data and remove the noise of the CCD camera. The efficiency, improved throughput, and preservation of axial resolution of this approach are shown. In the second stage we use effective algorithms based on the Hilbert-Huang Transform for more accurate removal of noise peaks. The effectiveness of the proposed approach for visualization of malignant and benign skin tumors (melanoma, BCC, etc.) and a significant improvement in SNR level for different noise reduction methods are shown. We also consider a modification of this method depending on the specific hardware and software features of the OCT setup used. The basic version does not require any hardware modifications of existing equipment. The effectiveness of the proposed method for 3D visualization of tissues can simplify medical diagnosis in oncology.
Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that the choice of normalization method is difficult because the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
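Of the compared methods, probabilistic quotient normalization has a particularly compact implementation, sketched below; using the median spectrum of all samples as the reference is an assumption (a QC-derived reference could be supplied instead).

```python
import numpy as np

def pqn_normalize(X, reference=None):
    """Probabilistic quotient normalization: divide each sample by the median
    of its feature-wise quotients against a reference spectrum."""
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)       # assumed reference choice
    quotients = X / reference                  # assumes strictly positive data
    factors = np.median(quotients, axis=1)     # one dilution factor per sample
    return X / factors[:, None]

rng = np.random.default_rng(10)
spectra = rng.random((20, 100)) + 0.1
spectra[0] *= 3.0                              # simulate a 3x dilution difference
print(np.median(pqn_normalize(spectra), axis=1)[:3])
```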
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J; Tsui, B; Noo, F
Purpose: To develop a feature-preserving model-based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed an MBIR method with curvature-based regularization. The novel regularization encourages the formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue, an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out by inserting the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The Senior Author receives financial support from Siemens GmbH Healthcare.
Yosipof, Abraham; Guedes, Rita C; García-Sosa, Alfonso T
2018-01-01
Data mining approaches can uncover underlying patterns in chemical and pharmacological property space decisive for drug discovery and development. Two of the most common approaches are visualization and machine learning methods. Visualization methods use dimensionality reduction techniques in order to reduce multi-dimensional data into 2D or 3D representations with a minimal loss of information. Machine learning attempts to find correlations between specific activities or classifications for a set of compounds and their features by means of recurring mathematical models. Both approaches take advantage of the different and deep relationships that can exist between features of compounds, helpfully providing classification of compounds based on such features or, in the case of visualization methods, uncovering underlying patterns in the feature space. Drug-likeness has been studied from several viewpoints, but here we provide the first implementation in chemoinformatics of the t-Distributed Stochastic Neighbor Embedding (t-SNE) method for the visualization and representation of chemical space, and the use of different machine learning methods, separately and together, to form a new ensemble learning method called AL Boost. The models obtained from AL Boost synergistically combine decision tree, random forest (RF), support vector machine (SVM), artificial neural network (ANN), k-nearest neighbors (kNN), and logistic regression models. In this work, we show that together they form a predictive model that not only improves the predictive force but also decreases bias. This resulted in a corrected classification rate of over 0.81, as well as higher sensitivity and specificity rates for the models. In addition, good separation and models were also achieved for disease categories such as antineoplastic compounds and nervous system diseases, among others. Such models can be used to guide decisions on the feature landscape of compounds and their likeness to either drugs or other characteristics, such as the specific or multiple disease category(ies) or organ(s) of action of a molecule.
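A simplified stand-in for the AL Boost combination, using scikit-learn's soft-voting ensemble over the six model families named in the abstract, is sketched below with synthetic data; the actual AL Boost weighting scheme is not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
ensemble = VotingClassifier(
    estimators=[('dt', DecisionTreeClassifier()),
                ('rf', RandomForestClassifier()),
                ('svm', SVC(probability=True)),
                ('ann', MLPClassifier(max_iter=2000)),
                ('knn', KNeighborsClassifier()),
                ('lr', LogisticRegression())],
    voting='soft')                     # average the predicted probabilities
print(cross_val_score(ensemble, X, y, cv=5).mean())
```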
The effect of alkaline pretreatment methods on cellulose structure and accessibility
Bali, Garima; Meng, Xianzhi; Deneff, Jacob I.; ...
2014-11-24
The effects of different alkaline pretreatments on cellulose structural features and accessibility are compared and correlated with the enzymatic hydrolysis of Populus. The pretreatments are shown to modify polysaccharide and lignin content to enhance accessibility for cellulase enzymes. The highest increase in cellulose accessibility was observed in dilute sodium hydroxide, followed by methods using ammonia soaking and lime (Ca(OH)2). The biggest increase of cellulose accessibility occurs during the first 10 min of pretreatment, with further increases at a slower rate as severity increases. Low-temperature ammonia soaking at longer residence times dissolved a major portion of the hemicellulose and exhibited higher cellulose accessibility than high-temperature soaking. Moreover, the most significant reduction of degree of polymerization (DP) occurred for dilute sodium hydroxide (NaOH)- and ammonia-pretreated Populus samples. The study thus identifies important cellulose structural features and relevant parameters related to biomass recalcitrance.
Compressed learning and its applications to subcellular localization.
Zheng, Zhong-Long; Guo, Li; Jia, Jiong; Xie, Chen-Mao; Zeng, Wen-Cai; Yang, Jie
2011-09-01
One of the main challenges faced by biological applications is to predict protein subcellular localization automatically and accurately. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer take simplifying the complexity of biological systems into account. Traditionally, such bio-data are analyzed by first performing feature selection before classification. Motivated by CS (Compressed Sensing) theory, we propose a methodology that performs compressed learning with a sparseness criterion such that feature selection and dimension reduction are merged into one analysis. The proposed methodology decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that the aforementioned sparse methods are quite promising in dealing with complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.
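As a concrete, hedged illustration of the compressed-learning idea (merging dimension reduction into the learning step), the sketch below uses a Gaussian random projection as a stand-in for the CS measurement matrix, followed by a linear SVM; the data are synthetic placeholders, not the Gram-negative protein dataset.

```python
# Compressed learning sketch: project high-dimensional bio-features through a
# random (CS-style) measurement matrix, then classify in the compressed domain.
from sklearn.datasets import make_classification
from sklearn.random_projection import GaussianRandomProjection
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Placeholder for high-dimensional protein feature vectors.
X, y = make_classification(n_samples=400, n_features=500, n_informative=30,
                           random_state=0)

model = make_pipeline(
    GaussianRandomProjection(n_components=60, random_state=0),  # compression
    LinearSVC(dual=False),                                      # classification
)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```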
CW-SSIM kernel based random forest for image classification
NASA Astrophysics Data System (ADS)
Fan, Guangzhe; Wang, Zhou; Wang, Jiheng
2010-07-01
The complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compare our proposed approach with the direct random forest method without a kernel and the popular kernel-learning method, the support vector machine. Our test results based on both simulated and real-world data suggest that the proposed approach outperforms traditional methods without the feature selection procedure.
Neural Network Target Identification System for False Alarm Reduction
NASA Technical Reports Server (NTRS)
Ye, David; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feed-forward back-propagation neural network (NN) is then trained to classify each feature vector and remove false positives. This paper discusses testing of system performance and the parameter optimization process that adapts the system to various targets and datasets. The test results show that the system was successful in substantially reducing the false positive rate when tested on a sonar image dataset.
Recognizing emotions from EEG subbands using wavelet analysis.
Candra, Henry; Yuwono, Mitchell; Handojoseno, Ardi; Chai, Rifai; Su, Steven; Nguyen, Hung T
2015-01-01
Objectively recognizing emotions is a particularly important task to ensure that patients with emotional symptoms are given the appropriate treatments. The aim of this study was to develop an emotion recognition system using Electroencephalogram (EEG) signals to identify four emotions: happy, sad, angry, and relaxed. We approached this objective by first investigating the relevant EEG frequency bands and then deciding on the appropriate feature extraction method. Two features were considered, namely: 1. Wavelet Energy, and 2. Wavelet Entropy. EEG channel reduction was then implemented to reduce the complexity of the features. The ground truth emotional states of each subject were inferred using Russell's circumplex model of emotion, that is, by mapping the subjectively reported degrees of valence (pleasure) and arousal to the appropriate emotions - for example, an emotion with high valence and high arousal is equivalent to a 'happy' emotional state, while low valence and low arousal is equivalent to a 'sad' emotional state. The Support Vector Machine (SVM) classifier was then used for mapping each feature vector into corresponding discrete emotions. The results presented in this study indicated that wavelet features extracted from the alpha, beta and gamma bands seem to provide the necessary information for describing the aforementioned emotions. Using the DEAP (Dataset for Emotion Analysis using electroencephalogram, Physiological and Video Signals), our proposed method achieved an average sensitivity and specificity of 77.4% ± 14.1% and 69.1% ± 12.8%, respectively.
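A minimal sketch of the two features above, assuming the PyWavelets package: wavelet energy per decomposition level and the total wavelet entropy of an epoch, fed to an SVM. The EEG epochs and four-class emotion labels are synthetic placeholders, not DEAP recordings.

```python
# Wavelet energy / wavelet entropy features per EEG epoch, then SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_entropy(sig, wavelet="db4", level=5):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])  # energy per subband
    p = energies / energies.sum()                          # relative energies
    entropy = -np.sum(p * np.log(p + 1e-12))               # total wavelet entropy
    return np.concatenate([energies, [entropy]])

rng = np.random.default_rng(0)
epochs = rng.normal(size=(80, 1024))     # placeholder EEG epochs
labels = rng.integers(0, 4, size=80)     # happy / sad / angry / relaxed

X = np.vstack([wavelet_energy_entropy(e) for e in epochs])
clf = SVC().fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```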
NASA Astrophysics Data System (ADS)
Nie, Ning; Zhang, Liuyang; Fu, Junwei; Cheng, Bei; Yu, Jiaguo
2018-05-01
Photocatalytic reduction of CO2 into hydrocarbon fuels has been regarded as a promising approach to ease the greenhouse effect and the energy shortage. Herein, an electrostatic self-assembly method was exploited to prepare g-C3N4/ZnO composite microspheres. This method simply utilizes the opposite surface charges of the components, achieving a hierarchical structure with intimate contact between them. A much improved photocatalytic CO2 reduction activity was attained. The CH3OH production rate was 1.32 μmol h-1 g-1, which was 2.1 and 4.1 times that of the pristine ZnO and g-C3N4, respectively. This facile design bestowed the g-C3N4/ZnO composite with extended light absorption caused by a multi-light-scattering effect. It also guaranteed the uniform distribution of g-C3N4 nanosheets on the surface of the ZnO microspheres, maximizing their advantages and synergistic effect. Most importantly, the mechanism behind the preeminent performance was proposed and validated based on the direct Z-scheme; the recombination rate was considerably suppressed. This work highlights the advantage of constructing hierarchical direct Z-scheme structures in photocatalytic CO2 reduction reactions.
Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch
Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed
2012-01-01
Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested during the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Color features were then extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training the ANN with full features and training the ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with using full features, the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category. PMID:23202043
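The reduced-feature variant above amounts to a PCA step in front of an ANN. A minimal scikit-learn sketch follows; the color-feature matrix and three ripeness classes are synthetic placeholders for the real FFB measurements.

```python
# PCA-reduced color features feeding an ANN (MLP) classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder color features for three ripeness categories.
X, y = make_classification(n_samples=200, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                       # reduced feature set
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
print("5-fold CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```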
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that there is no particular advantage among the quantitative estimation methods, nor any advantage to performing dose reduction via tube current reduction rather than via temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
NASA Astrophysics Data System (ADS)
Torrado, Jesús; Hu, Bin; Achúcarro, Ana
2017-10-01
We update the search for features in the cosmic microwave background (CMB) power spectrum due to transient reductions in the speed of sound, using Planck 2015 CMB temperature and polarization data. We enlarge the parameter space to much higher oscillatory frequencies of the feature, and define a robust prior independent of the ansatz for the reduction, guaranteed to reproduce the assumptions of the theoretical model. This prior exhausts the regime in which features coming from a Gaussian reduction are easily distinguishable from the baseline cosmology. We find a fit to the ℓ ≈ 20-40 minus/plus structure in Planck TT power spectrum, as well as features spanning along higher ℓ's (ℓ ≈ 100-1500). None of those fits is statistically significant, either in terms of their improvement of the likelihood or in terms of the Bayes ratio. For the higher-ℓ ones, their oscillatory frequency (and their amplitude to a lesser extent) is tightly constrained, so they can be considered robust, falsifiable predictions for their correlated features in the CMB bispectrum. We compute said correlated features, and assess their signal to noise and correlation with the secondary bispectrum of the correlation between the gravitational lensing of the CMB and the integrated Sachs-Wolfe effect. We compare our findings to the shape-agnostic oscillatory template tested in Planck 2015, and we comment on some tantalizing coincidences with some of the traits described in Planck's 2015 bispectrum data.
Liu, Z; Sun, J; Smith, M; Smith, L; Warr, R
2013-11-01
Computer-assisted diagnosis (CAD) of malignant melanoma (MM) has been advocated to help clinicians achieve a more objective and reliable assessment. However, conventional CAD systems examine only the features extracted from digital photographs of lesions; failure to incorporate patients' personal information constrains the applicability in clinical settings. Our objective was to develop a new CAD system to improve the performance of automatic diagnosis of melanoma which, for the first time, incorporates digital features of lesions with important patient metadata into a learning process. Thirty-two features were extracted from digital photographs to characterize skin lesions. Patients' personal information, such as age, gender, and lesion site, and their combinations, was quantified as metadata. The integration of digital features and metadata was realized through an extended Laplacian eigenmap, a dimensionality-reduction method grouping lesions with similar digital features and metadata into the same classes. The diagnosis reached 82.1% sensitivity and 86.1% specificity when only multidimensional digital features were used, but improved to 95.2% sensitivity and 91.0% specificity after metadata were incorporated appropriately. The proposed system achieves a level of sensitivity comparable with that of experienced dermatologists aided by conventional dermoscopes. This demonstrates the potential of our method for assisting clinicians in diagnosing melanoma, and the benefit it could provide to patients and hospitals by greatly reducing unnecessary excisions of benign naevi. This paper proposes an enhanced CAD system incorporating clinical metadata into the learning process for automatic classification of melanoma. Results demonstrate that the additional metadata and the mechanism to incorporate them are useful for improving the CAD of melanoma. © 2013 British Association of Dermatologists.
NASA Astrophysics Data System (ADS)
Lai, Chunren; Guo, Shengwen; Cheng, Lina; Wang, Wensheng; Wu, Kai
2017-02-01
It is very important to differentiate temporal lobe epilepsy (TLE) patients from healthy people and to localize the abnormal brain regions of TLE patients. Cortical features and changes can reveal the unique anatomical patterns of brain regions from structural MR images. In this study, structural MR images from 28 normal controls (NC), 18 left TLE (LTLE), and 21 right TLE (RTLE) patients were acquired, and four types of cortical feature, namely cortical thickness (CTh), cortical surface area (CSA), gray matter volume (GMV), and mean curvature (MCu), were explored for discriminative analysis. Three feature selection methods, independent-sample t-test filtering, the sparse-constrained dimensionality reduction model (SCDRM), and support vector machine-recursive feature elimination (SVM-RFE), were investigated to extract dominant regions with significant differences among the compared groups for classification using the SVM classifier. The results showed that SVM-RFE achieved the highest performance (most classifications with more than 92% accuracy), followed by the SCDRM and the t-test. In particular, the cortical surface area and gray matter volume exhibited prominent discriminative ability, and the performance of the SVM was improved significantly when the four cortical features were combined. Additionally, the dominant regions with higher classification weights were mainly located in the temporal and frontal lobes, including the inferior temporal, entorhinal cortex, fusiform, parahippocampal cortex, middle frontal, and frontal pole. It was demonstrated that the cortical features provide effective information to determine the abnormal anatomical pattern and that the proposed method has the potential to improve the clinical diagnosis of TLE.
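Of the three selection methods compared, SVM-RFE is the easiest to sketch with scikit-learn: a linear-kernel SVM supplies per-feature weights, and the weakest features are recursively eliminated. The feature matrix (one measure per cortical region) and group labels below are synthetic placeholders.

```python
# SVM-RFE sketch: recursively prune the features with the smallest SVM weights.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Placeholder: 67 subjects x 68 cortical-region measures.
X, y = make_classification(n_samples=67, n_features=68, n_informative=10,
                           random_state=0)

# A linear kernel is required so per-feature weights (coef_) exist for ranking.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=1).fit(X, y)
print("dominant feature indices:", selector.get_support(indices=True))
```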
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
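The core of SPHARA is an eigenanalysis of a discrete Laplacian over the sensor mesh. The sketch below uses the simplest discretization, a combinatorial graph Laplacian on a Delaunay triangulation of random 2D sensor positions, rather than the paper's FEM Laplace-Beltrami operator, to show how the eigenvectors act as a spatial basis for dimensionality reduction.

```python
# Graph-Laplacian eigenbasis over a triangulated sensor layout (SPHARA-style).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pos = rng.normal(size=(64, 2))               # placeholder 2D sensor positions
tri = Delaunay(pos)

# Adjacency from triangle edges, then the combinatorial Laplacian L = D - A.
n = len(pos)
A = np.zeros((n, n))
for a, b, c in tri.simplices:
    for i, j in ((a, b), (b, c), (a, c)):
        A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors ordered by spatial frequency; the first k form the basis.
eigvals, eigvecs = np.linalg.eigh(L)
basis = eigvecs[:, :20]

# Dimensionality reduction: project one multichannel sample and reconstruct.
x = rng.normal(size=n)                        # one time sample over all channels
x_hat = basis @ (basis.T @ x)
print("energy retained by 20 of 64 coefficients:",
      np.sum(x_hat ** 2) / np.sum(x ** 2))
```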
Stellar refraction - A tool to monitor the height of the tropopause from space
NASA Technical Reports Server (NTRS)
Schuerman, D. W.; Giovane, F.; Greenberg, J. M.
1975-01-01
Calculations of stellar refraction for a setting or rising star as viewed from a spacecraft show that the tropopause is a discernible feature in a plot of refraction vs time. The height of the tropopause is easily obtained from such a plot. Since the refraction suffered by the starlight appears to be measurable with some precision from orbital altitudes, this technique is suggested as a method for remotely monitoring the height of the tropopause. Although limited to nighttime measurements, the method is independent of supporting data or model fitting and easily lends itself to on-line data reduction.
Methods in Astronomical Image Processing
NASA Astrophysics Data System (ADS)
Jörsäter, S.
Chapter outline: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future.
Willeboordse, Maartje; van de Kant, Kim D. G.; Tan, Frans E. S.; Mulkens, Sandra; Schellings, Julia; Crijns, Yvonne; van der Ploeg, Liesbeth; van Schayck, Constant P.; Dompeling, Edward
2016-01-01
Background There is increasing evidence that obesity is related to asthma development and severity. However, it is largely unknown whether weight reduction can influence asthma management, especially in children. Objective To determine the effects of a multifactorial weight reduction intervention on asthma management in overweight/obese children with (a high risk of developing) asthma. Methods An 18-month weight-reduction randomized controlled trial was conducted in 87 children with overweight/obesity and asthma. Every six months, measurements of anthropometry, lung function, lifestyle parameters and inflammatory markers were assessed. Longitudinal analyses were performed with linear mixed models. Results After 18 months, the body mass index-standard deviation score decreased by -0.14±0.29 points (p<0.01) in the intervention group and -0.12±0.34 points (p<0.01) in the control group. This change over time did not differ between groups (p>0.05). Asthma features (including asthma control and asthma-related quality of life) and lung function indices (static and dynamic) improved significantly over time in both groups. The FVC% predicted improved over time by 10.1 ± 8.7% in the intervention group (p<0.001), which was significantly greater than the 6.1 ± 8.4% in the control group (p<0.05). Conclusions & clinical relevance Clinically relevant improvements in body weight, lung function and asthma features were found in both the intervention and control group, although some effects were more pronounced in the intervention group (FVC, asthma control, and quality of life). This implies that a weight reduction intervention could be clinically beneficial for children with asthma. Trial Registration ClinicalTrials.gov NCT00998413 PMID:27294869
NASA Astrophysics Data System (ADS)
Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.
2017-11-01
The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or that mismatches an expected response. Brain activity during monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs), revealing complex cerebral responses to deviant sensory stimuli. Development of accurate error detection systems is of great importance, both concerning practical applications and in investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations in the processing of erroneous sensory information at each level of complexity, we employ Support Vector Machine (SVM) classifiers with various learning methods and kernels, using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features, we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers, with mean accuracies of 93% and 91%, respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
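Scikit-learn's SequentialFeatureSelector (available from version 0.24) gives a direct, if simplified, rendering of the SFS + SVM framework above; the ERP time-window features and correct/error labels are synthetic placeholders.

```python
# Sequential Forward Selection wrapped around an SVM classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# Placeholder ERP time-windowed features and correct/incorrect labels.
X, y = make_classification(n_samples=120, n_features=40, n_informative=8,
                           random_state=0)

sfs = SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=8,
                                direction="forward", cv=5).fit(X, y)
print("selected feature indices:", sfs.get_support(indices=True))
```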
Sengur, Abdulkadir
2008-03-01
In the last two decades, the use of artificial intelligence methods in medical analysis has been increasing. This is mainly because the effectiveness of classification and detection systems has improved a great deal to help medical experts in diagnosis. In this work, we investigate the use of principal component analysis (PCA), an artificial immune system (AIS) and fuzzy k-NN to distinguish normal from abnormal heart valves using Doppler heart sounds. The proposed heart valve disorder detection system is composed of three stages. The first stage is the pre-processing stage; filtering, normalization and white de-noising are the processes used here. Feature extraction is the second stage, during which wavelet packet decomposition was used, and wavelet entropy values were then taken as features. To reduce the complexity of the system, PCA was used for feature reduction. In the classification stage, AIS and fuzzy k-NN were used. To evaluate the performance of the proposed methodology, a comparative study was carried out using a data set containing 215 samples. The validity of the proposed method is measured by the sensitivity and specificity parameters; a 95.9% sensitivity and 96% specificity rate was obtained.
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimensionality of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We conclude that 6~8 channels of the model, with a principal component feature vector covering at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
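The "90% cumulative variance" rule used above picks the smallest number of principal components whose explained variance reaches the threshold; a minimal sketch with placeholder e-nose responses:

```python
# Choosing the number of principal components by cumulative explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 16))                  # placeholder sensor-array features

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.90)) + 1      # first k reaching 90% variance
print(f"{k} components retain {cumvar[k - 1]:.1%} of the variance")

# Equivalently, scikit-learn accepts a variance fraction directly.
X_reduced = PCA(n_components=0.90).fit_transform(X)
```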
An efficient visualization method for analyzing biometric data
NASA Astrophysics Data System (ADS)
Rahmes, Mark; McGonagle, Mike; Yates, J. Harlan; Henning, Ronda; Hackett, Jay
2013-05-01
We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduces manual effort and increases the likelihood of a match being brought to an examiner's attention from either a manual or lights-out application. We discuss the functionality of FeatureSCOPE™, which provides an efficient tool for feature analysis and quality control of biometric extracted features. Biometric databases must be checked for accuracy across a large volume of data attributes. Our solution accelerates the review of features by up to a factor of 100. Qualitative results and cost reduction are demonstrated through efficient parallel visual review for quality control. Our process automatically sorts and filters features for examination, and packs these into a condensed view. An analyst can then rapidly page through screens of features and flag and annotate outliers as necessary.
Mohebbi, Maryam; Ghassemian, Hassan; Asl, Babak Mohammadzadeh
2011-01-01
This paper aims to propose an effective paroxysmal atrial fibrillation (PAF) predictor based on the analysis of the heart rate variability (HRV) signal. Predicting the onset of PAF based on non-invasive techniques is clinically important and can be invaluable in order to avoid useless therapeutic interventions and to minimize risks for patients. The method consists of four steps: preprocessing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the next step, the recurrence plot (RP) of the HRV signal is obtained and six features are extracted to characterize the basic patterns of the RP. These features consist of the length of the longest diagonal segments, the average length of the diagonal lines, entropy, trapping time, the length of the longest vertical line, and the recurrence trend. In the third step, these features are reduced to three features by the linear discriminant analysis (LDA) technique. Using LDA not only reduces the number of input features, but also increases the classification accuracy by selecting the most discriminating features. Finally, a support vector machine-based classifier is used to classify the HRV signals. The performance of the proposed method in predicting PAF episodes was evaluated using the Atrial Fibrillation Prediction Database, which consists of both 30-minute ECG recordings ending just prior to the onset of PAF and segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, and positive predictivity were 96.55%, 100%, and 100%, respectively. PMID:22606666
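A minimal sketch of the recurrence plot itself, the structure from which the six features above (diagonal and vertical line statistics, entropy, trapping time, recurrence trend) are measured; the embedding parameters, threshold, and RR-interval series are illustrative assumptions.

```python
# Recurrence plot of an HRV (RR-interval) series via time-delay embedding.
import numpy as np

def recurrence_plot(x, dim=3, tau=1, eps=0.05):
    n = len(x) - (dim - 1) * tau
    # Time-delay embedding: each row is one reconstructed state vector.
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    # Recurrence matrix: 1 where two embedded states are closer than eps.
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(int)

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10) + 0.01 * rng.normal(size=300)
rp = recurrence_plot(rr)
print("recurrence rate:", rp.mean())   # fraction of recurrent point pairs
```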
Recovery of sparse translation-invariant signals with continuous basis pursuit
Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero
2013-01-01
We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562
Omitted variable bias in crash reduction factors.
DOT National Transportation Integrated Search
2015-09-01
Transportation planners and traffic engineers are increasingly turning to crash reduction factors to evaluate changes in road geometric and design features in order to reduce crashes. Crash reduction factors are typically estimated based on segment...
Fast traffic sign recognition with a rotation invariant binary pattern based feature.
Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun
2015-01-19
Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.
Motorcyclists safety system to avoid rear end collisions based on acoustic signatures
NASA Astrophysics Data System (ADS)
Muzammel, M.; Yusoff, M. Zuki; Malik, A. Saeed; Mohamad Saad, M. Naufal; Meriaudeau, F.
2017-03-01
In many Asian countries, motorcyclists have a higher fatality rate than users of other vehicles. Among many other factors, rear-end collisions also contribute to these fatalities. Collision detection systems can be useful to minimize these accidents. However, designing an efficient and cost-effective collision detection system for motorcyclists is still a major challenge. In this paper, an acoustic-information-based, cost-effective and efficient collision detection system is proposed for motorcycle applications. The proposed technique uses the Short Time Fourier Transform (STFT) to extract features from the audio signal, and Principal Component Analysis (PCA) is used to reduce the feature vector length. The reduction of feature length further increases the performance of this technique. The proposed technique was tested on a self-recorded dataset and achieves an accuracy of 97.87%. We believe that this method can help to reduce a significant number of motorcycle accidents.
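The front end described above reduces to an STFT magnitude spectrogram flattened into a vector, then PCA. A minimal SciPy/scikit-learn sketch, with synthetic clips standing in for the recorded audio:

```python
# STFT features per audio clip, shortened with PCA.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
fs = 16000
clips = rng.normal(size=(40, fs))            # placeholder 1-second audio clips

features = []
for clip in clips:
    f, t, Z = stft(clip, fs=fs, nperseg=512)
    features.append(np.abs(Z).ravel())       # magnitude spectrogram as a vector
X = np.array(features)

X_reduced = PCA(n_components=20).fit_transform(X)  # reduced feature vectors
print(X.shape, "->", X_reduced.shape)
```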
Robust Learning of High-dimensional Biological Networks with Bayesian Networks
NASA Astrophysics Data System (ADS)
Nägele, Andreas; Dejori, Mathäus; Stetter, Martin
Structure learning of Bayesian networks applied to gene expression data has become a potentially useful method to estimate interactions between genes. However, the NP-hardness of Bayesian network structure learning renders the reconstruction of the full genetic network with thousands of genes infeasible. Consequently, the maximal network size is usually restricted dramatically to a small set of genes (corresponding to variables in the Bayesian network). Although this feature reduction step makes structure learning computationally tractable, on the downside, the learned structure might be adversely affected by the introduction of missing genes. Additionally, gene expression data are usually very sparse with respect to the number of samples, i.e., the number of genes is much greater than the number of different observations. Given these problems, learning robust network features from microarray data is a challenging task. This chapter presents several approaches tackling the robustness issue in order to obtain a more reliable estimation of learned network features.
NASA Technical Reports Server (NTRS)
Dube, Michael J.; Gamwell, Wayne R.
2011-01-01
Several International Space Station (ISS) hardware components use Loctite (and other polymer-based liquid locking compounds (LLCs)) as a means of meeting the secondary (redundant) locking feature requirement for fasteners. The primary locking method is the fastener preload, with the application of the Loctite compound, which, when cured, is intended to resist preload reduction. The reliability of these compounds has been questioned due to a number of failures during ground testing. The ISS Program Manager requested the NASA Engineering and Safety Center (NESC) to characterize and quantify the sensitivities of Loctite being used as a secondary locking feature. The findings and recommendations provided in this investigation apply to the anaerobic LLCs Loctite 242 and 271; no other anaerobic LLCs were evaluated for this investigation. This document contains the findings and recommendations of the NESC investigation.
Chen, Xiang; Velliste, Meel; Murphy, Robert F.
2010-01-01
Proteomics, the large scale identification and characterization of many or all proteins expressed in a given cell type, has become a major area of biological research. In addition to information on protein sequence, structure and expression levels, knowledge of a protein’s subcellular location is essential to a complete understanding of its functions. Currently subcellular location patterns are routinely determined by visual inspection of fluorescence microscope images. We review here research aimed at creating systems for automated, systematic determination of location. These employ numerical feature extraction from images, feature reduction to identify the most useful features, and various supervised learning (classification) and unsupervised learning (clustering) methods. These methods have been shown to perform significantly better than human interpretation of the same images. When coupled with technologies for tagging large numbers of proteins and high-throughput microscope systems, the computational methods reviewed here enable the new subfield of location proteomics. This subfield will make critical contributions in two related areas. First, it will provide structured, high-resolution information on location to enable Systems Biology efforts to simulate cell behavior from the gene level on up. Second, it will provide tools for Cytomics projects aimed at characterizing the behaviors of all cell types before, during and after the onset of various diseases. PMID:16752421
Iliyasu, Abdullah M; Fatichah, Chastine
2017-12-19
A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (Fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smeared (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles), representing a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria in terms of classification accuracy based on the choice of best features and in terms of the different categories of cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared the classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between the QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, manifest in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-01-01
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
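The single-atom step at the heart of CD-SaMP is ordinary matching pursuit: at each iteration, pick the one dictionary atom most correlated with the residual and subtract its contribution. The sketch below uses a plain cosine dictionary and a residual-based stopping rule as illustrative stand-ins for the authors' composite (modulation) dictionary and attenuation-coefficient termination condition.

```python
# Single-atom matching pursuit over a unit-norm dictionary.
import numpy as np

def matching_pursuit(x, D, n_iter=20, tol=1e-3):
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual                 # correlation with every atom
        k = np.argmax(np.abs(corr))           # single best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
        if np.linalg.norm(residual) < tol * np.linalg.norm(x):
            break                             # residual-based termination
    return coeffs, residual

n = 256
t = np.arange(n)
D = np.column_stack([np.cos(2 * np.pi * f * t / n) for f in range(1, 65)])
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms

# Sparse synthetic "vibration" signal: two atoms plus noise.
x = 2.0 * D[:, 5] + 0.5 * D[:, 20] + 0.05 * np.random.default_rng(0).normal(size=n)
coeffs, _ = matching_pursuit(x, D)
print("atoms recovered:", np.nonzero(np.abs(coeffs) > 0.1)[0])
```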
A new method for mapping multidimensional data to lower dimensions
NASA Technical Reports Server (NTRS)
Gowda, K. C.
1983-01-01
A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical from the point of view of storage and processing time. It has good dimensionality-reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable either for a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to demonstrate the efficacy of the proposed procedure.
Avrutskiĭ, G Ia; Allikmets, L Kh; Neduva, A A; Zharkovskiĭ, A M; Beliakov, A V
1984-01-01
Clinical and experimental studies into the phenomenon of adaptation to neuroleptic agents and into methods of overcoming it were carried out. An experimental study of long-term administration of haloperidol revealed the formation of adaptation to the drug, which can be overcome by a zigzag-like sharp elevation of the dosage followed by rapid reduction to the baseline level. The trial of this method under clinical conditions showed that the experimental findings on the specific features of the formation and prevention of secondary therapeutic resistance can usefully be applied on a large scale.
Crowd Sourcing to Improve Urban Stormwater Management
NASA Astrophysics Data System (ADS)
Minsker, B. S.; Band, L. E.; Heidari Haratmeh, B.; Law, N. L.; Leonard, L. N.; Rai, A.
2017-12-01
Over half of the world's population currently lives in urban areas, a number predicted to grow to 60 percent by 2030. Urban areas face unprecedented and growing challenges that threaten society's long-term wellbeing, including poverty; chronic health problems; widespread pollution and resource degradation; and increased natural disasters. These are "wicked" problems involving "systems of systems" that require unprecedented information sharing and collaboration across disciplines and organizational boundaries. Cities are recognizing that the increasing stream of data and information ("Big Data"), informatics, and modeling can support rapid advances on these challenges. Nonetheless, information technology solutions can only be effective in addressing these challenges through deeply human and systems perspectives. A stakeholder-driven approach ("crowd sourcing") is needed to develop urban systems that address multiple needs, such as parks that capture and treat stormwater while improving human and ecosystem health and wellbeing. We have developed informatics- and Cloud-based collaborative methods that enable crowd sourcing of green stormwater infrastructure (GSI: rain gardens, bioswales, trees, etc.) design and management. The methods use machine learning, social media data, and interactive design tools (called IDEAS-GI) to identify locations and features of GSI that perform best on a suite of objectives, including life cycle cost, stormwater volume reduction, and air pollution reduction. Insights will be presented on GI features that best meet stakeholder needs and are therefore most likely to improve human wellbeing and be well maintained.
PCA-HOG symmetrical feature based diseased cell detection
NASA Astrophysics Data System (ADS)
Wan, Min-jie
2016-04-01
A histogram of oriented gradients (HOG) feature is applied to the field of diseased cell detection, enabling diseased cells in high-resolution tissue images to be detected rapidly, accurately and efficiently. Firstly, motivated by symmetrical cellular forms, a new HOG symmetrical feature based on the traditional HOG feature is proposed to suit the conditions of cell detection. Secondly, considering that the high dimensionality of the traditional HOG descriptor consumes substantial memory and runtime in practical applications, a classical dimension reduction method called principal component analysis (PCA) is used to reduce the dimension of the high-dimensional HOG descriptor. As a result, computational speed is increased greatly while the accuracy of detection is kept within a proper range. Thirdly, a support vector machine (SVM) classifier is trained with the PCA-HOG symmetrical features proposed above. Finally, practical tissue images are detected and analyzed by the SVM classifier. To verify the effectiveness of this new algorithm, it is applied to diseased cell detection on a sample of 200 H&E (hematoxylin & eosin) stained high-resolution histopathological images collected from 20 breast cancer patients. The experiment shows that the average processing rate can reach 25 frames per second and the detection accuracy can reach 92.1%.
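A minimal sketch of the detection pipeline's learning core, assuming scikit-image and scikit-learn: HOG descriptors per patch, PCA to cut the descriptor dimension, then an SVM; random patches and labels stand in for the H&E images.

```python
# HOG -> PCA -> SVM pipeline for cell-patch classification.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
patches = rng.random(size=(100, 64, 64))   # placeholder grayscale cell patches
labels = rng.integers(0, 2, size=100)      # diseased / healthy

X = np.array([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
              for p in patches])
X_reduced = PCA(n_components=50).fit_transform(X)   # high-dim HOG -> 50 dims

clf = SVC().fit(X_reduced, labels)
print("HOG dim:", X.shape[1], "-> PCA dim:", X_reduced.shape[1],
      " train accuracy:", clf.score(X_reduced, labels))
```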
An audiovisual emotion recognition system
NASA Astrophysics Data System (ADS)
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
2007-12-01
Human emotions can be expressed by many bio-symbols. Speech and facial expression are two of them. They are both regarded as emotional information which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice, and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough set-based feature selection is a good method for dimension reduction. So 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected, owing to synchronization, when speech and video are fused together. The experimental results have demonstrated that this system performs well in real-time practice and has a high recognition rate.
You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing
2013-01-01
Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks; furthermore, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art technique, the Support Vector Machine (SVM). Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. Besides, PCA-EELM performs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered as a promising and powerful new tool for predicting PPIs with excellent performance and less time.
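An extreme learning machine is a single hidden layer with random, fixed input weights and a least-squares output layer; PCA-EELM aggregates several of them by majority vote to damp the dependence on those random weights. A minimal NumPy sketch under these assumptions, with synthetic sequence-derived features standing in for the DIP data:

```python
# PCA + ensemble of extreme learning machines with majority voting.
import numpy as np
from sklearn.decomposition import PCA

def train_elm(X, y, n_hidden, rng):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))    # placeholder protein-pair feature vectors
y = rng.integers(0, 2, size=500)   # interacting / non-interacting labels

Xr = PCA(n_components=30).fit_transform(X)          # discriminative low-dim set
models = [train_elm(Xr, y, n_hidden=100, rng=rng) for _ in range(9)]
votes = np.stack([predict_elm(m, Xr) for m in models])
consensus = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote
print("train accuracy:", (consensus == y).mean())
```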
ELM mitigation studies in JET and implications for ITER
NASA Astrophysics Data System (ADS)
de La Luna, Elena
2009-11-01
Type I edge localized modes (ELMs) remain a serious concern for ITER because of the high transient heat and particle flux that can lead to rapid erosion of the divertor plates. This has stimulated worldwide research on the exploration of different methods to avoid or at least mitigate the ELM energy loss while maintaining adequate confinement. ITER will require reliable ELM control over a wide range of operating conditions, including changes in the edge safety factor; therefore, a suite of different techniques is highly desirable. In JET, several techniques have been demonstrated for controlling the frequency and size of type I ELMs, including resonant perturbations of the edge magnetic field (RMP), ELM magnetic triggering by fast vertical movement of the plasma column ("vertical kicks") and ELM pacing using pellet injection. In this paper we present results from recent dedicated experiments in JET focusing on integrating the different ELM mitigation methods into similar plasma scenarios. Plasma parameter scans provide a comparison of the performance of the different techniques in terms of both the reduction in ELM size and the impact of each control method on plasma confinement. The compatibility of different ELM mitigation schemes has also been investigated. The plasma response to RMP and vertical kicks during the ELM mitigation phase shares common features: the reduction in ELM size (up to a factor of 3) is accompanied by a reduction in pedestal pressure (mainly due to a loss of density) with only a minor (<10%) reduction of the stored energy. Interestingly, it has been found that the combined application of RMP and kicks leads to a reduction of the threshold perturbation level (vertical displacement in the case of the kicks) necessary for ELM mitigation to occur. The implications of these results for ITER are discussed.
McGrath, Susan P; Ryan, Kathy L; Wendelken, Suzanne M; Rickards, Caroline A; Convertino, Victor A
2011-02-01
The primary objective of this study was to determine whether alterations in the pulse oximeter waveform characteristics would track progressive reductions in central blood volume. We also assessed whether changes in the pulse oximeter waveform provide an indication of blood loss in the hemorrhaging patient before changes in standard vital signs. Pulse oximeter data from finger, forehead, and ear pulse oximeter sensors were collected from 18 healthy subjects undergoing progressive reduction in central blood volume induced by lower body negative pressure (LBNP). Stroke volume measurements were simultaneously recorded using impedance cardiography. The study was conducted in a research laboratory setting where no interventions were performed. Pulse amplitude, width, and area under the curve (AUC) features were calculated from each pulse wave recording. Amalgamated correlation coefficients were calculated to determine the relationship between the changes in pulse oximeter waveform features and changes in stroke volume with LBNP. For pulse oximeter sensors on the ear and forehead, reductions in pulse amplitude, width, and area were strongly correlated with progressive reductions in stroke volume during LBNP (R(2) ≥ 0.59 for all features). Changes in pulse oximeter waveform features were observed before profound decreases in arterial blood pressure. The best correlations between pulse features and stroke volume were obtained from the forehead sensor area (R(2) = 0.97). Pulse oximeter waveform features returned to baseline levels when central blood volume was restored. These results support the use of pulse oximeter waveform analysis as a potential diagnostic tool to detect clinically significant hypovolemia before the onset of cardiovascular decompensation in spontaneously breathing patients.
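The three waveform features named above (per-pulse amplitude, width, and area under the curve) can be extracted with simple peak and trough detection; the sketch below, assuming SciPy, runs on a synthetic waveform standing in for real pulse oximeter data.

```python
# Per-pulse amplitude, width, and AUC from a pulse oximeter waveform.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                      # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * 1.2 * t) ** 3 + 0.02 * rng.normal(size=t.size)

troughs, _ = find_peaks(-ppg, distance=fs // 2)   # pulse onsets

for on, off in zip(troughs[:-1], troughs[1:]):    # one pulse: trough to trough
    pulse = ppg[on:off] - ppg[on]
    amplitude = pulse.max()                       # pulse amplitude
    width = (off - on) / fs                       # pulse width in seconds
    auc = pulse.sum() / fs                        # Riemann-sum area under curve
    print(f"amp={amplitude:.2f}  width={width:.2f}s  AUC={auc:.3f}")
```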
New molecular features of cowpea bean (Vigna unguiculata L. Walp) β-vignin.
de Souza Ferreira, Ederlan; Capraro, Jessica; Sessa, Fabio; Magni, Chiara; Demonte, Aureluce; Consonni, Alessandro; Augusto Neves, Valdir; Maffud Cilli, Eduardo; Duranti, Marcello; Scarafoni, Alessio
2018-02-01
Cowpea seed β-vignin, a vicilin-like globulin, has been shown to exert various favourable health effects, including blood cholesterol reduction in animal models. A simple, scalable enrichment procedure is crucial for further studies and for tailored applications of this seed protein. A chromatography-independent fractionation method was used to obtain a protein preparation with a high degree of homogeneity. Further purification was pursued to deepen the molecular characterisation of β-vignin. The results showed: (i) differing glycosylation patterns of the two constituent polypeptides, in agreement with amino acid sequence features; (ii) the seed accumulation of a gene product never before identified; and (iii) the metal-binding capacity of the native protein, a property observed in only a few other legume seed vicilins.
Age and gender-invariant features of handwritten signatures for verification systems
NASA Astrophysics Data System (ADS)
AbdAli, Sura; Putz-Leszczynska, Joanna
2014-11-01
The handwritten signature is one of the most natural biometric traits, biometrics being the study of human physiological and behavioral patterns. Behavioral biometrics includes signatures, which may differ with the owner's gender or age because of intrinsic or extrinsic factors. This paper presents the results of the authors' research on the influence of age and gender on verification factors. The experiments in this research were conducted using a database that contains signatures and their associated metadata. The algorithm used is based on the universal forgery feature idea, where a global classifier is able to classify a signature as genuine or forged without actual knowledge of the signature template or its owner. Additionally, dimensionality reduction with the mRMR method is discussed.
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision that can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) the systems usually involve high-dimensional data, which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme that integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances recognition accuracy.
Shi, Chengdi; Cai, Leyi; Hu, Wei; Sun, Junying
2017-09-19
Objective: To study a method of X-ray diagnosis of unstable pelvic fractures displaced in three-dimensional (3D) space and its clinical application in closed reduction. Five models of hemipelvic displacement were made in an adult pelvic specimen. Anteroposterior radiographs of the pelvis were analyzed in PACS. The method of X-ray diagnosis was then applied in closed reductions. From February 2012 to June 2016, 23 patients (15 men, 8 women; mean age, 43.4 years) with unstable pelvic fractures were included. All patients were treated by closed reduction and percutaneous cannulated screw fixation of the pelvic ring. According to Tile's classification, the patients were classified as type B1 in 7 cases, B2 in 3, B3 in 3, C1 in 5, C2 in 3, and C3 in 2. The operation time and intraoperative blood loss were recorded. Postoperative images were evaluated by Matta radiographic standards. The five models of displacement were made successfully, and their X-ray features were analyzed. For the clinical patients, the average operation time was 44.8 min (range, 20-90 min) and the average intraoperative blood loss was 35.7 mL (range, 20-100 mL). According to the Matta standards, 7 cases were excellent, 12 good, and 4 fair. Displacements of unstable pelvic fractures in 3D space can be diagnosed rapidly by X-ray analysis to guide closed reduction, with satisfactory clinical outcomes.
Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.
2018-01-01
Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation and in parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix, resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data, employing newly introduced graphical and coherence metrics.
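As a point of reference for the reduction methods named above, the sketch below implements Classical Guyan Reduction (static condensation) on a toy spring-mass chain; the matrices and the master/slave partition are illustrative assumptions, not the SLS models.

```python
# Hedged sketch of Classical Guyan Reduction: condense stiffness K and mass M
# onto a chosen set of "master" DOFs via the static transformation matrix T.
import numpy as np

def guyan_reduce(K, M, masters):
    n = K.shape[0]
    slaves = np.setdiff1d(np.arange(n), masters)
    Kss = K[np.ix_(slaves, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    T = np.zeros((n, len(masters)))
    T[masters, np.arange(len(masters))] = 1.0
    # Slave DOFs follow the masters through -Kss^{-1} Ksm (static condensation).
    T[slaves] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# Toy 3-DOF spring-mass chain, keeping DOFs 0 and 2 as masters.
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
M = np.eye(3)
Kr, Mr = guyan_reduce(K, M, np.array([0, 2]))
print(Kr, Mr, sep="\n")
```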
NASA Astrophysics Data System (ADS)
Damayanti, A.; Werdiningsih, I.
2018-03-01
The brain is the organ that coordinates all the activities that occur in our bodies, and small abnormalities in the brain will affect body activity. A brain tumor is a mass formed as a result of abnormal and uncontrolled cell growth in the brain. MRI is a non-invasive medical test that is useful to doctors in diagnosing and treating medical conditions. Classifying brain tumors supports correct decisions and appropriate treatment during the care of a brain tumor patient. In this study, classification is performed to determine the type of brain disease, namely Alzheimer's, glioma, carcinoma, or normal, using energy coefficients and ANFIS. The stages in classifying MR brain images are feature extraction, feature reduction, and classification. The result of feature extraction is an approximation vector for each wavelet decomposition level. Feature reduction reduces the features by using the energy coefficients of the approximation vector; with an energy coefficient of 100 per feature, the reduced feature vector is 1 x 52 pixels. This vector is the input to classification using ANFIS with Fuzzy C-Means and FLVQ clustering and LM back-propagation. The recognition success rate for MR brain images using ANFIS-FLVQ, ANFIS, and LM back-propagation was 100%.
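One plausible reading of the wavelet-energy feature step is sketched below: a multilevel 2-D wavelet decomposition of an MR slice with one normalized energy value per approximation level. The wavelet family ('db4') and level count are assumptions, not details from the study.

```python
# Hedged sketch: per-level energy of the wavelet approximation coefficients.
import numpy as np
import pywt

def wavelet_energy_features(image: np.ndarray, levels: int = 3) -> np.ndarray:
    feats = []
    current = image.astype(float)
    for _ in range(levels):
        # dwt2 returns the approximation and the three detail sub-bands.
        current, (ch, cv, cd) = pywt.dwt2(current, "db4")
        feats.append(np.sum(current ** 2) / current.size)  # normalized energy
    return np.array(feats)

print(wavelet_energy_features(np.random.rand(64, 64)))
```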
Toward automated denoising of single molecular Förster resonance energy transfer data
NASA Astrophysics Data System (ADS)
Lee, Hao-Chih; Lin, Bo-Lin; Chang, Wei-Hau; Tu, I.-Ping
2012-01-01
A wide-field two-channel fluorescence microscope is a powerful tool, as it allows for the study of the conformation dynamics of hundreds to thousands of immobilized single molecules by Förster resonance energy transfer (FRET) signals. To date, the data reduction from a movie to a final set containing meaningful single-molecule FRET (smFRET) traces has involved human inspection and intervention at several critical steps, greatly hampering efficiency at the post-imaging stage. To facilitate the data reduction from smFRET movies to smFRET traces and to address the noise-limited issues, we developed a statistical denoising system toward fully automated processing. This data reduction system embeds several novel approaches. First, for background subtraction, a high-order singular value decomposition (HOSVD) method is employed to extract spatial and temporal features. Second, to register and map the two color channels, the spots representing bleed-through from the donor channel to the acceptor channel are used. Finally, correlation analysis and a likelihood-ratio statistic for change point detection (CPD) are developed to study the two channels simultaneously, resolve FRET states, and report the dwell time of each state. The performance of our method has been checked using both simulated and real data.
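To illustrate the HOSVD step, the sketch below computes mode-wise singular vectors of a movie stack and a low-rank reconstruction that could serve as a background estimate; the ranks are illustrative assumptions, and this is not the authors' exact pipeline.

```python
# Hedged HOSVD sketch on an (x, y, t) stack: SVD each mode unfolding, project
# onto the leading singular vectors, then expand back to a smooth background.
import numpy as np

def hosvd_background(movie: np.ndarray, ranks=(4, 4, 2)) -> np.ndarray:
    factors = []
    for mode, r in enumerate(ranks):
        unfolded = np.moveaxis(movie, mode, 0).reshape(movie.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])              # leading singular vectors per mode
    core = movie
    for U in factors:                         # project each mode onto its basis
        core = np.tensordot(core, U, axes=([0], [0]))
    background = core
    for U in factors:                         # expand back to full size
        background = np.tensordot(background, U, axes=([0], [1]))
    return background

movie = np.random.rand(32, 32, 100)           # placeholder (x, y, t) stack
print(hosvd_background(movie).shape)          # (32, 32, 100)
```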
Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li
2015-11-03
Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide the entire HHT marginal spectrum (HMS) into window spectrums, after which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis is constructed, termed by its elements HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracies of the ST-SVM and HHT-SVM models are significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500-800 and an m range of 50-300 for the REB defect dataset with Gauss white noise added at a signal-to-noise ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy and good performance under Gauss white noise.
Speckle-modulation for speckle reduction in optical coherence tomography
NASA Astrophysics Data System (ADS)
Liba, Orly; Lew, Matthew D.; SoRelle, Elliott D.; Dutta, Rebecca; Sen, Debasish; Moshfeghi, Darius M.; Chu, Steven; de la Zerda, Adam
2018-02-01
Optical coherence tomography (OCT) is a powerful biomedical imaging technology that relies on the coherent detection of backscattered light to image tissue morphology in vivo. As a consequence, OCT is susceptible to coherent noise, known as speckle noise, which imposes significant limitations on its diagnostic capabilities. Here we show Speckle-Modulating OCT (SM-OCT), a method based purely on light manipulation, which can remove speckle noise, including noise originating from sample multiple back-scattering. SM-OCT accomplishes this by creating and averaging an unlimited number of scans with uncorrelated speckle patterns, without compromising spatial resolution. The uncorrelated speckle patterns are created by scrambling the phase of the light with sub-resolution features, using a moving ground-glass diffuser in the optical path of the sample arm. This method can be implemented in existing OCTs as a relatively low-cost add-on. SM-OCT speckle statistics follow the expected decrease in speckle contrast as the number of averaged scans increases. Within a scattering phantom, SM-OCT provides a 2.5-fold increase in effective resolution compared to conventional OCT. Using SM-OCT, we reveal small structures in the tissues of living animals, such as the inner stromal structure of a live mouse cornea, the fine structures inside the mouse pinna, and sweat ducts and Meissner's corpuscles in human fingertip skin, features that are otherwise obscured by speckle noise when using conventional OCT or OCT with current state-of-the-art speckle reduction methods. Our results indicate that SM-OCT has the potential to improve the current diagnostic and intra-operative capabilities of OCT.
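The averaging principle behind SM-OCT can be illustrated numerically: for fully developed speckle, the contrast (standard deviation over mean) of an average of N uncorrelated patterns falls roughly as 1/sqrt(N). The sketch below uses synthetic exponentially distributed intensities, not an OCT simulation.

```python
# Hedged numerical illustration: speckle contrast of N-frame averages.
import numpy as np

rng = np.random.default_rng(0)
for n in (1, 4, 16, 64):
    # Fully developed speckle intensity is exponentially distributed.
    frames = rng.exponential(scale=1.0, size=(n, 256, 256))
    avg = frames.mean(axis=0)
    contrast = avg.std() / avg.mean()
    print(f"N={n:3d}  contrast={contrast:.3f}  1/sqrt(N)={1/np.sqrt(n):.3f}")
```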
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed that combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected using an active appearance model. From these, shape-based recognition is performed using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By assigning the input facial image the expression whose SVM output is minimal, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid
2017-10-21
Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step toward optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected because laboratory-scale methods require large amounts of material and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing regarding impurity depletion and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and on product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.
Integrand Reduction Reloaded: Algebraic Geometry and Finite Fields
NASA Astrophysics Data System (ADS)
Sameshima, Ray D.; Ferroglia, Andrea; Ossola, Giovanni
2017-01-01
The evaluation of scattering amplitudes in quantum field theory allows us to compare the phenomenological predictions of particle theory with measurements at collider experiments. The study of scattering amplitudes, in terms of their symmetries and analytic properties, provides a theoretical framework for developing techniques and efficient algorithms for the evaluation of physical cross sections and differential distributions. Tree-level calculations have been known for a long time. Loop amplitudes, which are needed to reduce the theoretical uncertainty, are more challenging since they involve a large number of Feynman diagrams, expressed as integrals of rational functions. At one loop, the problem has been solved thanks to the combined effect of integrand reduction, such as the OPP method, and unitarity. However, plenty of work is still needed at higher orders, starting with the two-loop case. Recently, integrand reduction has been revisited using algebraic geometry. In this presentation, we review the salient features of integrand reduction for dimensionally regulated Feynman integrals and describe an interesting technique for their reduction based on multivariate polynomial division. We also show a novel approach to improve its efficiency by introducing finite fields. Supported in part by the National Science Foundation under Grant PHY-1417354.
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa
2018-07-01
Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free-text forensic autopsy reports by comparing various schemes for feature extraction, term weighting or feature value representation, text classification, and feature reduction. For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed, and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four performance measures, i.e., overall accuracy, macro precision, macro F-measure, and macro recall. From the experiments, it was found that unigram features obtained the highest performance compared to bigram, trigram, and hybrid-gram features. Furthermore, among the feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results when compared with binary frequency and normalized term frequency with inverse document frequency. The chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among the text classification algorithms, the support vector machine classifier outperformed random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. Our results and comparisons hold practical importance and serve as references for future works. Moreover, the comparison outputs will act as state-of-the-art techniques against which future proposals can be compared.
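The best-performing combination reported above (unigram features, TF-IDF weighting, chi-square reduction, and a linear SVM) maps directly onto a standard scikit-learn pipeline; the sketch below uses placeholder reports and a small feature budget, since the study's corpus and parameter settings are not given here.

```python
# Hedged sketch: unigram TF-IDF -> chi-square selection -> linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 1))),  # unigrams with TF-IDF
    ("chi2", SelectKBest(chi2, k=5)),                # chi-square reduction (tune k)
    ("svm", LinearSVC()),                            # linear SVM classifier
])

# Placeholder autopsy snippets and causes of death, for illustration only.
reports = ["blunt force trauma to the head",
           "occlusion of the coronary artery with infarction",
           "skull fracture with subdural hemorrhage",
           "severe myocardial scarring and cardiomegaly"]
labels = ["trauma", "cardiac", "trauma", "cardiac"]

pipeline.fit(reports, labels)
print(pipeline.predict(["evidence of coronary occlusion"]))
```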
Improved neutron-gamma discrimination for a 3He neutron detector using subspace learning methods
Wang, C. L.; Funk, L. L.; Riedel, R. A.; ...
2017-02-10
3He gas based neutron linear-position-sensitive detectors (LPSDs) have been applied in many neutron scattering instruments. Traditional Pulse-Height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) yields neutron-gamma efficiency ratios on the order of 10⁵-10⁶. The NGD ratios of 3He detectors need to be improved for even better scientific results from neutron scattering. Digital Signal Processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher linear discriminant analysis (FLDA) and three multivariate analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10²-10³ times compared with the traditional PHA method. Finally, our results indicate that the NGD capabilities of 3He tube detectors can be significantly improved with subspace-learning-based methods, which may result in reduced data-collection time and better data quality for further data reduction.
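Of the discrimination methods listed, the Fisher linear discriminant step admits a compact illustration: project event features onto the direction that best separates the two classes. The sketch below uses synthetic stand-ins for rise-time and pulse-amplitude features, not detector data.

```python
# Hedged sketch: FLDA separating two synthetic event populations.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
neutrons = rng.normal(loc=[1.0, 3.0], scale=0.3, size=(1000, 2))
gammas = rng.normal(loc=[0.4, 1.0], scale=0.3, size=(1000, 2))
X = np.vstack([neutrons, gammas])
y = np.array([1] * 1000 + [0] * 1000)

flda = LinearDiscriminantAnalysis().fit(X, y)
scores = flda.decision_function(X)        # 1-D discriminant score per event
print("separation of class means:",
      round(scores[y == 1].mean() - scores[y == 0].mean(), 2))
```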
Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.
Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed
2018-01-01
The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system, 3D GAIT, followed by a discussion of how studies in the field of gait biomechanics fit the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods-based gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules, such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.
Kesharaju, Manasa; Nagarajah, Romesh
2015-09-01
The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic-inspection-based classification system would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented from the region of interest. An artificial neural network classifier is employed to evaluate the performance of these features. Genetic-algorithm-based feature selection is then performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic-algorithm-based technique in terms of classification accuracy and the selection of an optimal number of features. The experimental results confirm that the features identified by Principal Component Analysis lead to better classification performance (96%) than those selected by the genetic algorithm (94%).
Patterning and reduction of graphene oxide using femtosecond-laser irradiation
NASA Astrophysics Data System (ADS)
Kang, SeungYeon; Evans, Christopher C.; Shukla, Shobha; Reshef, Orad; Mazur, Eric
2018-07-01
Graphene has emerged as one of the most versatile materials ever discovered due to its extraordinary electronic, optical, thermal, and mechanical properties. However, device fabrication is a well-known challenge and requires novel fabrication methods to realize the complex integration of graphene-based devices. Here, we demonstrate direct laser writing of reduced graphene oxide using femtosecond-laser irradiation at λ = 795 nm. We perform a systematic study of the reduction process of graphene oxide to graphene by varying both the laser fluence and the pulse repetition rate. Our observations show that the reduction has both thermal and non-thermal features, and suggest that we can achieve better resolution and conductivity using kHz pulse trains than using MHz pulse trains or a continuous-wave laser. Our reduced graphene oxide lines written at a 10-kHz repetition rate exhibit a five-order-of-magnitude decrease in resistivity compared to a non-irradiated control sample. This study provides new insight into the reduction process of graphene oxide and opens doors to achieving a high degree of flexibility and control in the fabrication of graphene layers.
Speckle noise reduction for optical coherence tomography based on adaptive 2D dictionary
NASA Astrophysics Data System (ADS)
Lv, Hongli; Fu, Shujun; Zhang, Caiming; Zhai, Lin
2018-05-01
As a high-resolution biomedical imaging modality, optical coherence tomography (OCT) is widely used in medical sciences. However, OCT images often suffer from speckle noise, which can mask important image information and thus reduce the accuracy of clinical diagnosis. Taking full advantage of nonlocal self-similarity and adaptive 2D-dictionary-based sparse representation, in this work a speckle noise reduction algorithm is proposed for despeckling OCT images. To reduce speckle noise while preserving local image features, similar nonlocal patches are first extracted from the noisy image and put into groups using a gamma-distribution-based block matching method. An adaptive 2D dictionary is then learned for each patch group. Unlike traditional vector-based sparse coding, we express each image patch as a linear combination of a few matrices. This image-to-matrix method can exploit the local correlation between pixels. Since each image patch might belong to several groups, the despeckled OCT image is finally obtained by aggregating all filtered image patches. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of objective metrics and visual inspection.
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population-average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population-average treatment parameters and conditional subpopulation-average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework
NASA Astrophysics Data System (ADS)
Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco
2017-11-01
We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree-adaptation technique tailored to be effective for LES applications is proposed, and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.
Li, Shelly-Anne; Jeffs, Lianne; Barwick, Melanie; Stevens, Bonnie
2018-05-05
Organizational contextual features have been recognized as important determinants for implementing evidence-based practices across healthcare settings for over a decade. However, implementation scientists have not reached consensus on which features are most important for implementing evidence-based practices. The aims of this review were to identify the most commonly reported organizational contextual features that influence the implementation of evidence-based practices across healthcare settings, and to describe how these features affect implementation. An integrative review was undertaken following literature searches in CINAHL, MEDLINE, PsycINFO, EMBASE, Web of Science, and Cochrane databases from January 2005 to June 2017. English language, peer-reviewed empirical studies exploring organizational context in at least one implementation initiative within a healthcare setting were included. Quality appraisal of the included studies was performed using the Mixed Methods Appraisal Tool. Inductive content analysis informed data extraction and reduction. The search generated 5152 citations. After removing duplicates and applying eligibility criteria, 36 journal articles were included. The majority (n = 20) of the study designs were qualitative, 11 were quantitative, and 5 used a mixed methods approach. Six main organizational contextual features (organizational culture; leadership; networks and communication; resources; evaluation, monitoring and feedback; and champions) were most commonly reported to influence implementation outcomes in the selected studies across a wide range of healthcare settings. We identified six organizational contextual features that appear to be interrelated and work synergistically to influence the implementation of evidence-based practices within an organization. Organizational contextual features did not influence implementation efforts independently from other features. Rather, features were interrelated and often influenced each other in complex, dynamic ways to effect change. These features corresponded to the constructs in the Consolidated Framework for Implementation Research (CFIR), which supports the use of CFIR as a guiding framework for studies that explore the relationship between organizational context and implementation. Organizational culture was most commonly reported to affect implementation. Leadership exerted influence on the five other features, indicating it may be a moderator or mediator that enhances or impedes the implementation of evidence-based practices. Future research should focus on how organizational features interact to influence implementation effectiveness.
NASA Astrophysics Data System (ADS)
Miura, Yasunari; Sugiyama, Yuki
2017-12-01
We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed from these coarse variables. We apply this method to the analysis of the traffic model called the optimal velocity model and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster, i.e., a traffic jam.
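A minimal sketch of the diffusion-map construction is given below: Gaussian affinities between states, row-normalization into a Markov matrix, and the leading nontrivial eigenvectors as coarse-grained coordinates. The kernel bandwidth and the random placeholder data are assumptions.

```python
# Hedged diffusion-map sketch: affinity kernel -> Markov matrix -> eigenvectors.
import numpy as np

def diffusion_map(X: np.ndarray, n_coords: int = 2, eps: float = 1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    W = np.exp(-d2 / eps)                                 # Gaussian affinities
    P = W / W.sum(axis=1, keepdims=True)                  # row-stochastic matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_coords + 1]        # skip the trivial eigenvector (eigenvalue 1)
    return vecs.real[:, idx] * vals.real[idx]

X = np.random.default_rng(1).normal(size=(200, 4))   # placeholder many-body states
print(diffusion_map(X).shape)                        # (200, 2)
```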
NASA Astrophysics Data System (ADS)
Ivanovici, A. M.
1980-03-01
Various physiological and biochemical methods have been proposed for assessing the effects of environmental perturbation on aquatic organisms. The success of these methods as diagnostic tools has, however, been limited. This paper proposes that adenylate energy charge overcomes some of these limitations. The adenylate energy charge (AEC) is calculated from the concentrations of adenine nucleotides, AEC = ([ATP] + ½[ADP]) / ([ATP] + [ADP] + [AMP]), and reflects the metabolic potential available to an organism. Several features of this method are: correlation of specific values with physiological condition or growth state, a defined range of values, fast response times, and high precision. Several examples from laboratory and field experiments are given to demonstrate these features. The test organisms used (mollusc species) were exposed to a variety of environmental perturbations, including salinity reduction, hydrocarbons, and low doses of heavy metals. The studies performed indicate that the energy charge may be a useful measure in the assessment of environmental impact. Its use is restricted, however, as several limitations exist that need to be fully evaluated. Further work relating values to population characteristics of multicellular organisms needs to be completed before the method can become a predictive tool for management.
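The AEC formula transcribes directly into code; the nucleotide concentrations below are made-up illustrative values.

```python
# Direct transcription of AEC = ([ATP] + 1/2[ADP]) / ([ATP] + [ADP] + [AMP]).
def adenylate_energy_charge(atp: float, adp: float, amp: float) -> float:
    return (atp + 0.5 * adp) / (atp + adp + amp)

# Illustrative concentrations (arbitrary units); healthy organisms are commonly
# reported near 0.8-0.9 on this scale.
print(adenylate_energy_charge(atp=2.5, adp=0.6, amp=0.2))  # ≈ 0.85
```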
NASA Astrophysics Data System (ADS)
Lohweg, Volker; Schaede, Johannes; Türke, Thomas
2006-02-01
The authenticity checking and inspection of banknotes is a highly labour-intensive process in which, traditionally, every note on every sheet is inspected manually. However, with the advent of more and more sophisticated security features, both visible and invisible, and the requirement of cost reduction in the printing process, it is clear that automation is required. As more print techniques and new security features are established, total quality in security, authenticity, and banknote printing must be assured, which calls for a broader sensorial concept. We propose a concept for both authenticity checking and inspection methods for pattern recognition and classification of securities and banknotes, based on sensor fusion and fuzzy interpretation of data measures. In this approach, different methods of authenticity analysis and print flaw detection are combined, which can be used in vending or sorting machines as well as in printing machines. Usually, only the existence or appearance of colours and their textures are checked by cameras. Our method combines visible camera images with IR-spectrally sensitive sensors and with acoustical and other measurements, such as the temperature and pressure of printing machines.
Li, Ziyi; Safo, Sandra E; Long, Qi
2017-07-11
Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related with glioblastoma. The proposed sparse PCA methods Fused and Grouped sparse PCA can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings and potentially providing insights on molecular underpinnings of complex diseases.
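For context, the sketch below runs ordinary sparse PCA from scikit-learn on placeholder data to show the kind of sparse, interpretable loadings the paper builds on; the authors' Fused and Grouped variants add graph-based penalties that this baseline does not include.

```python
# Hedged baseline sketch: standard sparse PCA, not the Fused/Grouped methods.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # placeholder expression matrix

spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
scores = spca.fit_transform(X)
loadings = spca.components_

# Sparsity aids interpretation: most variable loadings are exactly zero.
print("nonzero loadings per component:", (loadings != 0).sum(axis=1))
```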
A new time-frequency method for identification and classification of ball bearing faults
NASA Astrophysics Data System (ADS)
Attoui, Issam; Fergani, Nadir; Boutasseta, Nadir; Oudjani, Brahim; Deliou, Adel
2017-06-01
In order to diagnose faults in ball bearings, which are among the most critical components of rotating machinery, this paper presents a time-frequency procedure incorporating a new feature extraction step that combines the classical wavelet packet decomposition energy distribution technique with a new feature extraction technique based on the selection of the most impulsive frequency bands. In the proposed procedure, as a pre-processing step, the most impulsive frequency bands are first selected under different bearing conditions using a combination of the Fast Fourier Transform (FFT) and Short-Frequency Energy (SFE) algorithms. Secondly, once the most impulsive frequency bands are selected, the measured machinery vibration signals are decomposed into frequency sub-bands using the discrete Wavelet Packet Decomposition (WPD) technique to maximize the detection of their frequency contents, and the most useful sub-bands are then represented in the time-frequency domain using the Short-Time Fourier Transform (STFT) algorithm to identify exactly which frequency components are present in those sub-bands. Once the proposed feature vector is obtained, three feature dimensionality reduction techniques are employed: Linear Discriminant Analysis (LDA), a feedback wrapper method, and Locality Sensitive Discriminant Analysis (LSDA). Lastly, the Adaptive Neuro-Fuzzy Inference System (ANFIS) algorithm is used for instantaneous identification and classification of bearing faults. To evaluate the performance of the proposed method, different testing data sets are applied to the trained ANFIS model, covering healthy and faulty bearings under various load levels, fault severities, and rotating speeds. The experimental results prove that the proposed method can serve as an intelligent bearing fault diagnosis system.
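The wavelet-packet energy-distribution part of the feature vector can be sketched as follows: decompose the vibration signal to a fixed WPD level and take the normalized energy of each terminal sub-band. The wavelet family and decomposition level are assumptions, not the paper's settings.

```python
# Hedged sketch: normalized sub-band energies from a wavelet packet decomposition.
import numpy as np
import pywt

def wpd_energy_features(signal: np.ndarray, wavelet: str = "db4", level: int = 3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")          # sub-bands, low to high
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    return energies / energies.sum()                   # energy distribution

t = np.linspace(0, 1, 2048)
vibration = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
print(wpd_energy_features(vibration))                  # 2**3 = 8 features
```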
Localized contourlet features in vehicle make and model recognition
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, B. S.
2009-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are based solely on Automatic Number Plate Recognition (ANPR). Several vehicle MMR systems have been proposed in the literature. In parallel, the usefulness of multi-resolution feature analysis techniques leading to efficient object classification algorithms has received close attention from the research community. To this effect, Contourlet transforms, which can provide an efficient directional multi-resolution image representation, have recently been introduced. An attempt has already been made in the literature to use Curvelet/Contourlet transforms in vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain that is capable of increasing classification rates by up to 4% compared to the previously proposed Contourlet-based vehicle MMR approach, in which the features are non-localized and thus result in sub-optimal classification. Further, we show that the proposed algorithm can achieve an increased classification accuracy of 96% at significantly lower computational complexity due to the use of Two-Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction, which preserves features with high between-class variance and low within-class variance.
NASA Astrophysics Data System (ADS)
Uchiyama, Yoshikazu; Gao, Xin; Hara, Takeshi; Fujita, Hiroshi; Ando, Hiromichi; Yamakawa, Hiroyasu; Asano, Takahiko; Kato, Hiroki; Iwama, Toru; Kanematsu, Masayuki; Hoshi, Hiroaki
2008-03-01
The detection of unruptured aneurysms is a major subject in magnetic resonance angiography (MRA). However, their accurate detection is often difficult because of the overlapping between the aneurysm and the adjacent vessels on maximum intensity projection images. The purpose of this study is to develop a computerized method for the detection of unruptured aneurysms in order to assist radiologists in image interpretation. The vessel regions were first segmented using gray-level thresholding and a region growing technique. The gradient concentration (GC) filter was then employed for the enhancement of the aneurysms. The initial candidates were identified in the GC image using a gray-level threshold. For the elimination of false positives (FPs), we determined shape features and an anatomical location feature. Finally, rule-based schemes and quadratic discriminant analysis were employed along with these features for distinguishing between the aneurysms and the FPs. The sensitivity for the detection of unruptured aneurysms was 90.0% with 1.52 FPs per patient. Our computerized scheme can be useful in assisting the radiologists in the detection of unruptured aneurysms in MRA images.
Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for the analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" by either domain knowledge or intuition, we explicitly optimize the filter on training time series to maximize the robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time-series subsequence matching establish the merits of the proposed algorithm.
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically inspired models and algorithms are considered promising sensor-array signal processing methods for electronic noses. Feature selection is one of the most important issues in developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were included in the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern-space crowding. We conclude that 6∼8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3∼5 pattern classes, considering the trade-off between time consumption and classification rate.
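The 90%-cumulative-variance rule quoted above is directly expressible in scikit-learn, which accepts a variance fraction as the number of components; the data below are placeholders.

```python
# Hedged sketch: keep the fewest components reaching 90% cumulative variance.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(2).normal(size=(120, 30))   # placeholder sensor features

pca = PCA(n_components=0.90)    # a fraction in (0, 1) sets the variance target
Z = pca.fit_transform(X)
print(Z.shape[1], "components retain",
      round(pca.explained_variance_ratio_.sum(), 3), "of the variance")
```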
Mahieu, Nathaniel G; Patti, Gary J
2017-10-03
When using liquid chromatography/mass spectrometry (LC/MS) to perform untargeted metabolomics, it is now routine to detect tens of thousands of features from biological samples. Poor understanding of the data, however, has complicated interpretation and masked the number of unique metabolites actually being measured in an experiment. Here we place an upper bound on the number of unique metabolites detected in Escherichia coli samples analyzed with one untargeted metabolomics method. We first group multiple features arising from the same analyte, which we call "degenerate features", using a context-driven annotation approach. Surprisingly, this analysis revealed thousands of previously unreported degeneracies that reduced the number of unique analytes to ∼2961. We then applied an orthogonal approach to remove nonbiological features from the data using the ¹³C-based credentialing technology. This further reduced the number of unique analytes to fewer than 1000. Our 90% reduction in data is 5-fold greater than that of previously published studies. On the basis of these results, we propose an alternative approach to untargeted metabolomics that relies on thoroughly annotated reference data sets. To this end, we introduce the creDBle database (http://creDBle.wustl.edu), which contains accurate mass, retention time, and MS/MS fragmentation data as well as annotations of all credentialed features.
Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In the FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. A dimensionality reduction technique based on this complementary eigenvector analysis can be described for two classes, the desired class and the background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting the few eigenvectors most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT-based technique reduces data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
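A minimal two-class FKT sketch is given below: whiten the summed class covariance, then eigendecompose one class in the whitened space, where the two classes share eigenvectors with eigenvalues lam and 1 - lam. The synthetic data stand in for desired-class and clutter spectra.

```python
# Hedged sketch of the two-class Fukunaga-Koontz transform.
import numpy as np

def fkt(X1, X2, eps=1e-10):
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    d, V = np.linalg.eigh(S1 + S2)
    P = V @ np.diag(1.0 / np.sqrt(d + eps))   # whitening operator for S1 + S2
    lam, U = np.linalg.eigh(P.T @ S1 @ P)
    # Columns of P @ U with lam near 1 best represent class 1; near 0, class 2.
    return P @ U, lam

rng = np.random.default_rng(3)
target = rng.normal(size=(500, 10)) * np.linspace(0.2, 2.0, 10)
clutter = rng.normal(size=(500, 10)) * np.linspace(2.0, 0.2, 10)
basis, lam = fkt(target, clutter)
print(np.round(lam, 2))   # eigenvalues in [0, 1]; the extremes discriminate best
```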
Zeid, Elias Abou; Sereshkeh, Alborz Rezazadeh; Chau, Tom
2016-12-01
In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.
Khushaba, Rami N; Takruri, Maen; Miro, Jaime Valls; Kodagoda, Sarath
2014-07-01
Recent studies in electromyogram (EMG) pattern recognition reveal a gap between research findings and a viable clinical implementation of myoelectric control strategies. One of the important factors contributing to the limited performance of such controllers in practice is the variation in limb position associated with normal use, as it results in different EMG patterns for the same movements when carried out at different positions. However, the end goal of the myoelectric control scheme is to allow amputees to control their prostheses in an intuitive and accurate manner regardless of the limb position at which the movement is initiated. In an attempt to reduce the impact of limb position on EMG pattern recognition, this paper proposes a new feature extraction method that extracts a set of power spectrum characteristics directly in the time domain. The end goal is to form a set of features invariant to limb position. Specifically, the proposed method estimates the spectral moments, spectral sparsity, spectral flux, irregularity factor, and the signals' power spectrum correlation. This is achieved by using Fourier transform properties to form invariants to amplification, translation, and signal scaling, providing an efficient and accurate representation of the underlying EMG activity. Additionally, due to the inherent temporal structure of the EMG signal, the proposed method is applied to the global segments of EMG data as well as to sliced segments using multiple overlapped windows. The performance of the proposed features is tested on EMG data collected from eleven subjects performing eight classes of movements, each at five different limb positions. Practical results indicate that the proposed feature set can achieve a significant reduction in classification error rates, in comparison to other methods, with ≈8% error on average across all subjects and limb positions. A real-time implementation and demonstration is also provided and made available as a video supplement (see Appendix A).
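The core trick named above, deriving power-spectrum characteristics directly in the time domain, rests on Parseval's theorem: differentiation in time weights the spectrum by frequency, so sums of squared differences estimate even spectral moments without an FFT. The sketch below shows this idea with one common irregularity-factor definition; it is illustrative, not the authors' full invariant feature set.

```python
# Hedged sketch: time-domain estimates of even spectral moments via Parseval.
import numpy as np

def td_spectral_moments(x: np.ndarray) -> np.ndarray:
    dx = np.diff(x)            # first difference  -> weights spectrum by f^2
    ddx = np.diff(x, n=2)      # second difference -> weights spectrum by f^4
    m0 = np.sum(x ** 2)        # zeroth spectral moment (total power)
    m2 = np.sum(dx ** 2)       # second spectral moment
    m4 = np.sum(ddx ** 2)      # fourth spectral moment
    irregularity = m2 / np.sqrt(m0 * m4)   # one common irregularity factor
    return np.log(np.array([m0, m2, m4, irregularity]))

emg_window = np.random.default_rng(4).normal(size=256)   # placeholder EMG segment
print(td_spectral_moments(emg_window))
```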
An accurate boundary element method for the exterior elastic scattering problem in two dimensions
NASA Astrophysics Data System (ADS)
Bao, Gang; Xu, Liwei; Yin, Tao
2017-11-01
This paper is concerned with a Galerkin boundary element method for solving the two-dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In the numerical implementation, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of the hyper-singular boundary integral operator. A new computational approach based on series expansions of Hankel functions is employed for the computation of the weakly-singular boundary integral operators during the reduction of the corresponding Galerkin equations to a discrete linear system. The effectiveness of the proposed numerical methods is demonstrated using several numerical examples.
A Hyperspectral Image Classification Method Using ISOMAP and RVM
NASA Astrophysics Data System (ADS)
Chang, H.; Wang, T.; Fang, H.; Su, Y.
2018-04-01
Classification is one of the most significant applications of hyperspectral image processing and, more broadly, of remote sensing. Although various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Further investigation is therefore needed on aspects such as dimension reduction, data mining, and the rational use of spatial information. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering the impropriety of Euclidean distance for spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, the relevance vector machine (RVM) was introduced to implement classification instead of the support vector machine (SVM), for reasons of simplicity, generalization, and sparsity; a probability result is thereby obtained rather than a less convincing binary result. Moreover, to take into account the spatial information of the hyperspectral image, we employed a spatial vector formed by the ratios of the different classes around each pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we carried out multiple experiments on standard hyperspectral images and compared the results with those of other methods. The results and the different evaluation indexes illustrate the effectiveness of our method.
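A minimal sketch of the core of this pipeline, assuming a matrix of pixel spectra: pairwise spectral-angle distances feed scikit-learn's Isomap with a precomputed metric, and, since no RVM ships with scikit-learn, logistic regression stands in as the probabilistic classifier.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression

def spectral_angle_matrix(X):
    """Pairwise spectral angle (in radians) between the spectra in X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return np.arccos(np.clip(Xn @ Xn.T, -1.0, 1.0))

rng = np.random.default_rng(2)
X = rng.random((200, 50))            # 200 pixels, 50 spectral bands
y = rng.integers(0, 3, 200)          # 3 land-cover classes

# ISOMAP on spectral-angle distances instead of Euclidean distances.
D = spectral_angle_matrix(X)
emb = Isomap(n_neighbors=12, n_components=10,
             metric="precomputed").fit_transform(D)

# Probabilistic classifier on the embedding; the class probabilities are
# what gets fused with the spatial class-ratio vector in the final step.
clf = LogisticRegression(max_iter=1000).fit(emb, y)
proba = clf.predict_proba(emb)
```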
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints for personal recognition systems are merged, and a local image descriptor for 2-D palmprint-based recognition systems, named bank of binarized statistical image features (B-BSIF), is introduced. The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (the results of applying BSIF descriptors of different sizes, each with length 12) are concatenated into one large feature vector. The 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied to reconstruct illumination-invariant 3-D palmprint images, and Gabor wavelets are defined and used to extract discriminative Gabor features from the SQI images. Since dimensionality reduction methods have shown their value in biometric systems, a principal component analysis (PCA)+linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals, and the proposed algorithm was compared with other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI+Gabor wavelets+PCA+LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
MgB2 wire diameter reduction by hot isostatic pressing—a route for enhanced critical current density
NASA Astrophysics Data System (ADS)
Morawski, A.; Cetner, T.; Gajda, D.; Zaleski, A. J.; Häßler, W.; Nenkov, K.; Rindfleisch, M. A.; Tomsic, M.; Przysłupski, P.
2018-07-01
The effect of wire diameter reduction on the critical current density of pristine MgB2 wire was studied. Wires were treated by a hot isostatic pressing method at 570 °C and at pressures of up to 1.1 GPa. It was found that the wire diameter reduction induces an increase of up to 70% in the mass density of the superconducting cores. This feature leads to increases in critical current, critical current density, and pinning force density. The magnitude and field dependence of the critical current density are related to both grain connectivity and structural defects, which act as effective pinning centers. High field transport properties were obtained without doping of the MgB2 phase. A critical current density j_c of 3500 A mm⁻² was reached at 4 K, 6 T for the best sample, a five-fold increase compared to MgB2 samples synthesized at ambient pressure.
Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Bugajski, D.; Sain, M.
1985-01-01
The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamical systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies is the effect of system input excitation, and the effect of discarding monomial terms, upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of the remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.
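A minimal sketch of the pruning idea under simplifying assumptions (static least-squares fit, greedy backward elimination, no input redesign between iterations): monomials whose removal barely changes the residual norm are discarded.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(X, degree):
    """All monomials in the columns of X up to the given degree."""
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def prune_by_residual_sensitivity(X, y, degree, tol=1e-3):
    """Greedily drop monomials whose absence barely moves the residual."""
    Phi = monomial_basis(X, degree)

    def resid(cols):
        theta, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        return np.linalg.norm(y - Phi[:, cols] @ theta)

    active = list(range(Phi.shape[1]))
    base = resid(active)
    while len(active) > 1:
        # Residual sensitivity: increase in residual when a term is removed.
        deltas = [(resid([c for c in active if c != j]) - base, j)
                  for j in active]
        delta, j = min(deltas)
        if delta > tol * max(base, 1e-12):
            break                      # every remaining monomial matters
        active.remove(j)
        base = resid(active)
    return active

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = X[:, 0] ** 2 + 2.0 * X[:, 1] + 0.01 * rng.normal(size=200)
print(prune_by_residual_sensitivity(X, y, degree=2))
```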
Whole-genome phylogeny of Escherichia coli/Shigella group by feature frequency profiles (FFPs)
Sims, Gregory E.; Kim, Sung-Hou
2011-01-01
A whole-genome phylogeny of the Escherichia coli/Shigella group was constructed by using the feature frequency profile (FFP) method. This alignment-free approach uses the frequencies of l-mer features of whole genomes to infer phylogenetic distances. We present two phylogenies that accentuate different aspects of E. coli/Shigella genomic evolution: (i) one based on the compositions of all possible features of length l = 24 (∼8.4 million features), which are likely to reveal the phenetic grouping and relationship among the organisms and (ii) the other based on the compositions of core features with low frequency and low variability (∼0.56 million features), which account for ∼69% of all commonly shared features among the 38 taxa examined and are likely to carry a genome-wide lineal evolutionary signal. Shigella appears as a single clade when all possible features are used without filtering of noncore features. However, results using core features show that Shigella consists of at least two distantly related subclades, implying that the subclades evolved into a single clade because of a high degree of convergence influenced by mobile genetic elements and niche adaptation. In both FFP trees, the basal group of the E. coli/Shigella phylogeny is the B2 phylogroup, which contains primarily uropathogenic strains, suggesting that the E. coli/Shigella ancestor was likely a facultative or opportunistic pathogen. The extant commensal strains diverged relatively late and appear to be the result of reductive evolution of genomes. We also identify clade-distinguishing features and their associated genomic regions within each phylogroup. Such features may provide useful information for understanding the evolution of the groups and for quick diagnostic identification of each phylogroup. PMID:21536867
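A minimal sketch of an l-mer feature frequency profile and a profile distance; a small l is used for readability, and the Jensen-Shannon divergence is an assumption here, as the abstract does not name the distance measure.

```python
import numpy as np

def ffp(seq, l=8):
    """Feature frequency profile: normalized counts of all l-mers in seq."""
    counts = {}
    for i in range(len(seq) - l + 1):
        kmer = seq[i:i + l]
        counts[kmer] = counts.get(kmer, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two sparse frequency profiles."""
    keys = sorted(set(p) | set(q))
    P = np.array([p.get(k, 0.0) for k in keys])
    Q = np.array([q.get(k, 0.0) for k in keys])
    M = 0.5 * (P + Q)

    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(P, M) + 0.5 * kl(Q, M)

# Pairwise profile distances between genomes would feed a standard
# distance-based tree builder (e.g., neighbour joining).
d = jensen_shannon(ffp("ACGTACGTGGCAACGT", l=4),
                   ffp("ACGTTCGTGGAAACGT", l=4))
print(d)
```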
Effective diagnosis of Alzheimer’s disease by means of large margin-based methodology
2012-01-01
Background Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. Methods A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with a LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Results Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS or PCA reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based metrics). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. Conclusions All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate more stable. Their generalization ability is another advance, since the experiments were performed on two image modalities (SPECT and PET). PMID:22849649
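A minimal sketch of the PLS-plus-SVM branch of the pipeline on synthetic data (the LMNN metric-learning step is omitted, as it has no standard scikit-learn implementation); dimensions and component counts are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: (subjects, NMSE features inside the activation mask); y: 0 = control, 1 = AD.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 2000))
y = rng.integers(0, 2, 90)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# PLS uses the class labels to orient the reduction (unlike PCA), which
# helps with the small-sample-size problem before the SVM stage.
pls = PLSRegression(n_components=10).fit(Xtr, ytr)
clf = SVC(kernel="linear").fit(pls.transform(Xtr), ytr)
print("accuracy:", clf.score(pls.transform(Xte), yte))
```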
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non-Local Weights) replaces the heuristic parametric constants of the GGF-BNLM method with values derived dynamically from the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, yield a significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
Using learning automata to determine proper subset size in high-dimensional spaces
NASA Astrophysics Data System (ADS)
Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz
2017-03-01
In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective functionality is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and efficiency. First, using an existing weighting function, the feature list is sorted and selected subsets of the list of different sizes are considered. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance in attaining the identified goal.
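A minimal sketch of the automaton component under stated assumptions: a linear reward-inaction update over candidate subset sizes, with evaluate(k) standing in for the fitness that combines cross-validated accuracy and a subset-size penalty.

```python
import numpy as np

def fsla_sketch(evaluate, sizes, n_iter=200, lr=0.1, seed=0):
    """Linear reward-inaction automaton over candidate subset sizes.
    evaluate(k) must return a fitness in [0, 1] combining the learner's
    cross-validated accuracy and a penalty on the subset size k."""
    rng = np.random.default_rng(seed)
    p = np.full(len(sizes), 1.0 / len(sizes))     # action probabilities
    for _ in range(n_iter):
        i = rng.choice(len(sizes), p=p)           # pick a subset size
        beta = evaluate(sizes[i])                 # environment response
        p += lr * beta * (np.eye(len(sizes))[i] - p)  # reward the action
    return sizes[int(np.argmax(p))]

# Stand-in fitness: a noisy accuracy proxy that decays with subset size;
# a real run would train the classifier on the top-k weighted features.
rng = np.random.default_rng(1)
best = fsla_sketch(lambda k: 0.9 - 0.0004 * k + 0.02 * rng.random(),
                   sizes=[50, 100, 200, 400, 800])
print(best)
```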
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Jijo; Yang, Cungeng; Wu, Hui
Purpose: To investigate early tumor and normal tissue responses during the course of radiation therapy (RT) for lung cancer using quantitative analysis of daily computed tomography (CT) scans. Methods and Materials: Daily diagnostic-quality CT scans acquired using CT-on-rails during CT-guided RT for 20 lung cancer patients were quantitatively analyzed. On each daily CT set, the contours of the gross tumor volume (GTV) and lungs were generated and the radiation dose delivered was reconstructed. The changes in CT image intensity (Hounsfield unit [HU]) features in the GTV and the multiple normal lung tissue shells around the GTV were extracted from the daily CT scans. The associations between the changes in the mean HUs, GTV, accumulated dose during RT delivery, and patient survival rate were analyzed. Results: During the RT course, radiation can induce substantial changes in the HU histogram features on the daily CT scans, with reductions in the GTV mean HUs (dH) observed in the range of 11 to 48 HU (median 30). The dH is statistically related to the accumulated GTV dose (R² > 0.99) and correlates weakly with the change in GTV (R² = 0.3481). Statistically significant increases in patient survival rates (P=.038) were observed for patients with a higher dH in the GTV. In the normal lung, the 4 regions proximal to the GTV showed statistically significant (P<.001) HU reductions from the first to last fraction. Conclusion: Quantitative analysis of the daily CT scans indicated that the mean HUs in lung tumor and surrounding normal tissue were reduced during RT delivery. This reduction was observed in the early phase of the treatment, is patient specific, and correlated with the delivered dose. A larger HU reduction in the GTV correlated significantly with greater patient survival. The changes in daily CT features, such as the mean HU, can be used for early assessment of the radiation response during RT delivery for lung cancer.
Breast reduction (mammoplasty) - slideshow
medlineplus.gov/ency/presentations/100189.htm: Breast reduction (mammoplasty) series, Indications. Review provided by Lickstein, MD, FACS, specializing in cosmetic and reconstructive plastic surgery, Palm Beach Gardens, FL.
Principal polynomial analysis.
Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus
2014-11-01
This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding the identified features in the input domain, where the data have physical meaning. Moreover, it allows evaluating the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
NASA Technical Reports Server (NTRS)
Knott, P. R.; Janardan, B. A.; Majjigi, R. K.; Shutiani, P. K.; Vogt, P. G.
1981-01-01
Six coannular plug nozzle configurations having inverted velocity and temperature profiles, and a baseline convergent conical nozzle, were tested for simulated-flight acoustic evaluation in General Electric's Anechoic Free-Jet Acoustic Facility. The nozzles were tested over a range of conditions typical of a Variable Cycle Engine for application to advanced high speed aircraft. The outer stream radius ratio for most of the configurations was 0.853, and the inner-stream-to-outer-stream area ratio was in the range of 0.54. Other variables investigated were the influence of bypass struts, a simple noncontoured convergent-divergent outer stream nozzle for forward quadrant shock noise control, and the effects of varying outer stream radius and inner-stream-to-outer-stream velocity ratios on the flight noise signatures of the nozzles. It was found that in simulated flight, the high-radius-ratio coannular plug nozzles maintain the jet noise and shock noise reduction features previously observed in static testing. The presence of nozzle bypass struts did not significantly affect the acoustic noise reduction features of a General Electric-type nozzle design. A unique coannular plug nozzle flight acoustic spectral prediction method was identified and found to predict the measured results quite well. Special laser velocimeter and acoustic measurements were performed which have given new insight into the jet and shock noise reduction mechanisms of coannular plug nozzles, with regard to identifying further beneficial research efforts.
Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate
Padmanaban, Subash; Baker, Justin; Greger, Bradley
2018-01-01
Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded by problems arising when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques (Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization) on SVM classification performance for a dexterous decoding task. Approach: A nonhuman primate (NHP) was trained to perform small coordinated movements, similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (AP) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36%, respectively. The reduction in decoding accuracy between using 100% of the features and 10% of the features based on MIM was 45.56% (from 93.7 to 51.09%) and 4.75% (from 95.32 to 90.79%) for multi-unit and single-unit features, respectively. MIM had the best performance among the feature selection methods compared. Significance: These results suggest that improved decoding performance can be achieved by using optimally selected features. The results based on clinically relevant performance metrics also suggest that the decoding algorithm can be made robust by using optimal features and feature selection algorithms. We believe that even a few percent increase in performance is important and improves the decoding accuracy of the machine learning algorithm, potentially increasing the ease of use of a brain machine interface. PMID:29467602
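A minimal sketch of the MIM-plus-SVM setup on synthetic firing-rate data, keeping the top 10% of units by mutual information with the class label; counts and kernel choice are illustrative.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: (trials, units) AP firing rates; y: finger-movement labels.
rng = np.random.default_rng(1)
X = rng.poisson(5.0, size=(300, 96)).astype(float)
y = rng.integers(0, 5, 300)

# Keep only the top 10% of units ranked by mutual information with the
# movement label, then classify with an SVM.
model = make_pipeline(SelectKBest(mutual_info_classif, k=X.shape[1] // 10),
                      SVC(kernel="linear"))
print(cross_val_score(model, X, y, cv=5).mean())
```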
Growing Up with Type 1 Narcolepsy: Its Anthropometric and Endocrine Features
Ponziani, Virginia; Gennari, Monia; Pizza, Fabio; Balsamo, Antonio; Bernardi, Filippo; Plazzi, Giuseppe
2016-01-01
Study Objectives: To evaluate the effect of type 1 narcolepsy (NT1) on anthropometric and endocrine features in childhood/adolescence, focusing on patterns and correlates of weight, pubertal development, and growth in treated and untreated patients. Methods: We collected anthropometric (height, weight, body mass index (BMI) z-scores), pubertal, metabolic, and endocrine data from 72 NT1 patients at diagnosis and all available premorbid anthropometric parameters of patients from their pediatric files (n = 30). New measurements at 1-y reassessment in patients undergoing different treatments were compared with baseline data. Results: We detected a high prevalence of overweight (29.2%), obesity (25%), metabolic syndrome (18.8%), and precocious puberty (16.1%), but no signs of linear growth alterations at diagnosis. According to anthropometric records, weight gain started soon after NT1 onset. At 1-y follow-up reassessment, sodium oxybate treatment was associated with a significant BMI z-score reduction (−1.29 ± 0.30, p < 0.0005) after adjusting for baseline age, sex, sleepiness, and BMI. Conclusions: NT1 onset in children/adolescents is associated with rapid weight gain up to overweight/obesity and precocious puberty without affecting growth. In our study, sodium oxybate treatment resulted in a significant weight reduction in NT1 overweight/obese patients at 1-y follow-up. Citation: Ponziani V, Gennari M, Pizza F, Balsamo A, Bernardi F, Plazzi G. Growing up with type 1 narcolepsy: its anthropometric and endocrine features. J Clin Sleep Med 2016;12(12):1649–1657. PMID:27707443
Lim, Pooi Khoon; Ng, Siew-Cheok; Jassim, Wissam A.; Redmond, Stephen J.; Zilany, Mohammad; Avolio, Alberto; Lim, Einly; Tan, Maw Pin; Lovell, Nigel H.
2015-01-01
We present a novel approach to improve the estimation of systolic (SBP) and diastolic blood pressure (DBP) from oscillometric waveform data using variable characteristic ratios between SBP and DBP with mean arterial pressure (MAP). This was verified in 25 healthy subjects, aged 28 ± 5 years. The multiple linear regression (MLR) and support vector regression (SVR) models were used to examine the relationship between the SBP and the DBP ratio with ten features extracted from the oscillometric waveform envelope (OWE). An automatic algorithm based on relative changes in the cuff pressure and neighbouring oscillometric pulses was proposed to remove outlier points caused by movement artifacts. Substantial reductions in the mean and standard deviation of the blood pressure estimation errors were obtained upon artifact removal. Using the sequential forward floating selection (SFFS) approach, we were able to achieve a significant reduction in the mean and standard deviation of differences between the estimated SBP values and the reference scoring (MLR: mean ± SD = −0.3 ± 5.8 mmHg; SVR: −0.6 ± 5.4 mmHg) with only two features, i.e., Ratio2 and Area3, as compared to the conventional maximum amplitude algorithm (MAA) method (mean ± SD = −1.6 ± 8.6 mmHg). Comparing the performance of the MLR and SVR models, our results showed that the MLR model was able to achieve performance comparable to that of the SVR model despite its simplicity. PMID:26087370
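For context, a minimal sketch of the fixed-ratio maximum amplitude algorithm that the paper improves upon; the characteristic ratios below are illustrative constants, whereas the paper predicts them per subject from OWE features with MLR or SVR.

```python
import numpy as np

def maa_blood_pressure(cuff_pressure, envelope, rs=0.55, rd=0.75):
    """Maximum amplitude algorithm with fixed characteristic ratios.
    Assumes cuff_pressure is monotonically deflating. MAP is read at the
    envelope peak; SBP/DBP where the envelope crosses rs/rd times the
    peak on the high- and low-pressure sides (ratios are illustrative)."""
    i_map = int(np.argmax(envelope))
    peak = envelope[i_map]
    above = np.where(envelope[:i_map] <= rs * peak)[0]          # high-pressure side
    below = i_map + np.where(envelope[i_map:] <= rd * peak)[0]  # low-pressure side
    sbp = cuff_pressure[above[-1]] if above.size else np.nan
    dbp = cuff_pressure[below[0]] if below.size else np.nan
    return sbp, cuff_pressure[i_map], dbp

cuff = np.linspace(180.0, 40.0, 300)                   # deflation ramp (mmHg)
envelope = np.exp(-0.5 * ((cuff - 95.0) / 20.0) ** 2)  # synthetic OWE
print(maa_blood_pressure(cuff, envelope))              # (SBP, MAP, DBP)
```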
Ensemble based on static classifier selection for automated diagnosis of Mild Cognitive Impairment.
Nanni, Loris; Lumini, Alessandra; Zaffonato, Nicolò
2018-05-15
Alzheimer's disease (AD) is the most common cause of neurodegenerative dementia in the elderly population. Scientific research is very active in the challenge of designing automated approaches to achieve an early and certain diagnosis. Recently an international competition among AD predictors has been organized: "A Machine learning neuroimaging challenge for automated diagnosis of Mild Cognitive Impairment" (MLNeCh). This competition is based on pre-processed sets of T1-weighted Magnetic Resonance Images (MRI) to be classified into four categories: stable AD, individuals with MCI who converted to AD, individuals with MCI who did not convert to AD, and healthy controls. In this work, we propose a method to perform early diagnosis of AD, which is evaluated on the MLNeCh dataset. Since the automatic classification of AD is based on feature vectors of high dimensionality, different techniques of feature selection/reduction are compared in order to avoid the curse-of-dimensionality problem; the classification method is then obtained as the combination of Support Vector Machines trained on different clusters of data extracted from the whole training set. The multi-classifier approach proposed in this work outperforms all the stand-alone methods tested in our experiments. The final ensemble is based on a set of classifiers, each trained on a different cluster of the training data. The proposed ensemble has the great advantage of performing well using a very reduced version of the data (the reduction factor is more than 90%). The MATLAB code for the ensemble of classifiers will be made publicly available to other researchers for future comparisons. Copyright © 2017 Elsevier B.V. All rights reserved.
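A minimal sketch of the cluster-wise ensemble idea on synthetic data: one SVM per cluster of the training set, fused by majority vote; the clustering algorithm, cluster count, and kernel are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(240, 40))      # reduced feature vectors per subject
y = rng.integers(0, 4, 240)         # the four diagnostic categories

# One SVM per cluster of the training data, fused by majority voting.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
members = []
for c in range(5):
    m = km.labels_ == c
    if np.unique(y[m]).size > 1:    # a member needs at least two classes
        members.append(SVC(kernel="rbf").fit(X[m], y[m]))

votes = np.stack([clf.predict(X) for clf in members])
ensemble_pred = stats.mode(votes, axis=0, keepdims=False).mode
```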
Using Wind Driven Tumbleweed Rovers to Explore Martian Gully Features
NASA Technical Reports Server (NTRS)
Antol, Jeffrey; Woodard, Stanley E.; Hajos, Gregory A.; Heldmann, Jennifer L.; Taylor, Bryant D.
2004-01-01
Gully features have been observed on the slopes of numerous Martian crater walls, valleys, pits, and graben. Several mechanisms for gully formation have been proposed, including: liquid water aquifers (shallow and deep), melting ground ice, snow melt, CO2 aquifers, and dry debris flow. Remote sensing observations indicate that the most likely erosional agent is liquid water. Debate concerns the source of this water. Observations favor a liquid water aquifer as the primary candidate. The current strategy in the search for life on Mars is to "follow the water." A new vehicle known as a Tumbleweed rover may be able to conduct in-situ investigations in the gullies, which are currently inaccessible by conventional rovers. Deriving mobility through use of the surface winds on Mars, Tumbleweed rovers would be lightweight and relatively inexpensive, thus allowing multiple rovers to be deployed in a single mission to survey areas for future exploration. NASA Langley Research Center (LaRC) is developing deployable structure Tumbleweed concepts. An extremely lightweight measurement acquisition system and sensors are proposed for the Tumbleweed rover that greatly increase the number of measurements performed while adding negligible mass. The key to this method is the use of magnetic field response sensors designed as passive inductor-capacitor circuits that produce magnetic field responses whose attributes correspond to the values of the physical properties that the sensors measure. The sensors do not need a physical connection to a power source or to data acquisition equipment, resulting in additional weight reduction. Many of the sensors and interrogating antennae can be directly placed on the Tumbleweed using film deposition methods such as photolithography, thus providing further weight reduction. Concepts are presented herein for methods to measure subsurface water, subsurface metals, planetary winds and environmental gases.
Heterobimetallic Pd–K carbene complexes via one-electron reductions of palladium radical carbenes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Peng; Hoffbauer, Melissa R.; Vyushkova, Mariya
2016-03-24
An unprecedented sequential substitution/reduction synthetic strategy applied to Pd radical carbenes afforded heterobimetallic Pd–K carbene complexes featuring novel Pd–C(carbene)–K structural moieties.
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion/reactivation, the last of which effects dimensionality reduction. The feature extractor is supervised and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with features derived by the feature extractor yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.
Partial Least Squares for Discrimination in fMRI Data
Andersen, Anders H.; Rayens, William S.; Liu, Yushu; Smith, Charles D.
2011-01-01
Multivariate methods for discrimination were used in the comparison of brain activation patterns between groups of cognitively normal women who are at either high or low Alzheimer's disease risk based on family history and apolipoprotein-E4 status. Linear discriminant analysis (LDA) was preceded by dimension reduction using either principal component analysis (PCA), partial least squares (PLS), or a new oriented partial least squares (OrPLS) method. The aim was to identify a spatial pattern of functionally connected brain regions that was differentially expressed by the risk groups and yielded optimal classification accuracy. Multivariate dimension reduction is required prior to LDA when the data contain more feature variables than there are observations on individual subjects. Whereas PCA has been commonly used to identify covariance patterns in neuroimaging data, this approach only identifies gross variability and is not capable of distinguishing among-group from within-group variability. PLS and OrPLS provide a more focused dimension reduction by incorporating information on class structure and therefore lead to more parsimonious models for discrimination. Performance was evaluated in terms of the cross-validated misclassification rates. The results support the potential of using fMRI as an imaging biomarker or diagnostic tool to discriminate individuals with disease or at high risk. PMID:22227352
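A minimal sketch contrasting the two reductions ahead of LDA on synthetic data; unlike PCA, PLS uses the group labels when building the low-dimensional scores.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 5000))     # subjects x voxel features
y = rng.integers(0, 2, 60)          # 0 = low risk, 1 = high risk
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.33, random_state=0)

pca = PCA(n_components=5).fit(Xtr)                 # unsupervised reduction
pls = PLSRegression(n_components=5).fit(Xtr, ytr)  # class-informed reduction
for name, red in (("PCA", pca), ("PLS", pls)):
    lda = LinearDiscriminantAnalysis().fit(red.transform(Xtr), ytr)
    print(name, "misclassification:", 1.0 - lda.score(red.transform(Xte), yte))
```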
Aydemir, Yusuf; Aydemir, Özlem; Pekcan, Sevgi; Özdemir, Mehmet
2017-02-01
Conventional methods for the aetiological diagnosis of community-acquired pneumonia (CAP) are often insufficient owing to low sensitivity and the long wait for the results of culture and, particularly, serology; these methods often establish a diagnosis in only half of cases. The aim was to evaluate the most common bacterial and viral agents in CAP using a fast-response PCR method and to investigate the relationship between clinical/laboratory features and aetiology, thereby contributing to empirical antibiotic selection and a reduction in treatment failure. In children aged 4-15 years consecutively admitted with a diagnosis of CAP, the 10 most commonly detected bacterial and 12 most commonly detected viral agents were investigated in induced sputum using bacterial culture and multiplex PCR methods. Clinical and laboratory features were compared between bacterial and viral pneumonia. In 78 patients, at least one virus was detected in 38 (48.7%) and at least one bacterium in 32 (41%); both bacteria and viruses were detected in 16 (20.5%) patients. Overall, the agent detection rate was 69.2%. The most common viruses were respiratory syncytial virus and influenza, and the most frequently detected bacteria were S. pneumoniae and H. influenzae. PCR was superior to culture for bacterial isolation (41% vs 13%, respectively). Fever, wheezing and radiological features were not helpful in differentiating between bacterial and viral CAP. White blood cell count, CRP and ESR values were significantly higher in the bacterial/mixed aetiology group than in the viral aetiology group. In CAP, multiplex PCR is highly reliable, superior in detecting multiple pathogens, and rapidly identifies aetiological agents. Clinical features are of little value for differentiation between bacterial and viral infections. The use of PCR methods allows physicians to provide more appropriate antimicrobial therapy, resulting in a better response to treatment, and routine use may be possible if costs can be reduced.
NASA Astrophysics Data System (ADS)
Yosipof, Abraham; Guedes, Rita C.; García-Sosa, Alfonso T.
2018-05-01
Data mining approaches can uncover underlying patterns in chemical and pharmacological property space that are decisive for drug discovery and development. Two of the most common approaches are visualization and machine learning methods. Visualization methods use dimensionality reduction techniques to reduce multi-dimensional data into 2D or 3D representations with a minimal loss of information. Machine learning attempts to find correlations between specific activities or classifications for a set of compounds and their features by means of recurring mathematical models. Both approaches take advantage of the deep relationships that can exist between the features of compounds, and usefully provide classification of compounds based on such features. Drug-likeness has been studied from several viewpoints, but here we provide the first implementation in chemoinformatics of the t-Distributed Stochastic Neighbor Embedding (t-SNE) method for the visualization and representation of chemical space, together with the use of different machine learning methods, separately and together, to form a new ensemble learning method called AL Boost. The models obtained from AL Boost synergistically combine decision tree, random forest (RF), support vector machine (SVM), artificial neural network (ANN), k nearest neighbors (kNN), and logistic regression models. In this work, we show that together they form a predictive model that not only improves predictive power but also decreases bias. This resulted in a corrected classification rate of over 0.81, as well as higher sensitivity and specificity rates for the models. In addition, separation and good models were also achieved for disease categories such as antineoplastic compounds and nervous system diseases, among others. Such models can be used to guide decisions on the feature landscape of compounds and their likeness to drugs or to other characteristics, such as the specific or multiple disease category(ies) or organ(s) of action of a molecule.
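A minimal sketch of an ensemble over the six model families named above, using scikit-learn's soft voting as a stand-in; the actual AL Boost combination rule may differ.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data for compound descriptors labelled drug / non-drug.
X, y = make_classification(n_samples=400, n_features=30, random_state=0)

ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=8)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC(probability=True)),
                ("ann", MLPClassifier(max_iter=1000)),
                ("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")                  # average the predicted probabilities
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:5]))
```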
Intramedullary osteosynthesis versus plate osteosynthesis in subtrochanteric fractures.
Burnei, C; Popescu, Gh; Barbu, D; Capraru, F
2011-11-14
Due to an ever-aging population and a growing prevalence of osteoporosis and motor vehicle accidents, the number of subtrochanteric fractures is increasing worldwide. The choice of the appropriate implant continues to be critical for fixation of unstable hip fractures. The subtrochanteric region has certain anatomical and biomechanical features that can make fractures in this region difficult to treat. The preferred type of device is a matter of debate. Increased understanding of the biomechanical characteristics of the hip and improvement of the implant materials have reduced the incidence of complications. Surgeons choose between the two methods according to Seinsheimer's classification and also according to their personal preferences. As a general principle, open reduction and internal fixation is performed in stable fractures, and closed reduction and internal fixation in unstable fractures. The advantages of intramedullary nailing are a small skin incision, lower operating times, preservation of the fracture hematoma and the possibility of early weight bearing. The disadvantages are a difficult closed reduction due to important muscular forces, although the nail can be used as a reduction instrument, and a higher implant cost. In open reduction and internal fixation techniques, the advantage is the anatomical reduction, which, in our opinion, is not necessary. The disadvantages are higher operating time, demanding surgery, large devascularization, higher infection rates, late weight bearing, medial instability, refracture after plate removal and an unaesthetic approach.
NASA Astrophysics Data System (ADS)
Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi
2018-01-01
On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of matching is one of the main factors restricting the popularization of the method. To make the whole matching process more efficient, we propose a preprocessing method applied before matching: (1) we first construct a random k-d forest from the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a relatedness value, (2) we then construct a connected weighted graph based on the relatedness values, and (3) we finally obtain a planned sequence for adding images according to the principle of the minimum spanning tree, as shown in the sketch below. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and to ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points and only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images of complex scenes.
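A minimal sketch of step (3) on a synthetic relatedness matrix: inverting similarities into edge weights and extracting the minimum spanning tree yields the reduced set of image pairs to match.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# relatedness[i, j]: similarity of images i and j, as produced by the
# k-d-forest feature matching combined with pHash (synthetic here).
rng = np.random.default_rng(6)
n = 8
relatedness = rng.random((n, n))
relatedness = (relatedness + relatedness.T) / 2.0
np.fill_diagonal(relatedness, 0.0)

# Strong similarity = cheap edge, so the MST keeps the strongest links
# and yields a planned, sparse set of image pairs for incremental SFM.
weights = np.where(relatedness > 0, 1.0 / relatedness, 0.0)
mst = minimum_spanning_tree(weights).toarray()
pairs = np.transpose(np.nonzero(mst))
print(pairs)                        # the image pairs to actually match
```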
Canty, Russell; Gonzalez, Edwin; MacDonald, Caleb; Osswald, Sebastian; Zea, Hugo; Luhrs, Claudia C.
2015-01-01
Graphene sheets doped with nitrogen were produced by the reduction-expansion (RES) method utilizing graphite oxide (GO) and urea as precursor materials. The simultaneous graphene generation and nitrogen insertion reactions are based on the fact that urea decomposes upon heating to release reducing gases. The volatile byproducts perform two primary functions: (i) promoting the reduction of the GO and (ii) providing the nitrogen to be inserted in situ as the graphene structure is created. Samples with diverse urea/GO mass ratios were treated at 800 °C in an inert atmosphere to generate graphene with diverse microstructural characteristics and levels of nitrogen doping. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) were used to study the microstructural features of the products. The effects of doping on the samples' structure and surface area were studied by X-ray diffraction (XRD), Raman spectroscopy, and Brunauer-Emmett-Teller (BET) analysis. The GO and urea decomposition-reduction process, as well as the stability of the nitrogen-doped graphene, were studied by thermogravimetric analysis (TGA) coupled with mass spectrometry (MS) analysis of the evolved gases. Results show that the proposed method offers a high level of control over the amount of nitrogen inserted in the graphene and may alternatively be used to control its surface area. To demonstrate the practical relevance of these findings, as-produced samples were used as electrodes in supercapacitor and battery devices and compared with conventional, thermally exfoliated graphene. PMID:28793618
Torres-Montúfar, Alejandro; Borsch, Thomas; Ochoterena, Helga
2018-05-01
The conceptualization and coding of characters is a difficult issue in phylogenetic systematics, no matter which inference method is used when reconstructing phylogenetic trees or whether the characters are simply mapped onto a specific tree. Complex characters are groups of features that can be divided into simpler hierarchical characters (reductive coding), although the implied hierarchical relational information may change depending on the type of coding (composite vs. reductive). Up to now, there has been no common agreement on whether to code characters as complex or simple. Phylogeneticists have discussed which coding method is best but have not incorporated the heuristic process of reciprocal illumination to evaluate the coding. Composite coding allows testing whether 1) several characters were linked, resulting in a structure described as a complex character or trait, or 2) independently evolving characters resulted in a configuration incorrectly interpreted as a complex character. We propose that complex characters or character states should be decomposed iteratively into simpler characters when the original homology hypothesis is not corroborated by a phylogenetic analysis and the character or character state is retrieved as homoplastic. We tested this approach using the case of fruit types within subfamily Cinchonoideae (Rubiaceae). The iterative reductive coding of characters associated with drupes allowed us to disentangle fruit evolution within Cinchonoideae. Our results show that drupes and berries are not homologous. As a consequence, a more precise ontology for the Cinchonoideae drupes is required.
NASA Astrophysics Data System (ADS)
Dong, Xue; Yang, Xiaofeng; Rosenfield, Jonathan; Elder, Eric; Dhabaan, Anees
2017-03-01
X-ray computed tomography (CT) has been widely used in radiation therapy treatment planning in recent years. However, metal implants such as dental fillings and hip prostheses can cause severe bright and dark streaking artifacts in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. In this work, a metal artifact reduction method is proposed based on the intrinsic anatomical similarity between neighboring CT slices. Neighboring CT slices from the same patient exhibit similar anatomical features. Exploiting this anatomical similarity, a gamma map is calculated as a weighted summation of the relative HU error and the distance error for each pixel in an artifact-corrupted CT image relative to a neighboring, artifact-free image. The minimum value in the gamma map for each pixel is used to identify an appropriate pixel from the artifact-free CT slice to replace the corresponding artifact-corrupted pixel. With the proposed method, the mean CT HU error was reduced from 360 HU and 460 HU to 24 HU and 34 HU on head and pelvis CT images, respectively. Dose calculation accuracy also improved, as the dose difference was reduced from greater than 20% to less than 4%. Using 3%/3mm criteria, the gamma analysis failure rate was reduced from 23.25% to 0.02%. An image-based metal artifact reduction method is proposed that replaces corrupted image pixels with pixels from neighboring CT slices free of metal artifacts. This method is shown to be capable of suppressing streaking artifacts, thereby improving HU and dose calculation accuracy.
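A minimal sketch of the gamma-map replacement rule as described, with illustrative tolerances and search window; in practice the replacement would be restricted to pixels inside a detected artifact region.

```python
import numpy as np

def gamma_map_correct(corrupt, reference, tol_hu=50.0, tol_px=5.0, win=5):
    """Replace each pixel of the corrupted slice with the reference-slice
    pixel minimizing gamma = sqrt((dHU/tol_hu)^2 + (dist/tol_px)^2);
    tolerances and window size are illustrative, and a real pipeline
    would restrict this to a detected artifact region."""
    out = corrupt.copy()
    h, w = corrupt.shape
    ys, xs = np.mgrid[-win:win + 1, -win:win + 1]
    dist_term = (np.sqrt(ys ** 2 + xs ** 2) / tol_px) ** 2
    for i in range(win, h - win):
        for j in range(win, w - win):
            patch = reference[i - win:i + win + 1, j - win:j + win + 1]
            hu_term = ((patch - corrupt[i, j]) / tol_hu) ** 2
            gamma = np.sqrt(hu_term + dist_term)
            k = np.unravel_index(np.argmin(gamma), gamma.shape)
            out[i, j] = patch[k]
    return out

rng = np.random.default_rng(7)
slice_ok = rng.normal(40.0, 30.0, (64, 64))              # artifact-free neighbour
slice_bad = slice_ok + rng.normal(0.0, 120.0, (64, 64))  # streak-corrupted slice
corrected = gamma_map_correct(slice_bad, slice_ok)
```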
Options for Robust Airfoil Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Li, Wu
2002-01-01
A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.
1996-01-01
... architecture for a family of systems. The Feature-Oriented Domain Analysis (FODA) method looks primarily at "user-visible" aspects of a domain.
Development of advanced avionics systems applicable to terminal-configured vehicles
NASA Technical Reports Server (NTRS)
Heimbold, R. L.; Lee, H. P.; Leffler, M. F.
1980-01-01
A technique to add the time constraint to the automatic descent feature of the existing L-1011 aircraft Flight Management System (FMS) was developed. Software modifications were incorporated in the FMS computer program and the results checked by lab simulation and on a series of eleven test flights. An arrival time dispersion (2 sigma) of 19 seconds was achieved. The 4-D descent technique can be integrated with the time-based metering method of air traffic control. Substantial reductions in delays at today's busy airports should result.
NASA Astrophysics Data System (ADS)
Xiong, Zhi-bo; Liu, Jing; Zhou, Fei; Liu, Dun-yu; Lu, Wei; Jin, Jing; Ding, Shi-fa
2017-06-01
A series of magnetic Fe0.85Ce0.10W0.05Oz catalysts were synthesized by three different methods (co-precipitation (Fe0.85Ce0.10W0.05Oz-CP), a hydrothermal-treatment-assisted citric acid sol-gel method (Fe0.85Ce0.10W0.05Oz-HT), and a microwave-irradiation-assisted citric acid sol-gel method (Fe0.85Ce0.10W0.05Oz-MW)), and their catalytic activity was evaluated for the selective catalytic reduction of NO with NH3. The catalysts were characterized by XRD, N2 adsorption-desorption, XPS, H2-TPR and NH3-TPD. Among the tested catalysts, Fe0.85Ce0.10W0.05Oz-MW shows the highest NOx conversion per gram per unit time, with a NOx conversion of 60.8% at 350 °C under a high gas hourly space velocity of 1,200,000 ml/(g h). Unlike the Fe0.85Ce0.10W0.05Oz-CP catalyst, the Fe0.85Ce0.10W0.05Oz catalysts prepared through the hydrothermal-treatment- or microwave-irradiation-assisted citric acid sol-gel methods contain a large amount of iron oxide crystallites (γ-Fe2O3 and α-Fe2O3) scattered throughout, together with a higher iron atomic concentration on their surfaces. Fe0.85Ce0.10W0.05Oz-MW also shows a higher surface adsorbed-oxygen concentration and better dispersion than the Fe0.85Ce0.10W0.05Oz-HT catalyst. These features account for the high catalytic performance of NO reduction with NH3 over the Fe0.85Ce0.10W0.05Oz-MW catalyst.
On-the-fly reduction of open loops
NASA Astrophysics Data System (ADS)
Buccioni, Federico; Pozzorini, Stefano; Zoller, Max
2018-01-01
Building on the open-loop algorithm we introduce a new method for the automated construction of one-loop amplitudes and their reduction to scalar integrals. The key idea is that the factorisation of one-loop integrands in a product of loop segments makes it possible to perform various operations on-the-fly while constructing the integrand. Reducing the integrand on-the-fly, after each segment multiplication, the construction of loop diagrams and their reduction are unified in a single numerical recursion. In this way we entirely avoid objects with high tensor rank, thereby reducing the complexity of the calculations in a drastic way. Thanks to the on-the-fly approach, which is applied also to helicity summation and for the merging of different diagrams, the speed of the original open-loop algorithm can be further augmented in a very significant way. Moreover, addressing spurious singularities of the employed reduction identities by means of simple expansions in rank-two Gram determinants, we achieve a remarkably high level of numerical stability. These features of the new algorithm, which will be made publicly available in a forthcoming release of the OpenLoops program, are particularly attractive for NLO multi-leg and NNLO real-virtual calculations.
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser scanning technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale, yet the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perceptual metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments were undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.
Interpreting expressive performance through listener judgments of musical tension
Farbood, Morwaread M.; Upham, Finn
2013-01-01
This study examines listener judgments of musical tension for a recording of a Schubert song and its harmonic reduction. Continuous tension ratings collected in an experiment and quantitative descriptions of the piece's musical features, including dynamics, pitch height, harmony, onset frequency, and tempo, were analyzed from two different angles. In the first part of the analysis, the different processing timescales for the disparate features contributing to tension were explored through the optimization of a predictive tension model. The results revealed that the optimal time window for harmony was considerably longer (~22 s) than for any other feature (~1–4 s). In the second part of the analysis, tension ratings for the individual verses of the song and its harmonic reduction were examined and compared. The results showed that although the average tension ratings between verses were very similar, differences in how and when participants reported tension changes highlighted performance decisions made in the interpretation of the score, ambiguity in tension implications of the music, and the potential importance of contrast between verses and phrases. Analysis of the tension ratings for the harmonic reduction also provided a new perspective for better understanding how complex musical features inform listener tension judgments. PMID:24416024
2013-01-01
Background Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although large amounts of PPI data for different species have been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks; furthermore, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. Results We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of the results on the initial random weights and improves the prediction performance. Conclusions When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art technique, the Support Vector Machine (SVM). Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation, and that PCA-EELM is also faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs, with excellent performance and reduced computation time. PMID:23815620
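The following is a minimal sketch of the PCA-EELM idea under assumed data shapes: PCA reduces the encoded feature vectors, several extreme learning machines with different random hidden weights are trained, and their predictions are aggregated by majority voting. The tiny ELM implementation and all sizes are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-ins for encoded protein-pair feature vectors (shapes are assumptions).
rng = np.random.default_rng(0)
X = rng.random((500, 200))
y = rng.integers(0, 2, 500)

# Dimension reduction with PCA, as in the paper.
X_red = PCA(n_components=50).fit_transform(X)

class ELM:
    """Minimal extreme learning machine: random hidden layer + least-squares output."""
    def __init__(self, n_hidden=100, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_hidden = n_hidden
    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y   # least-squares output weights
        return self
    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)

# Ensemble of ELMs with different random weights, aggregated by majority vote.
votes = np.array([ELM(seed=s).fit(X_red, y).predict(X_red) for s in range(9)])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("training accuracy:", (y_pred == y).mean())
```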
Molinari, Filippo; Raghavendra, U; Gudigar, Anjan; Meiburger, Kristen M; Rajendra Acharya, U
2018-02-23
Atherosclerosis is a type of cardiovascular disease that may cause stroke. It is due to the deposition of fatty plaque in the artery walls, which gradually reduces their elasticity and hence restricts the blood flow to the heart. An early prediction of carotid plaque deposition is therefore important, as it can save lives. This paper proposes a novel data mining framework for the assessment of atherosclerosis in its early stage using ultrasound images. In this work, we used 1353 symptomatic and 420 asymptomatic carotid plaque ultrasound images. Our proposed method classifies the symptomatic and asymptomatic carotid plaques using bidimensional empirical mode decomposition (BEMD) and entropy features. The unbalanced data samples are compensated using adaptive synthetic sampling (ADASYN), and the developed method yielded a promising accuracy of 91.43%, sensitivity of 97.26%, and specificity of 83.22% using fourteen features. Hence, the proposed method can be used as an assisting tool during the regular screening of carotid arteries in hospitals. Graphical abstract: Outline of our data mining framework for the characterization of symptomatic and asymptomatic carotid plaques.
A novel approach for dimension reduction of microarray.
Aziz, Rabia; Verma, C K; Srivastava, Namita
2017-12-01
This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes for a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, with a wrapper approach, to optimize the reduced feature vectors. The technique is assessed by evaluating the performance of ICA+ABC on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the recently published results of the Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further check how ICA+ABC performs as a feature selector with the NB classifier, it was also compared with the combination of ICA with popular filter techniques and with other similar bio-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that markedly improve the classification accuracy of the NB classifier compared with previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
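A minimal sketch of the extraction-plus-wrapper idea, assuming toy data: ICA compresses the gene expression matrix into a few components, and a candidate subset of those components is scored with a Naïve Bayes classifier under cross-validation. The ABC search itself is replaced here by a single random candidate mask for brevity; all sizes are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Toy stand-in for a gene expression matrix (dimensions are assumptions).
rng = np.random.default_rng(0)
X = rng.random((100, 2000))          # 100 samples x 2000 genes
y = rng.integers(0, 2, 100)          # binary class labels

# Extraction step: ICA compresses the genes into a small set of components.
X_ica = FastICA(n_components=20, random_state=0).fit_transform(X)

# In the paper an ABC wrapper searches for the best component subset;
# here a single random candidate mask stands in for that search.
mask = rng.random(20) > 0.5
score = cross_val_score(GaussianNB(), X_ica[:, mask], y, cv=5).mean()
print("CV accuracy for this candidate subset:", score)
```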
NASA Astrophysics Data System (ADS)
Murakami, Hiroki; Watanabe, Tsuneo; Fukuoka, Daisuke; Terabayashi, Nobuo; Hara, Takeshi; Muramatsu, Chisako; Fujita, Hiroshi
2016-04-01
The term "locomotive syndrome" has been proposed to describe the state of requiring nursing care because of musculoskeletal disorders, as well as the high-risk condition for it. Reduction of knee extension strength is cited as one of the risk factors, and accurate measurement of the strength is needed for the evaluation. Measurement of knee extension strength using a dynamometer is one of the most direct and quantitative methods. This study aims to develop a system for estimating the knee extension strength using ultrasound images of the rectus femoris muscle obtained with non-invasive ultrasonic diagnostic equipment. First, we extract the muscle area from the ultrasound images and determine image features, such as the thickness of the muscle. We combine these features with physical features, such as the patient's height, and build a regression model of the knee extension strength from training data. We have developed a system for estimating the knee extension strength by applying the regression model to the features obtained from test data. On test data of 168 cases, the correlation coefficient between the measured and estimated values was 0.82. This result suggests that the system can estimate knee extension strength with high accuracy.
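A minimal sketch of the regression step, with entirely synthetic stand-ins for one image feature (muscle thickness) and one physical feature (height); the linear ground truth and all coefficients are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: thickness (mm), height (cm), strength (arbitrary units).
rng = np.random.default_rng(0)
thickness = rng.normal(20, 3, 200)
height = rng.normal(165, 8, 200)
strength = 1.5 * thickness + 0.4 * height + rng.normal(0, 5, 200)  # assumed model

X = np.column_stack([thickness, height])
X_tr, X_te, y_tr, y_te = train_test_split(X, strength, random_state=0)

# Fit the regression model on training data, evaluate correlation on test data.
model = LinearRegression().fit(X_tr, y_tr)
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print("correlation between estimated and measured strength:", round(r, 2))
```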
Evaluation of stabilization techniques for ion implant processing
NASA Astrophysics Data System (ADS)
Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Narcy, Mark E.; Livesay, William R.
1999-06-01
With the integration of high current ion implant processing into volume CMOS manufacturing, the need for photoresist stabilization to achieve a stable ion implant process is critical. This study compares electron beam stabilization, a non-thermal process, with more traditional thermal stabilization techniques such as hot plate baking and vacuum oven processing. The electron beam processing is carried out in a flood exposure system with no active heating of the wafer. These stabilization techniques are applied to typical ion implant processes that might be found in a CMOS production process flow. The stabilization processes are applied to a 1.1 micrometer thick PFI-38A i-line photoresist film prior to ion implant processing. Post-stabilization CD variation is detailed with respect to wall slope and feature integrity. SEM photographs detail the effects of the stabilization technique on photoresist features. The thermal stability of the photoresist is shown for different levels of stabilization and post-stabilization thermal cycling. Thermal flow stability of the photoresist is detailed via SEM photographs. A significant improvement in thermal stability is achieved with the electron beam process, such that photoresist features are stable to temperatures in excess of 200 degrees C. Ion implant processing parameters are evaluated and compared for the different stabilization methods. Ion implant system end-station chamber pressure is detailed as a function of ion implant process and stabilization condition. The ion implant process conditions are detailed for varying factors such as ion current, energy, and total dose. A reduction in the ion implant system's end-station chamber pressure is achieved with the electron beam stabilization process over the other techniques considered. This reduction in end-station chamber pressure is shown to provide a reduction in total process time for a given ion implant dose. Improvements in the ion implant process are detailed across several combinations of current and energy.
Hu, Jingwen; Flannagan, Carol A; Bao, Shan; McCoy, Robert W; Siasoco, Kevin M; Barbat, Saeed
2015-11-01
The objective of this study is to develop a method that uses a combination of field data analysis, naturalistic driving data analysis, and computational simulations to explore the potential injury reduction capabilities of integrating passive and active safety systems in frontal impact conditions. For the purposes of this study, the active safety system is actually a driver assist (DA) feature that has the potential to reduce delta-V prior to a crash, in frontal or other crash scenarios. A field data analysis was first conducted to estimate the delta-V distribution change based on an assumption of 20% crash avoidance resulting from a pre-crash braking DA feature. Analysis of changes in driver head location during 470 hard braking events in a naturalistic driving study found that drivers' head positions were mostly in the center position before the braking onset, while the percentage of time drivers leaning forward or backward increased significantly after the braking onset. Parametric studies with a total of 4800 MADYMO simulations showed that both delta-V and occupant pre-crash posture had pronounced effects on occupant injury risks and on the optimal restraint designs. By combining the results for the delta-V and head position distribution changes, a weighted average of injury risk reduction of 17% and 48% was predicted by the 50th percentile Anthropomorphic Test Device (ATD) model and human body model, respectively, with the assumption that the restraint system can adapt to the specific delta-V and pre-crash posture. This study demonstrated the potential for further reducing occupant injury risk in frontal crashes by the integration of a passive safety system with a DA feature. Future analyses considering more vehicle models, various crash conditions, and variations of occupant characteristics, such as age, gender, weight, and height, are necessary to further investigate the potential capability of integrating passive and DA or active safety systems.
Hypergraph Based Feature Selection Technique for Medical Diagnosis.
Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar
2016-11-01
The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the original information contained in multidimensional datasets plays a significant role in exploiting the complete benefit they provide. The presence of a large number of features in high dimensional datasets incurs a high computational cost in terms of computing power and time. Hence, feature selection techniques are commonly used to build robust machine learning models by selecting a subset of relevant features that projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K-Helly property of hypergraph representation, is designed to identify the optimal feature subset, or reduct, for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository demonstrate the superiority of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy and time complexity. The performance of RSKHT was validated using the WEKA tool, showing that RSKHT is computationally attractive and flexible over massive datasets.
Das, D K; Maiti, A K; Chakraborty, C
2015-03-01
In this paper, we propose a comprehensive image characterization and classification framework for malaria-infected stage detection using microscopic images of thin blood smears. The methodology mainly includes microscopic imaging of Leishman stained blood slides, noise reduction and illumination correction, erythrocyte segmentation, and feature selection followed by machine classification. Among three image segmentation algorithms (namely, rule-based, Chan-Vese-based and marker-controlled watershed methods), the marker-controlled watershed technique provides better boundary detection of erythrocytes, especially in overlapping situations. Microscopic features at intensity, texture and morphology levels are extracted to discriminate infected and noninfected erythrocytes. In order to obtain a subgroup of potential features, the feature selection techniques F-statistic and information gain are considered here for ranking. Finally, five different classifiers, namely Naive Bayes, multilayer perceptron neural network, logistic regression, classification and regression tree (CART) and RBF neural network, were trained and tested on 888 erythrocytes (infected and noninfected) for each feature subset. Performance evaluation of the proposed methodology shows that the multilayer perceptron network provides higher accuracy for malaria-infected erythrocyte recognition and infected stage classification. Results show that the top 90 features ranked by F-statistic (specificity: 98.64%, sensitivity: 100%, PPV: 99.73% and overall accuracy: 96.84%) and the top 60 features ranked by information gain (specificity: 97.29%, sensitivity: 100%, PPV: 99.46% and overall accuracy: 96.73%) provide the best results for malaria-infected stage classification. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
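A minimal sketch of the two feature-ranking criteria on synthetic data; mutual information is used here as a stand-in for information gain, the feature counts follow the abstract, and the data themselves are assumptions.

```python
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif

# Toy stand-in for per-erythrocyte intensity/texture/morphology features
# (the 120-feature width is an assumption, not a value from the paper).
rng = np.random.default_rng(0)
X = rng.random((888, 120))
y = rng.integers(0, 2, 888)

# Rank features by F-statistic and by information gain (mutual information).
f_scores, _ = f_classif(X, y)
mi_scores = mutual_info_classif(X, y, random_state=0)

top_f = np.argsort(f_scores)[::-1][:90]    # top 90 by F-statistic
top_mi = np.argsort(mi_scores)[::-1][:60]  # top 60 by information gain
print("overlap between the two rankings:", len(set(top_f) & set(top_mi)))
```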
The Reduction of Ducted Fan Engine Noise Via A Boundary Integral Equation Method
NASA Technical Reports Server (NTRS)
Tweed, J.; Dunn, M.
1997-01-01
The development of a Boundary Integral Equation Method (BIEM) for the prediction of ducted fan engine noise is discussed. The method is motivated by the need for an efficient and versatile computational tool to assist in parametric noise reduction studies. In this research, the work in reference 1 was extended to include passive noise control treatment on the duct interior. The BIEM considers the scattering of incident sound generated by spinning point thrust dipoles in a uniform flow field by a thin cylindrical duct. The acoustic field is written as a superposition of spinning modes. Modal coefficients of acoustic pressure are calculated term by term. The BIEM theoretical framework is based on Helmholtz potential theory. A boundary value problem is converted to a boundary integral equation formulation with unknown single and double layer densities on the duct wall. After solving for the unknown densities, the acoustic field is easily calculated. The main feature of the BIEM is the ability to compute any portion of the sound field without the need to compute the entire field. Other noise prediction methods, such as CFD and Finite Element methods, lack this property. Additional BIEM attributes include versatility, ease of use, rapid noise predictions, coupling of propagation and radiation both forward and aft, implementability on midrange personal computers, and validity over a wide range of frequencies.
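As a sketch of the layer-potential formulation mentioned above (standard Helmholtz potential theory; the notation is ours, not taken from the paper), the scattered pressure can be written in terms of single and double layer densities σ and μ on the duct surface S:

```latex
p_s(\mathbf{x}) = \int_S \left[ \mu(\mathbf{y})\,
    \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_{\mathbf{y}}}
    - \sigma(\mathbf{y})\, G(\mathbf{x},\mathbf{y}) \right] \mathrm{d}S_{\mathbf{y}},
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{e^{\,ik\,|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|}.
```

Enforcing the wall boundary condition on S converts this representation into a boundary integral equation for the unknown densities; once they are solved for, the field at any observation point follows from the same formula, which is why any portion of the sound field can be computed without computing the entire field.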
Liu, Ding-Yun; Gan, Tao; Rao, Ni-Ni; Xing, Yao-Wen; Zheng, Jie; Li, Sang; Luo, Cheng-Si; Zhou, Zhong-Jun; Wan, Yong-Li
2016-08-01
The gastrointestinal endoscopy in this study refers to conventional gastroscopy and wireless capsule endoscopy (WCE). Both techniques produce a large number of images in each diagnosis, and detecting lesions by hand from these images is time consuming and inaccurate. This study designed a new computer-aided method to detect lesion images. We initially designed an algorithm named joint diagonalisation principal component analysis (JDPCA), which involves no approximation, iteration or matrix inversion procedures; thus, JDPCA has a low computational complexity and is suitable for dimension reduction of gastrointestinal endoscopic images. Then, a novel image feature extraction method was established by combining the machine learning algorithm based on JDPCA with a conventional feature extraction algorithm without learning. Finally, a new computer-aided method is proposed to identify the gastrointestinal endoscopic images containing lesions. Clinical gastroscopic images and WCE images containing the lesions of early upper digestive tract cancer and small intestinal bleeding, comprising 1330 images from 291 patients in total, were used to confirm the validity of the proposed method. The experimental results show that, for the detection of early oesophageal cancer images, early gastric cancer images and small intestinal bleeding images, the mean accuracies of the proposed method were 90.75%, 90.75% and 94.34%, with standard deviations (SDs) of 0.0426, 0.0334 and 0.0235, respectively. The areas under the curves (AUCs) were 0.9471, 0.9532 and 0.9776, with SDs of 0.0296, 0.0285 and 0.0172, respectively. Compared with traditional related methods, our method showed better performance. It may therefore provide worthwhile guidance for improving the efficiency and accuracy of gastrointestinal disease diagnosis, and it has good prospects for clinical application. Copyright © 2016 Elsevier B.V. All rights reserved.
Performance of digital RGB reflectance color extraction for plaque lesion
NASA Astrophysics Data System (ADS)
Hashim, Hadzli; Taib, Mohd Nasir; Jailani, Rozita; Sulaiman, Saadiah; Baba, Roshidah
2005-01-01
Several clinical psoriasis lesion groups have been studied for digital RGB color feature extraction. Previous works used sample sets that included all the outliers lying beyond given standard deviation distances from the histogram peak. This paper describes the statistical performance of the RGB model with and without removing these outliers. Plaque lesions are compared with the other types of psoriasis. The statistical tests are compared with respect to three sample sizes: the original 90 samples, a first reduction obtained by removing outliers beyond two standard deviations (2SD), and a second reduction obtained by removing outliers beyond one standard deviation (1SD). Quantification of the image data through the normal/direct and differential variants of the conventional reflectance method is considered. Performance is assessed by observing error plots with 95% confidence intervals and through inferential t-tests. The statistical test outcomes show that the B component of the conventional differential method can be used to distinctively classify plaque from the other psoriasis groups, consistent with the error plot findings, with an improvement in p-value greater than 0.5.
Ultrasound speckle reduction based on fractional order differentiation.
Shao, Dangguo; Zhou, Ting; Liu, Fan; Yi, Sanli; Xiang, Yan; Ma, Lei; Xiong, Xin; He, Jianfeng
2017-07-01
Ultrasound images show a granular pattern of noise known as speckle that diminishes their quality and causes difficulties in diagnosis. To preserve edges and features, this paper proposes a fractional differentiation-based image operator to reduce speckle in ultrasound. An image de-noising model based on fractional partial differential equations, with a balance relation between k (the gradient modulus threshold that controls the conduction) and v (the order of fractional differentiation), was constructed by effectively combining fractional calculus theory with a partial differential equation; its numerical algorithm was realized using a fractional differential mask operator. The proposed algorithm achieves better speckle reduction and structure preservation than three existing methods (the P-M model, the speckle reducing anisotropic diffusion (SRAD) technique, and the detail preserving anisotropic diffusion (DPAD) technique), and it is significantly faster than bilateral filtering (BF) while producing virtually the same experimental results. Ultrasound phantom testing and in vivo imaging show that the proposed method can improve the quality of an ultrasound image in terms of tissue SNR, CNR, and FOM values.
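A common way to build such a fractional differential mask is the Grünwald-Letnikov approximation; the sketch below (our illustration, not the paper's exact operator) computes the 1D mask coefficients for a fractional order v and applies them along image rows.

```python
import numpy as np

def gl_coefficients(v, n_terms=8):
    """Grunwald-Letnikov coefficients w_k = (-1)^k * binom(v, k)."""
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (k - 1 - v) / k
    return w

def fractional_diff_rows(img, v=0.5, n_terms=8):
    """Apply the 1D fractional-difference mask along each image row."""
    w = gl_coefficients(v, n_terms)
    out = np.zeros_like(img, dtype=float)
    for k, wk in enumerate(w):
        # out[:, j] accumulates w_k * img[:, j - k]
        out[:, k:] += wk * img[:, :img.shape[1] - k]
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for an ultrasound image
print(fractional_diff_rows(img, v=0.5).shape)
```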
NASA Astrophysics Data System (ADS)
Aghamaleki, Javad Abbasi; Behrad, Alireza
2018-01-01
Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are studied mathematically. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used to reduce dimensionality and to enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
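A minimal sketch of the KPCA reduction step under assumed dimensions: prior samples of a high-dimensional field are mapped to a low-dimensional feature space, where a sampler such as LMCMC would operate, and the learned pre-image map carries latent points back to parameter space for forward-model evaluations. Kernel choice and all sizes are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy stand-in for prior draws of a high-dimensional random field
# (300 samples of a 1000-dimensional parameter vector; sizes assumed).
rng = np.random.default_rng(0)
samples = rng.normal(size=(300, 1000))

# KPCA identifies a low-dimensional feature space; MCMC would then run there.
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
z = kpca.fit_transform(samples)          # (300, 10) latent coordinates

# Map a latent point back to the full parameter space (pre-image estimate),
# as needed when evaluating the forward model inside the sampler.
theta = kpca.inverse_transform(z[:1])
print(z.shape, theta.shape)
```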
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Tianfeng
The goal of the proposed research is to create computational flame diagnostics (CFLD) that are rigorous numerical algorithms for systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principle simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable for 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction etc., and the rate controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.
Digital Signal Processing For Low Bit Rate TV Image Codecs
NASA Astrophysics Data System (ADS)
Rao, K. R.
1987-06-01
In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real time full motion color video are under various stages of development. Some companies have already brought the codecs into the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freezeframe, split video, text overlay etc. To transmit the original color video on a 56 KBPS network requires bit rate reduction of the order of 1400:1. Such a large scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are quantizer, bit allocation, buffer, multiplexer, channel coding etc.
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
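To illustrate the computational benefit of pairing a small representative subset with Nyström feature approximation, here is a sketch under assumed data; a plain ridge classifier stands in for the primal LS-SVM solution, and random subsampling stands in for the paper's density-dependent quantization.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 30))             # large data set (sizes assumed)
y = (X[:, 0] + 0.1 * rng.normal(size=20000) > 0).astype(int)

# Stand-in for the density-dependent quantization: a small representative
# subset (the paper's DQS adapts this choice to the input data density).
subset = X[rng.choice(len(X), size=500, replace=False)]

# Nystroem approximates the kernel feature map using only the subset.
mapper = Nystroem(kernel="rbf", gamma=0.1, n_components=200, random_state=0)
mapper.fit(subset)
Z = mapper.transform(X)

clf = RidgeClassifier().fit(Z, y)            # linear solve in the primal space
print("training accuracy:", clf.score(Z, y))
```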
Identification of appropriate patients for cardiometabolic risk management.
Peters, Anne L
2007-01-01
Patients at increased risk for cardiovascular disease have a wide array of clinical features that should alert practitioners to the need for risk reduction. Some, but not all, of these features relate to insulin resistance. Multiple approaches exist for diagnosing and defining this risk, including the traditional Framingham risk assessment, various definitions of the metabolic syndrome, and assessment of risk factors not commonly included in the standard criteria. This article reviews the many clinical findings that should alert healthcare providers to the need for aggressive cardiovascular risk reduction.
Simultaneous fabrication of very high aspect ratio positive nano- to milliscale structures.
Chen, Long Qing; Chan-Park, Mary B; Zhang, Qing; Chen, Peng; Li, Chang Ming; Li, Sai
2009-05-01
A simple and inexpensive technique for the simultaneous fabrication of positive (i.e., protruding), very high aspect ratio (>10) nanostructures together with micro- or millistructures is developed. The method involves using residual patterns of thin-film over-etching (RPTO) to produce sub-micro-/nanoscale features. The residual thin-film nanopattern is used as an etching mask for Si deep reactive ion etching. The etched Si structures are further reduced in size by Si thermal oxidation to produce amorphous SiO(2), which is subsequently etched away by HF. Two arrays of positive Si nanowalls are demonstrated with this combined RPTO-SiO(2)-HF technique: one array has a feature size of 150 nm and an aspect ratio of 26.7, and the other has a feature size of 50 nm and an aspect ratio of 15. No other parallel reduction technique can achieve such a high aspect ratio for 50-nm-wide nanowalls. To demonstrate the technique's ability to simultaneously achieve nano- and milliscale features, a simple Si nanofluidic master mold with positive features, with dimensions varying continuously from 1 mm to 200 nm and a highest aspect ratio of 6.75, is fabricated; the narrow 200-nm section is 4.5 mm long. This Si master mold is then used as a mold for UV embossing, and the embossed open channels are closed by a cover with glue bonding. A high aspect ratio is necessary to produce unblocked closed channels after the cover bonding process of the nanofluidic chip. The combined method of RPTO, Si thermal oxidation, and HF etching can be used to make complex nanofluidic systems and nano-/micro-/millistructures for diverse applications.
Qian, Xiaohua; Tan, Hua; Zhang, Jian; Zhao, Weilin; Chan, Michael D.; Zhou, Xiaobo
2016-01-01
Purpose: Pseudoprogression (PsP) can mimic true tumor progression (TTP) on magnetic resonance imaging in patients with glioblastoma multiforme (GBM). The phenotypical similarity between PsP and TTP makes it a challenging task for physicians to distinguish these entities. So far, no approved biomarkers or computer-aided diagnosis systems have been used clinically for this purpose. Methods: To address this challenge, the authors developed an objective classification system for PsP and TTP based on longitudinal diffusion tensor imaging. A novel spatio-temporal discriminative dictionary learning scheme was proposed to differentiate PsP and TTP, thereby avoiding segmentation of the region of interest. The authors constructed a novel discriminative sparse matrix with the classification-oriented dictionary learning approach by excluding the shared features of the two categories, so that the pooled features captured the subtle difference between PsP and TTP. The most discriminating features were then identified from the pooled features by their feature scoring system. Finally, the authors stratified patients with GBM into PsP and TTP by a support vector machine approach. Tenfold cross-validation (CV) and the area under the receiver operating characteristic curve (AUC) were used to assess the robustness of the developed system. Results: The average accuracy and AUC values after ten rounds of tenfold CV were 0.867 and 0.92, respectively. The authors also assessed the effects of different methods and factors (such as data types, pooling techniques, and dimensionality reduction approaches) on the performance of the classification system in order to identify the best-performing configuration. Conclusions: The proposed objective classification system without segmentation achieved a desirable and reliable performance in differentiating PsP from TTP. Thus, the developed approach is expected to advance the clinical research and diagnosis of PsP and TTP. PMID:27806598
Understanding the Tectonic Features in the South China Sea By Analyzing Magnetic Anomalies
NASA Astrophysics Data System (ADS)
Guo, L.; Meng, X.; Shi, L.; Yao, C.
2011-12-01
The South China Sea (SCS) is surrounded by the Eurasia, Pacific and India-Australia plates. It formed during the Late Oligocene-Early Miocene and is one of the largest marginal seas in the Western Pacific. The collision of the Indian subcontinent with the Eurasian plate in the northwest, back-arc spreading in the centre, and subduction beneath the Philippine plate along the Manila trench in the east and along the Palawan trough in the south produced the complex tectonic features that we see in the SCS today. In the past few decades, a variety of geophysical methods have been applied to study the geological tectonics and evolution of the SCS. Here, we analyzed the magnetic data of this area using new data enhancement techniques to understand the regional tectonic features. We assembled magnetic anomaly data with a resolution of two arc-minutes from the World Digital Magnetic Anomaly Map and gridded the data on a regular grid. We then used the method of reduction to the pole at low latitude with varying magnetic inclinations to stably reduce the magnetic anomalies, and the preferential continuation method based on Wiener filtering and Green's equivalence principle to separate the reduced-to-pole (RTP) magnetic anomalies into regional and residual anomalies for subsequent analysis. We also calculated the directional horizontal derivatives and the tilt-angle derivative of the data to derive clearer geological structures with more details, and the depth of the magnetic basement surface in the area by 3D interface inversion. From the results of this preliminary processing, we analyzed the main faults, geological structures, magma distribution and tectonic features in the SCS. In the future, an integrated interpretation of the RTP magnetic anomalies, Bouguer gravity anomalies and other geophysical data will be performed to better understand the deep structure, tectonic features and evolution of the South China Sea. Acknowledgment: We acknowledge the financial support of the SinoProbe project (201011039), the Fundamental Research Funds for the Central Universities (2010ZY26, 2011PY0184), and the National Natural Science Foundation of China (40904033, 41074095).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua
2017-01-01
Dendritic nanostructures are attracting increasing attention in electrocatalysis owing to their unique structural features and low density. Herein, we report for the first time the bromide-ion-mediated synthesis of low-Pt-content PdCuPt ternary nanodendrites via a galvanic replacement reaction between a Pt precursor and a PdCu template in aqueous solution. The experimental results show that the ternary PdCuPt nanodendrites present enhanced electrocatalytic performance for the oxygen reduction reaction in acid solution compared with commercial Pt/C as well as some state-of-the-art catalysts. In detail, the mass activity of the PdCuPt catalyst with optimized composition is 1.73 A/mgPt at 0.85 V vs RHE, which is 14 times higher than that of the commercial Pt/C catalyst. Moreover, a long-term stability test demonstrates its better durability in acid solution: after 5000 cycles, 70% of the electrochemical surface area is still maintained. This method provides an efficient way to synthesize trimetallic alloys with controllable composition and specific structure for the oxygen reduction reaction.
ARCH: Adaptive recurrent-convolutional hybrid networks for long-term action recognition
Xin, Miao; Zhang, Hong; Wang, Helong; Sun, Mingui; Yuan, Ding
2017-01-01
Recognition of human actions from digital video is a challenging task due to complex interfering factors in uncontrolled realistic environments. In this paper, we propose a learning framework using static, dynamic and sequential mixed features to solve three fundamental problems: spatial domain variation, temporal domain polytrope, and intra- and inter-class diversities. Utilizing a cognitive-based data reduction method and a hybrid “network upon networks” architecture, we extract human action representations that are robust against spatial and temporal interferences and adaptive to variations in both action speed and duration. We evaluated our method on UCF101 and three other challenging datasets. Our results demonstrate the superior performance of the proposed algorithm in human action recognition. PMID:29290647
Machine Learning: A Crucial Tool for Sensor Design
Zhao, Weixiang; Bhushan, Abhinav; Santamaria, Anthony D.; Simon, Melinda G.; Davis, Cristina E.
2009-01-01
Sensors have been widely used for disease diagnosis, environmental quality monitoring, food quality control, industrial process analysis and control, and other related fields. As a key tool for sensor data analysis, machine learning is becoming a core part of novel sensor design. Dividing a complete machine learning process into three steps: data pre-treatment, feature extraction and dimension reduction, and system modeling, this paper provides a review of the methods that are widely used for each step. For each method, the principles and the key issues that affect modeling results are discussed. After reviewing the potential problems in machine learning processes, this paper gives a summary of current algorithms in this field and provides some feasible directions for future studies. PMID:20191110
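The three-step decomposition above maps naturally onto a processing pipeline; the sketch below chains illustrative choices for each step (scaling for pre-treatment, PCA for feature extraction and dimension reduction, an SVM for system modeling) on a stand-in dataset, none of which are prescribed by the review.

```python
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# The three steps named in the review, chained as one pipeline:
# 1) data pre-treatment, 2) feature extraction / dimension reduction,
# 3) system modeling.
pipe = Pipeline([
    ("pretreat", StandardScaler()),
    ("reduce", PCA(n_components=2)),
    ("model", SVC(kernel="rbf")),
])

X, y = load_iris(return_X_y=True)   # stand-in for sensor measurements
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```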
2011-01-01
Background In the past decade, the use of technologies to persuade, motivate, and activate individuals’ health behavior change has been a quickly expanding field of research. The use of the Web for delivering interventions has been especially relevant. Current research tends to reveal little about the persuasive features and mechanisms embedded in Web-based interventions targeting health behavior change. Objectives The purpose of this systematic review was to extract and analyze persuasive system features in Web-based interventions for substance use by applying the persuasive systems design (PSD) model. In more detail, the main objective was to provide an overview of the persuasive features within current Web-based interventions for substance use. Methods We conducted electronic literature searches in various databases to identify randomized controlled trials of Web-based interventions for substance use published January 1, 2004, through December 31, 2009, in English. We extracted and analyzed persuasive system features of the included Web-based interventions using interpretive categorization. Results The primary task support components were utilized and reported relatively widely in the reviewed studies. Reduction, self-monitoring, simulation, and personalization seem to be the most used features to support accomplishing user’s primary task. This is an encouraging finding since reduction and self-monitoring can be considered key elements for supporting users to carry out their primary tasks. The utilization of tailoring was at a surprisingly low level. The lack of tailoring may imply that the interventions are targeted for too broad an audience. Leveraging reminders was the most common way to enhance the user-system dialogue. Credibility issues are crucial in website engagement as users will bind with sites they perceive credible and navigate away from those they do not find credible. Based on the textual descriptions of the interventions, we cautiously suggest that most of them were credible. The prevalence of social support in the reviewed interventions was encouraging. Conclusions Understanding the persuasive elements of systems supporting behavior change is important. This may help users to engage and keep motivated in their endeavors. Further research is needed to increase our understanding of how and under what conditions specific persuasive features (either in isolation or collectively) lead to positive health outcomes in Web-based health behavior change interventions across diverse health contexts and populations. PMID:21795238
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
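A minimal sketch of the kind of comparison described above, on a stand-in dataset: the accuracy of a simple classifier is recorded as the number of retained dimensions increases (here with PCA only; the digits data and classifier choice are our assumptions).

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Accuracy versus number of retained dimensions; digits data is a stand-in
# for the object-code feature vectors used in the paper.
X, y = load_digits(return_X_y=True)
for d in (2, 5, 10, 20, 40):
    Xd = PCA(n_components=d).fit_transform(X)
    acc = cross_val_score(GaussianNB(), Xd, y, cv=5).mean()
    print(f"{d:2d} dimensions: accuracy = {acc:.3f}")
```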
Effective diagnosis of Alzheimer's disease by means of large margin-based methodology.
Chaves, Rosa; Ramírez, Javier; Górriz, Juan M; Illán, Ignacio A; Gómez-Río, Manuel; Carnero, Cristobal
2012-07-31
Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS or PCA reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when an NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate variation more stable. Another advance is the generalization ability, since the experiments were performed on two image modalities (SPECT and PET).
2012-01-01
We have developed a method for obtaining a direct pattern of silver nanoparticles (NPs) on porous silicon (p-Si) by means of inkjet printing (IjP) of a silver salt. Silver NPs were obtained by p-Si mediated in-situ reduction of Ag+ cations using AgNO3-based solutions that were directly printed on p-Si according to specific geometries and process parameters. The main difference with respect to the existing literature is that inkjet printing is normally applied to suspensions of silver (metal) NPs, whereas in our experiment the NPs are formed after jetting the solution onto the reactive substrate. We performed both optical and scanning electron microscopy on the NP traces, correlating the morphological features with the IjP parameters and giving insight into the synthesis kinetics. The patterned NPs show good performance as SERS substrates. PMID:22953722
Progress in standoff surface contaminant detector platform
NASA Astrophysics Data System (ADS)
Dupuis, Julia R.; Giblin, Jay; Dixon, John; Hensley, Joel; Mansur, David; Marinelli, William J.
2017-05-01
Progress towards the development of a longwave infrared quantum cascade laser (QCL) based standoff surface contaminant detection platform is presented. The detection platform utilizes reflectance spectroscopy with application to optically thick and thin materials including solid and liquid phase chemical warfare agents, toxic industrial chemicals and materials, and explosives. The platform employs an ensemble of broadband QCLs with a spectrally selective detector to interrogate target surfaces at tens of meters standoff. A version of the Adaptive Cosine Estimator (ACE) featuring class-based screening is used for detection and discrimination in high clutter environments. Detection limits approaching 0.1 μg/cm2 are projected through speckle reduction methods enabling detector noise limited performance. The design, build, and validation of a breadboard version of the QCL-based surface contaminant detector are discussed. Functional test results specific to the QCL illuminator are presented, with particular emphasis on speckle reduction.
Yue, Yuanyuan; Liu, Haiyan; Yuan, Pei; Yu, Chengzhong; Bao, Xiaojun
2015-01-01
Iron-modified ZSM-5 zeolites (FeZSM-5s) have been considered a promising catalyst system for reducing nitrogen oxide emissions, one of the most important global environmental issues, but their synthesis faces enormous economic and environmental challenges. Herein we report a cheap and green strategy to fabricate hierarchical FeZSM-5 zeolites from natural aluminosilicate minerals via a nanoscale depolymerization-reorganization method. Our strategy neither uses any aluminum-, silicon-, or iron-containing inorganic chemical nor involves any mesoscale template or post-synthetic modification. Compared with a conventional FeZSM-5 synthesized from inorganic chemicals with a similar Fe content, the resulting hierarchical FeZSM-5 with highly dispersed iron species showed superior catalytic activity in the selective catalytic reduction of NO by NH3. PMID:25791958
Cho, Taehoon; Yoon, Chang Won; Kim, Joohoon
2018-06-13
In this study, we report the controllable synthesis of dendrimer-encapsulated Pt nanoparticles (Pt DENs) utilizing repetitively coupled chemical reduction and galvanic exchange reactions. The synthesis strategy allows the expansion of the applicable number of Pt atoms encapsulated inside dendrimers to more than 1000 without being limited by the fixed number of complexation sites for Pt2+ precursor ions in the dendrimers. The synthesis of Pt DENs is achieved in a short period of time (i.e., ∼10 min) simply by the coaddition of appropriate amounts of Cu2+ and Pt2+ precursors into an aqueous dendrimer solution and subsequent addition of reducing agents such as BH4-, resulting in fast and selective complexation of Cu2+ with the dendrimers and subsequent chemical reduction of the complexed Cu2+ while uncomplexed Pt2+ precursors remain oxidized. Interestingly, the chemical reduction of Cu2+, leading to the formation of Cu nanoparticles encapsulated inside the dendrimers, is coupled with the galvanic exchange of the Cu nanoparticles with the nearby Pt2+. This coupling proceeds repetitively until all of the added Pt2+ ions form Pt nanoparticles encapsulated inside the dendrimers. In contrast to the conventional method utilizing direct chemical reduction, this repetitively coupled chemical reduction and galvanic exchange enables a substantial increase in the applicable number of Pt atoms, up to 1320, in Pt DENs while maintaining the unique features of DENs.
Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo
2016-01-01
Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learning feature representations and strong generalization ability. A network structure specified for nodule images is proposed to recognize three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). The deep convolutional neural networks are trained on 62,492 regions-of-interest (ROI) samples, including 40,772 nodules and 21,720 nonnodules, from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy, and show that it consistently outperforms the competing methods.
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed by incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
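A minimal sketch of the Random Projections step on synthetic features: pairwise distances are approximately preserved after projection (the Johnson-Lindenstrauss property), which is what makes policy computation in the projected subspace viable. All dimensions here are assumptions.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

# Toy stand-in for high-dimensional state features (sizes assumed).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 500))      # 1000 states x 500 raw features

# Project onto a random 50-dimensional subspace; policy evaluation (e.g.,
# least-squares policy iteration) would then operate on these features.
proj = GaussianRandomProjection(n_components=50, random_state=0)
low_dim = proj.fit_transform(features)       # (1000, 50)

# Check that distances between two states are roughly preserved.
i, j = 0, 1
d_hi = np.linalg.norm(features[i] - features[j])
d_lo = np.linalg.norm(low_dim[i] - low_dim[j])
print("distance ratio after projection:", d_lo / d_hi)
```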
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios Velazquez, E; Parmar, C; Narayan, V
Purpose: To compare the complementary value of quantitative radiomic features to that of radiologist-annotated semantic features in predicting EGFR mutations in lung adenocarcinomas. Methods: Pre-operative CT images of 258 lung adenocarcinoma patients were available. Tumors were segmented using the single-click ensemble segmentation algorithm. A set of radiomic features was extracted using 3D-Slicer. Test-retest reproducibility and unsupervised dimensionality reduction were applied to select a subset of reproducible and independent radiomic features. Twenty semantic annotations were scored by an expert radiologist, describing the tumor, surrounding tissue and associated findings. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative radiomic and semantic features in 172 patients (training set, temporal split). Radiomic, semantic and combined radiomic-semantic logistic regression models to predict EGFR mutations were evaluated in an independent validation dataset of 86 patients using the area under the receiver operating curve (AUC). Results: EGFR mutations were found in 77/172 (45%) and 39/86 (45%) of the training and validation sets, respectively. Univariate AUCs showed a similar range for both feature types: radiomics median AUC = 0.57 (range: 0.50 – 0.62); semantic median AUC = 0.53 (range: 0.50 – 0.64, Wilcoxon p = 0.55). After MRMR feature selection, the best-performing radiomic, semantic, and radiomic-semantic logistic regression models for EGFR mutations showed validation AUCs of 0.56 (p = 0.29), 0.63 (p = 0.063) and 0.67 (p = 0.004), respectively. Conclusion: Quantitative volumetric and textural radiomic features complement the qualitative and semi-quantitative radiologist annotations. The prognostic value of informative qualitative semantic features such as cavitation and lobulation is increased with the addition of quantitative textural features from the tumor region.
Xia, Yidong; Luo, Hong; Frisbey, Megan; ...
2014-07-01
A set of implicit methods is proposed for a third-order hierarchical WENO reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids. An attractive feature of these methods is the application of the Jacobian matrix based on the P1 element approximation, resulting in a huge reduction of memory requirements compared with DG(P2). Three approaches -- analytical derivation, divided differencing, and automatic differentiation (AD) -- are presented to construct the Jacobian matrix, of which the AD approach shows the best robustness. A variety of compressible flow problems are computed to demonstrate the fast convergence property of the implemented flow solver. Furthermore, an SPMD (single program, multiple data) programming paradigm based on MPI is proposed to achieve parallelism. The numerical results on complex geometries indicate that this low-storage implicit method can provide a viable and attractive DG solution for complicated flows of practical importance.
Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection.
Dou, Qi; Chen, Hao; Yu, Lequan; Qin, Jing; Heng, Pheng-Ann
2017-07-01
False positive reduction is one of the most crucial components in an automated pulmonary nodule detection system, which plays an important role in lung cancer diagnosis and early treatment. The objective of this paper is to effectively address the challenges in this task and therefore to accurately discriminate the true nodules from a large number of candidates. We propose a novel method employing three-dimensional (3-D) convolutional neural networks (CNNs) for false positive reduction in automated pulmonary nodule detection from volumetric computed tomography (CT) scans. Compared with its 2-D counterparts, the 3-D CNNs can encode richer spatial information and extract more representative features via their hierarchical architecture trained with 3-D samples. More importantly, we further propose a simple yet effective strategy to encode multilevel contextual information to meet the challenges coming with the large variations and hard mimics of pulmonary nodules. The proposed framework has been extensively validated in the LUNA16 challenge held in conjunction with ISBI 2016, where we achieved the highest competition performance metric (CPM) score in the false positive reduction track. Experimental results demonstrated the importance and effectiveness of integrating multilevel contextual information into 3-D CNN framework for automated pulmonary nodule detection in volumetric CT data. While our method is tailored for pulmonary nodule detection, the proposed framework is general and can be easily extended to many other 3-D object detection tasks from volumetric medical images, where the targeting objects have large variations and are accompanied by a number of hard mimics.
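As a rough illustration of a 3-D CNN candidate classifier, here is a single-context sketch; the paper's multilevel architecture fuses several receptive fields, and the layer sizes, channel counts, and input patch size below are our assumptions, not the published network.

```python
import torch
import torch.nn as nn

# A small 3-D CNN that classifies a volumetric CT patch as nodule vs non-nodule.
class Small3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # 16^3 -> 8^3
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, 2)

    def forward(self, x):                          # x: (batch, 1, 32, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = Small3DCNN()(torch.randn(4, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```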
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia Lixin; Wang Haibo; Wang Jian
A sensitive silver substrate for surface-enhanced Raman scattering (SERS) spectroscopy is synthesized under multimode microwave irradiation. The microwave-assisted synthesis of the SERS-active substrate was carried out in a modified domestic microwave oven operating at 2450 MHz, and the reductive reaction was conducted in a polypropylene container under microwave irradiation at a power of 100 W for 5 min. Formaldehyde was employed as both the reductant and the microwave absorber in the reductive process. The effects of different heating methods (microwave dielectric and conventional) on the properties of the SERS-active substrates were investigated. Samples obtained with 5 min of microwave irradiation at a power of 100 W have better-defined edges and corners and sharper surface features, while the samples synthesized with 1 h of conventional heating at 40 °C consist primarily of spheroidal nanoparticles. The SERS peak intensity of the ~1593 cm⁻¹ band of 4-mercaptobenzoic acid adsorbed on silver nanoparticles synthesized with 5 min of microwave irradiation at a power of 100 W is about 30 times greater than when it is adsorbed on samples synthesized with 1 h of conventional heating at 40 °C. The results of quantum chemical calculations are in good agreement with our experimental data. This method is expected to be utilized for the synthesis of other metal nanostructural materials.
NASA Astrophysics Data System (ADS)
Marble, Jay A.; Gorman, John D.
1999-08-01
A feature based approach is taken to reduce the occurrence of false alarms in foliage penetrating, ultra-wideband, synthetic aperture radar data. A set of 'generic' features is defined based on target size, shape, and pixel intensity. A second set of features is defined that contains the generic features combined with features based on scattering phenomenology. Each set is combined using a quadratic polynomial discriminant (QPD), and performance is characterized by generating a receiver operating characteristic (ROC) curve. Results show that the feature set containing phenomenological features improves performance against both broadside and end-on targets; the improvement against end-on targets is especially pronounced.
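A quadratic polynomial discriminant can be sketched as a degree-2 feature expansion followed by a linear discriminant, scored with an ROC curve. The features below are random stand-ins for the generic and phenomenological features described above.

```python
# Sketch of a quadratic polynomial discriminant (QPD) scored by ROC AUC.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_curve, auc
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))   # size/shape/intensity + phenomenological stand-ins
y = (X[:, 0] + 0.5 * X[:, 1]**2 + rng.normal(size=500) > 0.5).astype(int)

# A degree-2 expansion followed by a linear discriminant acts as a QPD.
qpd = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearDiscriminantAnalysis())
qpd.fit(X[:350], y[:350])
scores = qpd.decision_function(X[350:])
fpr, tpr, _ = roc_curve(y[350:], scores)
print(f"ROC AUC: {auc(fpr, tpr):.2f}")
```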
Globally scalable generation of high-resolution land cover from multispectral imagery
NASA Astrophysics Data System (ADS)
Stutts, S. Craig; Raskob, Benjamin L.; Wenger, Eric J.
2017-05-01
We present an automated method of generating high-resolution (~2 m) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
Nutrition labeling reduces valuations of food through multiple health and taste channels.
Fisher, Geoffrey
2018-01-01
One popularized technique to promote healthy dietary choice involves posting calorie or other nutritional information at the time individuals make a consumption decision. While the evidence on the effectiveness of such interventions is mixed, relatively little work has focused on the underlying mechanisms of how such labels alter behavior. In the research reported here, we asked 87 hungry laboratory subjects to make bids on foods with or without nutrition labels present. We found that the presence of a nutrition label reduced bids by an average of 25 cents. Furthermore, we found this reduction was driven not only by differences in perceptions and the importance individuals placed on health features of the foods, but also by differences in the importance individuals placed on more visceral taste features. These results help explain the various ways in which nutritional information postings or other policy tools can nudge individuals to consume healthier options. Copyright © 2017 Elsevier Ltd. All rights reserved.
Satellite Elevation Magnetic and Gravity Models of Major South American Plate Tectonic Features
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Lidiak, E. G.; Keller, G. R. (Principal Investigator); Longacre, M. B.
1984-01-01
MAGSAT scalar and vector magnetic anomaly data, together with regional gravity anomaly data, are being used to investigate the regional tectonic features of the South American Plate. An initial step in this analysis is three-dimensional modeling of the magnetic and gravity anomalies of major structures, such as the Andean subduction zone and the Amazon River Aulacogen, at satellite elevations over an appropriate range of physical properties using the Gauss-Legendre quadrature integration method. In addition, one-degree average free-air gravity anomalies of South America and adjacent marine areas are projected to satellite elevations assuming a spherical Earth, and available MAGSAT data are processed to obtain compatible data sets for correlation. Correlation of these data sets is enhanced by reduction of the MAGSAT data to radial polarization because of the profound effect of the variation of magnetic inclination over South America.
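For reference, Gauss-Legendre quadrature, the integration scheme named above, approximates an integral by weighted evaluations at the Legendre nodes. A minimal NumPy sketch, with a toy integrand standing in for an anomaly kernel:

```python
# Sketch of Gauss-Legendre quadrature; a gravity/magnetic forward model
# would replace f with the anomaly kernel of the 3D body.
import numpy as np

def gauss_legendre(f, a, b, n=8):
    """Integrate f over [a, b] with n Gauss-Legendre nodes."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Example: the integral of x^2 over [0, 3] is exactly 9.
print(gauss_legendre(lambda t: t**2, 0.0, 3.0))  # ~9.0
```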
Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven
2016-08-01
Psychotherapy requires appropriate recognition of a patient's facial emotion expression to provide proper treatment during a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system combining the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the face, the eyes, the mouth, and both the eyes and the mouth together. Seven classes of basic emotions were successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed E-HOG descriptor is much cheaper to compute than traditional HOG, as shown by a processing-time improvement as high as 1833.33% (p-value = 2.43E-17) with a slight accuracy reduction of only 1.17% (p-value = 0.0016).
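E-HOG is the paper's own descriptor, so the sketch below is only a plausible approximation under stated assumptions: gradients are gated by an edge map before standard HOG binning, which is one way to make the descriptor leaner than dense HOG.

```python
# Hedged sketch of an edge-gated HOG in the spirit of E-HOG: pixels are
# kept only where an edge detector fires before histogramming. This is an
# illustration, not the paper's exact descriptor.
import numpy as np
from skimage import data, feature

face = data.camera().astype(float) / 255.0   # stand-in for a face crop
edges = feature.canny(face, sigma=2.0)       # binary edge mask
gated = face * edges                         # suppress non-edge pixels

# Standard HOG on the edge-gated image; far fewer active cells to encode.
descriptor = feature.hog(gated, orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print(descriptor.shape)
```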
Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction
NASA Astrophysics Data System (ADS)
Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.
1994-04-01
Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic learning, Kohonen Feature Maps are particularly suitable for reducing the high-dimensional data space that results from a dynamic gesture, and are thus implemented for this task.
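A Kohonen feature map reduces each high-dimensional gesture sample to the 2-D grid coordinates of its best-matching unit. The NumPy sketch below uses toy data and an illustrative training schedule:

```python
# Minimal Kohonen self-organizing map: high-dimensional glove samples are
# mapped onto a small 2-D grid (toy data, illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 18))        # e.g., 18 joint-angle readings
grid_w, grid_h, dim = 8, 8, data.shape[1]
weights = rng.normal(size=(grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                       # decaying learning rate
    radius = max(1.0, grid_w / 2 * (1 - epoch / 20))  # shrinking neighborhood
    for x in data:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        d = np.linalg.norm(coords - coords[bmu], axis=1)
        h = np.exp(-(d / radius) ** 2)                # neighborhood function
        weights += lr * h[:, None] * (x - weights)

# Each gesture frame now reduces to the 2-D grid position of its BMU.
print(coords[np.argmin(np.linalg.norm(weights - data[0], axis=1))])
```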
Facades structure detection by geometric moment
NASA Astrophysics Data System (ADS)
Jiang, Diqiong; Chen, Hui; Song, Rui; Meng, Lei
2017-06-01
This paper proposes a novel method for extracting facades structure from real-world pictures by using local geometric moments. Compared with existing methods, the proposed method has the advantages of being easy to implement, having low computational cost, and being robust to noise such as uneven illumination, shadow, and shade from other objects. Besides, our method is faster and has a lower space complexity, making it feasible for mobile devices and situations where real-time data processing is required. Specifically, a facades structure model is first proposed to support our noise reduction method, which is based on a self-adaptive local threshold with a Gaussian weighted average for image binarization and on the features of the facades structure. Next, we divide the picture of the building into many individual areas, each of which represents a door or a window in the picture. Subsequently, we calculate the geometric moment and centroid for each individual area, identifying collinear areas based on their feature vectors, each of which is thereafter replaced with a line. Finally, we comprehensively analyze all the geometric moments and centroids to find the facades structure of the building. We compare our results with other methods and, in particular, report results from pictures taken in poor environmental conditions. Our system is designed for two applications: the reconstruction of facades from high-resolution ground-based imagery, and positioning systems based on recognizing urban buildings.
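For concreteness, the raw geometric moments and centroid used above can be computed directly from a binarized region; the rectangular "window" below is a toy stand-in for a segmented facade opening.

```python
# Sketch of raw geometric moments and the centroid of one binarized facade
# region, as used to group collinear openings.
import numpy as np

def raw_moment(region, p, q):
    """M_pq = sum over pixels of x^p * y^q for a binary region."""
    ys, xs = np.nonzero(region)
    return np.sum((xs ** p) * (ys ** q))

region = np.zeros((40, 40))
region[10:20, 5:15] = 1                   # toy rectangular "window"

m00 = raw_moment(region, 0, 0)            # area
cx = raw_moment(region, 1, 0) / m00       # centroid x
cy = raw_moment(region, 0, 1) / m00       # centroid y
print(m00, cx, cy)                        # 100.0, 9.5, 14.5
```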
Lo, P; Young, S; Kim, H J; Brown, M S; McNitt-Gray, M F
2016-08-01
To investigate the effects of dose level and reconstruction method on density and texture based features computed from CT lung nodules. This study had two major components. In the first component, a uniform water phantom was scanned at three dose levels and images were reconstructed using four conventional filtered backprojection (FBP) and four iterative reconstruction (IR) methods, for a total of 24 different combinations of acquisition and reconstruction conditions. In the second component, raw projection (sinogram) data were obtained for 33 lung nodules from patients scanned as a part of their clinical practice; low-dose acquisitions were simulated by adding noise to sinograms acquired at clinical dose levels (a total of four dose levels) and reconstructed using one FBP kernel and two IR kernels, for a total of 12 conditions. For the water phantom, spherical regions of interest (ROIs) were created at multiple locations within the water phantom on one reference image obtained at a reference condition. For the lung nodule cases, the ROI of each nodule was contoured semiautomatically (with manual editing) from images obtained at a reference condition. All ROIs were applied to their corresponding images reconstructed at the other conditions. For 17 of the nodule cases, repeat contours were performed to assess repeatability. Histogram features (eight features) and gray level co-occurrence matrix (GLCM) based texture features (34 features) were computed for all ROIs. For the lung nodule cases, the reference condition was selected to be 100% of clinical dose with FBP reconstruction using the B45f kernel; feature values calculated from the other conditions were compared to this reference condition. A measure, which the authors refer to as Q, was introduced to assess the stability of features across different conditions; it is defined as the ratio of reproducibility (across conditions) to repeatability (across repeat contours) of each feature. The water phantom results demonstrated substantial variability among feature values calculated across conditions, with the exception of histogram mean. Features calculated from lung nodules demonstrated similar results, with histogram mean as the most robust feature (Q ≤ 1), having a mean and standard deviation of Q of 0.37 and 0.22, respectively. Surprisingly, the histogram standard deviation and variance features were also quite robust. Some GLCM features were likewise robust across conditions, namely diff. variance, sum variance, sum average, variance, and mean. Except for histogram mean, all features have a Q larger than one in at least one of the 3% dose level conditions. As expected, the histogram mean was the most robust feature in this study. The effects of acquisition and reconstruction conditions on GLCM features vary widely, though features involving a summation of products between intensities and probabilities tend to be more robust, barring a few exceptions. Overall, care should be taken to account for variation in density and texture features if a variety of dose and reconstruction conditions are used for the quantification of lung nodules in CT; otherwise, changes in quantification results may reflect changes in acquisition and reconstruction conditions rather than in the nodule itself.
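To make the feature definitions concrete, the sketch below computes one GLCM texture feature on a reference patch and a noise-perturbed "condition" patch, then forms a Q-style stability ratio. The patches, noise model, and repeat-contour spread are synthetic assumptions; only the Q construction follows the abstract.

```python
# Sketch of a GLCM feature under two "conditions" and a Q-style ratio
# (reproducibility across conditions over repeatability across contours).
# Note: graycomatrix is spelled greycomatrix in older scikit-image versions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
ref = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)   # reference ROI
noisy = np.clip(ref + rng.integers(-4, 5, ref.shape), 0, 63).astype(np.uint8)

def glcm_contrast(patch):
    g = graycomatrix(patch, distances=[1], angles=[0], levels=64,
                     symmetric=True, normed=True)
    return graycoprops(g, "contrast")[0, 0]

f_ref, f_cond = glcm_contrast(ref), glcm_contrast(noisy)
repeat_sd = 0.05 * abs(f_ref)            # stand-in for repeat-contour spread
q = abs(f_cond - f_ref) / repeat_sd      # Q > 1: condition effect dominates
print(f"contrast {f_ref:.1f} -> {f_cond:.1f}, Q = {q:.2f}")
```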
Estimating nonrigid motion from inconsistent intensity with robust shape features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095
2013-12-15
Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.
Bifunctional redox tagging of carbon nanoparticles
NASA Astrophysics Data System (ADS)
Poon, Jeffrey; Batchelor-McAuley, Christopher; Tschulik, Kristina; Palgrave, Robert G.; Compton, Richard G.
2015-01-01
Despite extensive work on the controlled surface modification of carbon with redox moieties, to date almost all available methodologies involve complex chemistry and are prone to the formation of polymerized multi-layer surface structures. Herein, the facile bifunctional redox tagging of carbon nanoparticles (diameter 27 nm) and its characterization is undertaken using the industrial dye Reactive Blue 2. The modification route is demonstrated to be via exceptionally strong physisorption. The modified carbon is found to exhibit both well-defined oxidative and reductive voltammetric redox features which are quantitatively interpreted. The method provides a generic approach to monolayer modifications of carbon and carbon nanoparticle surfaces.
Mamun, A A; Shukla, P K
2009-09-01
Effects of the nonthermal distribution of electrons as well as the polarity of the net dust-charge number density on nonplanar (viz. cylindrical and spherical) dust-ion-acoustic solitary waves (DIASWs) are investigated by employing the reductive perturbation method. It is found that the basic features of the DIASWs are significantly modified by the effects of nonthermal electron distribution, polarity of net dust-charge number density, and nonplanar geometry. The implications of our results in some space and laboratory dusty plasma environments are briefly discussed.
Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Wei, Jun; Cha, Kenny
2016-01-01
Purpose: Develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volume using a deep convolutional neural network (DCNN) with transfer learning from mammograms. Methods: A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes was collected with IRB approval. The mass of interest on the images was marked by an experienced breast radiologist as the reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses). For DCNN training, the region of interest (ROI) containing the mass (true positive) was extracted from each image. False positive (FP) ROIs were identified at prescreening by their previously developed CAD systems. After data augmentation, a total of 45,072 mammographic ROIs and 37,450 DBT ROIs were obtained. Data normalization and reduction of non-uniformity in the ROIs across the heterogeneous data were achieved using a background correction method applied to each ROI. A DCNN with four convolutional layers and three fully connected (FC) layers was first trained on the mammography data. Jittering and dropout techniques were used to reduce overfitting. After training with the mammographic ROIs, all weights in the first three convolutional layers were frozen, and only the last convolutional layer and the FC layers were randomly initialized again and trained using the DBT training ROIs. The authors compared the performances of two CAD systems for mass detection in DBT: one used the DCNN-based approach and the other used their previously developed feature-based approach for FP reduction. The prescreening stage was identical in both systems, passing the same set of mass candidates to the FP reduction stage. For the feature-based CAD system, a 3D clustering and active contour method was used for segmentation; morphological, gray level, and texture features were extracted and merged with a linear discriminant classifier to score the detected masses. For the DCNN-based CAD system, ROIs from five consecutive slices centered at each candidate were passed through the trained DCNN and a mass likelihood score was generated. The performances of the CAD systems were evaluated using free-response ROC curves and the performance difference was analyzed using a non-parametric method. Results: Before transfer learning, the DCNN trained only on mammograms (AUC of 0.99) classified DBT masses with an AUC of 0.81 in the DBT training set. After transfer learning with DBT, the AUC improved to 0.90. For breast-based CAD detection in the test set, the sensitivity of the feature-based and the DCNN-based CAD systems was 83% and 91%, respectively, at 1 FP/DBT volume. The difference between the performances of the two systems was statistically significant (p-value < 0.05). Conclusions: The image patterns learned from the mammograms were transferred to mass detection on DBT slices through the DCNN. This study demonstrated that large data sets collected from mammography are useful for developing new CAD systems for DBT, alleviating the problem and effort of collecting entirely new large data sets for the new modality. PMID:27908154
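A hedged sketch of the freeze-and-retrain step follows: a four-conv, three-FC network in PyTorch has its first three convolutional layers frozen and its last convolutional layer re-initialized before DBT training. Layer widths, ROI size, and pooling are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of the transfer-learning step: freeze the early conv layers
# trained on mammograms, re-initialize and retrain only the last conv
# and the fully connected layers on DBT ROIs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MassDCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(1, 32, 3, padding=1),
            nn.Conv2d(32, 64, 3, padding=1),
            nn.Conv2d(64, 128, 3, padding=1),
            nn.Conv2d(128, 128, 3, padding=1),  # retrained on DBT
        ])
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        for conv in self.convs:
            x = F.max_pool2d(F.relu(conv(x)), 2)  # halves each spatial dim
        return self.fc(x)

net = MassDCNN()              # assume mammography-trained weights are loaded
for conv in net.convs[:3]:    # freeze the first three convolutional layers
    for p in conv.parameters():
        p.requires_grad = False
net.convs[3].reset_parameters()  # re-initialize the last conv for DBT

print(net(torch.randn(2, 1, 256, 256)).shape)  # mass likelihood logits
```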
Haptic computer-assisted patient-specific preoperative planning for orthopedic fractures surgery.
Kovler, I; Joskowicz, L; Weil, Y A; Khoury, A; Kronman, A; Mosheiff, R; Liebergall, M; Salavarrieta, J
2015-10-01
The aim of orthopedic trauma surgery is to restore the anatomy and function of displaced bone fragments to support osteosynthesis. For complex cases, including pelvic bone and multi-fragment femoral neck and distal radius fractures, preoperative planning with a CT scan is indicated. The planning consists of (1) fracture reduction-determining the locations and anatomical sites of origin of the fractured bone fragments and (2) fracture fixation-selecting and placing fixation screws and plates. The current bone fragment manipulation, hardware selection, and positioning processes based on 2D slices and a computer mouse are time-consuming and require a technician. We present a novel 3D haptic-based system for patient-specific preoperative planning of orthopedic fracture surgery based on CT scans. The system provides the surgeon with an interactive, intuitive, and comprehensive planning tool that supports fracture reduction and fixation. Its unique features include: (1) two-hand haptic manipulation of 3D bone fragments and fixation hardware models; (2) 3D stereoscopic visualization and multiple viewing modes; (3) ligament and pivot motion constraints to facilitate fracture reduction; (4) semiautomatic and automatic fracture reduction modes; and (5) interactive custom fixation plate creation to fit the bone morphology. We evaluate our system with two experimental studies: (1) accuracy and repeatability of manual fracture reduction and (2) accuracy of our automatic virtual bone fracture reduction method. The surgeons achieved a mean accuracy of less than 1 mm for the manual reduction and 1.8 mm (SD = 1.1 mm) for the automatic reduction. 3D haptic-based patient-specific preoperative planning of orthopedic fracture surgery from CT scans is useful and accurate and may have significant advantages for evaluating and planning complex fracture surgery.
White, James A P; Bond, Ian P; Jagger, Daryll C
2011-01-01
This study investigated how ribbed design features, including palatal rugae, may be used to significantly improve the structural performance of a maxillary denture under load. A computer-aided design model of a generic maxillary denture, incorporating various rib features, was created and imported into a finite element analysis program. The denture and ribbed features were assigned the material properties of standard denture acrylic resin, and load was applied in two different ways: the first simulating a three-point flexural bend of the posterior section and the second simulating loading of the entire palatal region. To investigate the combined use of ribbing and reinforcement, the same simulations were repeated with the ribbed features assigned a Young modulus two orders of magnitude greater than that of denture acrylic resin. For a prescribed load, the total displacements of tracking nodes were compared to those of a control denture (without ribbing) to assess relative denture rigidity. When subjected to flexural loading, an increase in rib depth resulted in a reduction of both the transverse displacement of the last molar and the vertical displacement at the centerline. However, ribbed features assigned the material properties of denture acrylic resin require a depth that may impinge on speech and bolus propulsion before significant improvements are observed. The use of ribbed features, when made from a significantly stiffer material (e.g., fiber-reinforced polymer) and designed to mimic palatal rugae, offers an acceptable method of providing significant improvements in rigidity to a maxillary denture under flexural load.
Cao, Peng; Liu, Xiaoli; Yang, Jinzhu; Zhao, Dazhe; Huang, Min; Zhang, Jian; Zaiane, Osmar
2017-12-01
Alzheimer's disease (AD) is not only a substantial financial burden on the health care system but also an emotional burden on patients and their families. Making an accurate diagnosis of AD based on brain magnetic resonance imaging (MRI) is becoming increasingly critical, especially at the earliest stages. However, high dimensionality and imbalanced data are two major challenges in the study of computer aided AD diagnosis. The greatest limitation of existing dimensionality reduction and over-sampling methods is that they assume a linear relationship between the MRI features (predictor) and the disease status (response). To better capture the complicated but more flexible relationship, we propose multi-kernel based dimensionality reduction and over-sampling approaches. We combined Marginal Fisher Analysis with ℓ2,1-norm based multi-kernel learning (MKMFA) to achieve sparsity over regions-of-interest (ROI), which leads to simultaneously selecting a subset of the relevant brain regions and learning a dimensionality transformation. Meanwhile, a multi-kernel over-sampling (MKOS) was developed to generate synthetic instances in the optimal kernel space induced by MKMFA, so as to compensate for the class-imbalanced distribution. We comprehensively evaluate the proposed models for diagnostic classification (binary and multi-class) on all subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The experimental results not only demonstrate that the proposed method has superior performance over multiple comparable methods, but also identify relevant imaging biomarkers that are consistent with prior medical knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
Climate balance of biogas upgrading systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pertl, A., E-mail: andreas.pertl@boku.ac.a; Mostbauer, P.; Obersteiner, G.
2010-01-15
One of the numerous applications of renewable energy is the use of upgraded biogas, fed into the gas grid where needed. The aim of the present study was to identify an upgrading scenario featuring minimum overall GHG emissions. The study was based on a life-cycle approach, taking into account GHG emissions from plant cultivation through to the process of energy conversion. For anaerobic digestion, two substrates were considered: (1) agricultural resources and (2) municipal organic waste. The study provides results for four different upgrading technologies, including the BABIU (Bottom Ash for Biogas Upgrading) method. As the transport of bottom ash is a critical factor in the BABIU method, different transport distances and means of conveyance (lorry, train) were considered. Furthermore, aspects including biogas compression and energy conversion in a combined heat and power plant were assessed. GHG emissions from a conventional energy supply system (natural gas) were estimated as a reference scenario. The main findings underline that the overall reduction of GHG emissions may be rather limited; for example, in an agricultural context the PSA scenarios emit only 10% less greenhouse gases than the reference scenario. The BABIU method constitutes an efficient upgrading method capable of attaining a high reduction of GHG emissions by sequestration of CO₂.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
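A minimal NumPy sketch of the two stages, under toy-data assumptions: QR decomposition of the d × k class-centroid matrix supplies the first-stage subspace, and classical LDA is then solved on the k-dimensional projections, where the scatter matrices are tiny and well conditioned.

```python
# Sketch of the LDA/QR idea: stage one applies QR to the small centroid
# matrix instead of PCA on all data; stage two solves LDA in that subspace.
import numpy as np

rng = np.random.default_rng(0)
k, d, n_per = 3, 100, 40                       # classes, dims, samples/class
X = np.vstack([rng.normal(loc=c, size=(n_per, d)) for c in range(k)])
y = np.repeat(np.arange(k), n_per)

C = np.stack([X[y == c].mean(axis=0) for c in range(k)], axis=1)  # d x k
Q, _ = np.linalg.qr(C)                         # stage 1: orthonormal basis
Z = X @ Q                                      # project to k dimensions

# Stage 2: classical LDA on the reduced data (scatter matrices are k x k).
mu = Z.mean(axis=0)
Sw = sum(np.cov(Z[y == c].T) * (n_per - 1) for c in range(k))
Sb = sum(n_per * np.outer(Z[y == c].mean(0) - mu,
                          Z[y == c].mean(0) - mu) for c in range(k))
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
idx = np.argsort(evals.real)[::-1][:k - 1]
W = evecs[:, idx].real
print((Z @ W).shape)                           # n x (k-1) discriminant space
```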
Silver nanoparticles toxicity against airborne strains of Staphylococcus spp.
Wolny-Koładka, Katarzyna A; Malina, Dagmara K
2017-11-10
The aim of this study was to assess the toxicity of silver nanoparticles (AgNPs), synthesized by a chemical reduction method, against airborne strains of Staphylococcus spp. The first step of the experiment was the preparation of a silver nanoparticle suspension. The suspension was obtained by a fast and simple chemical method involving the reduction of silver ions by a reducing agent in the presence of a suitable stabilizer required to prevent aggregation. In the second stage, various instrumental techniques were used to analyze and characterize the obtained nanostructures. Third, bacteria of the genus Staphylococcus were isolated from the air of stables housing 47 sport and recreational horses. Next, the isolated strains were identified using biochemical and spectrophotometric methods. The final step was the evaluation of the sensitivity of the Staphylococcus isolates to nanosilver using the disk diffusion test. The prepared silver nanoparticles were shown to exhibit strong antibacterial properties. The minimum inhibitory concentration for the tested isolates was 30 μg/mL. The sensitivity of the Staphylococcus spp. isolates, spanning six identified species, was found to differ considerably. The size distribution of bacterial growth inhibition zones indicates that resistance to various nanosilver concentrations is an individual strain feature and has no connection with membership in a specific species.
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.
Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. A feature matrix was constructed involving time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed using a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
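The wavelet-feature pipeline can be sketched with PyWavelets and scikit-learn: sub-band energies from a discrete wavelet decomposition form the feature matrix, and a logistic regression is scored by 10-fold cross-validation. The data are synthetic, and the paper's ROC-based rank selection is omitted for brevity.

```python
# Sketch: DWT sub-band energies as EEG features, classified by logistic
# regression under 10-fold CV. Wavelet and band choices are illustrative.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.normal(size=(64, 512))      # 64 one-channel EEG epochs
labels = rng.integers(0, 2, size=64)     # responder vs. non-responder

def wavelet_energies(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])  # one energy per band

X = np.vstack([wavelet_energies(e) for e in epochs])
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels,
                      cv=10, scoring="accuracy").mean()
print(f"10-fold CV accuracy: {acc:.2f}")
```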
Mizianty, Marcin J; Kurgan, Lukasz
2009-12-13
Knowledge of structural class is used by numerous methods for identification of structural/functional characteristics of proteins and could be used for the detection of remote homologues, particularly for chains that share twilight-zone similarity. In contrast to existing sequence-based structural class predictors, which target four major classes and which are designed for high identity sequences, we predict seven classes from sequences that share twilight-zone identity with the training sequences. The proposed MODular Approach to Structural class prediction (MODAS) method is unique as it allows for selection of any subset of the classes. MODAS is also the first to utilize a novel, custom-built feature-based sequence representation that combines evolutionary profiles and predicted secondary structure. The features quantify information relevant to the definition of the classes, including conservation of residues and the arrangement and number of helix/strand segments. Our comprehensive design considers 8 feature selection methods and 4 classifiers to develop Support Vector Machine-based classifiers that are tailored for each of the seven classes. Tests on 5 twilight-zone and 1 high-similarity benchmark datasets and comparison with over two dozen modern competing predictors show that MODAS provides the best overall accuracy, ranging between 80% and 96.7% (83.5% for the twilight-zone datasets), depending on the dataset. This translates into 19% and 8% error rate reductions when compared against the best performing competing methods on the two largest datasets. The proposed predictor provides accurate predictions at 58% accuracy for the membrane proteins class, which is not considered by the majority of existing methods, despite the fact that this class accounts for only 2% of the data. Our predictive model is analyzed to demonstrate how and why the input features are associated with the corresponding classes. The improved predictions stem from the novel features that express collocation of the secondary structure segments in the protein sequence and that combine evolutionary and secondary structure information. Our work demonstrates that conservation and arrangement of the secondary structure segments predicted along the protein chain can successfully predict structural classes which are defined based on the spatial arrangement of the secondary structures. A web server is available at http://biomine.ece.ualberta.ca/MODAS/.
2009-01-01
Background Knowledge of structural class is used by numerous methods for identification of structural/functional characteristics of proteins and could be used for the detection of remote homologues, particularly for chains that share twilight-zone similarity. In contrast to existing sequence-based structural class predictors, which target four major classes and which are designed for high identity sequences, we predict seven classes from sequences that share twilight-zone identity with the training sequences. Results The proposed MODular Approach to Structural class prediction (MODAS) method is unique as it allows for selection of any subset of the classes. MODAS is also the first to utilize a novel, custom-built feature-based sequence representation that combines evolutionary profiles and predicted secondary structure. The features quantify information relevant to the definition of the classes, including conservation of residues and the arrangement and number of helix/strand segments. Our comprehensive design considers 8 feature selection methods and 4 classifiers to develop Support Vector Machine-based classifiers that are tailored for each of the seven classes. Tests on 5 twilight-zone and 1 high-similarity benchmark datasets and comparison with over two dozen modern competing predictors show that MODAS provides the best overall accuracy, ranging between 80% and 96.7% (83.5% for the twilight-zone datasets), depending on the dataset. This translates into 19% and 8% error rate reductions when compared against the best performing competing methods on the two largest datasets. The proposed predictor provides accurate predictions at 58% accuracy for the membrane proteins class, which is not considered by the majority of existing methods, despite the fact that this class accounts for only 2% of the data. Our predictive model is analyzed to demonstrate how and why the input features are associated with the corresponding classes. Conclusions The improved predictions stem from the novel features that express collocation of the secondary structure segments in the protein sequence and that combine evolutionary and secondary structure information. Our work demonstrates that conservation and arrangement of the secondary structure segments predicted along the protein chain can successfully predict structural classes which are defined based on the spatial arrangement of the secondary structures. A web server is available at http://biomine.ece.ualberta.ca/MODAS/. PMID:20003388
Saravanan, K; Panigrahi, B K; Suresh, K; Sundaravel, B; Magudapathy, P; Gupta, Mukul
2018-08-24
The ion beam irradiation technique is proposed as an efficient, fast and eco-friendly alternative to conventional methods for the reduction of graphene oxide (GO). A 5 MeV Au+ ion beam was used to reduce free-standing GO flakes. Both electronic and nuclear energy loss mechanisms of the irradiation process play a major role in the removal of oxygen moieties and the recovery of the graphene network. Atomic-resolution scanning tunnelling microscopy analysis of the irradiated GO flake shows the characteristic honeycomb structure of graphene. X-ray absorption near edge structure analysis at the C K-edge reveals that the features of the irradiated GO flake resemble few-layer graphene. Resonant Rutherford backscattering spectrometry analysis evidenced an enhanced C/O ratio of ~23 in the irradiated GO. In situ sheet resistance measurements exhibit a sharp decrease in resistance (to a few hundred Ω) at a fluence of 6.5 × 10¹⁴ ions cm⁻². Photoluminescence spectroscopic analysis of the irradiated GO shows a sharp blue emission, while pristine GO exhibits a broad emission in the visible-near IR region. Region-selective reduction and the tunable electrical and optical properties achieved by controlling the C/O ratio make ion irradiation a versatile tool for the green reduction of GO for diverse applications.
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications, and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to yielding large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Building on Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
Zhang, Ming; Xu, Yan; Li, Lei; Liu, Zi; Yang, Xibei; Yu, Dong-Jun
2018-06-01
RNA 5-methylcytosine (m5C) is an important post-transcriptional modification that plays an indispensable role in biological processes. The accurate identification of m5C sites from primary RNA sequences is especially useful for deeply understanding the mechanisms and functions of m5C. Due to the difficulty and expense of identifying m5C sites with wet-lab techniques, developing fast and accurate machine-learning-based prediction methods is urgently needed. In this study, we proposed a new m5C site predictor, called M5C-HPCR, by introducing a novel heuristic nucleotide physicochemical property reduction (HPCR) algorithm and a classifier ensemble. HPCR extracts multiple reducts of physical-chemical properties for encoding discriminative features, while the classifier ensemble integrates multiple base predictors, each of which is trained on a separate reduct of the physical-chemical properties obtained from HPCR. Rigorous jackknife tests on two benchmark datasets demonstrate that M5C-HPCR outperforms state-of-the-art m5C site predictors, with the highest values of MCC (0.859) and AUC (0.962). We also implemented the webserver of M5C-HPCR, which is freely available at http://cslab.just.edu.cn:8080/M5C-HPCR/. Copyright © 2018 Elsevier Inc. All rights reserved.
Salvage of failed protein targets by reductive alkylation.
Tan, Kemin; Kim, Youngchang; Hatzos-Skintges, Catherine; Chang, Changsoo; Cuff, Marianne; Chhor, Gekleng; Osipiuk, Jerzy; Michalska, Karolina; Nocek, Boguslaw; An, Hao; Babnigg, Gyorgy; Bigelow, Lance; Joachimiak, Grazyna; Li, Hui; Mack, Jamey; Makowska-Grzyska, Magdalena; Maltseva, Natalia; Mulligan, Rory; Tesar, Christine; Zhou, Min; Joachimiak, Andrzej
2014-01-01
The growth of diffraction-quality single crystals is of primary importance in protein X-ray crystallography. Chemical modification of proteins can alter their surface properties and crystallization behavior. The Midwest Center for Structural Genomics (MCSG) has previously reported how reductive methylation of lysine residues in proteins can improve crystallization of unique proteins that initially failed to produce diffraction-quality crystals. Recently, this approach has been expanded to include ethylation and isopropylation in the MCSG protein crystallization pipeline. Applying standard methods, 180 unique proteins were alkylated and screened using standard crystallization procedures. Crystal structures of 12 new proteins were determined, including the first ethylated and the first isopropylated protein structures. In a few cases, the structures of native and methylated or ethylated states were obtained and the impact of reductive alkylation of lysine residues was assessed. Reductive methylation tends to be more efficient and produces the most alkylated protein structures. Structures of methylated proteins typically have higher resolution limits. A number of well-ordered alkylated lysine residues have been identified, which make both intermolecular and intramolecular contacts. The previous report is updated and complemented with the following new data: a description of a detailed alkylation protocol with results, structural features, and roles of alkylated lysine residues in protein crystals. These contribute to improved crystallization properties of some proteins.
Salvage of Failed Protein Targets by Reductive Alkylation
Tan, Kemin; Kim, Youngchang; Hatzos-Skintges, Catherine; Chang, Changsoo; Cuff, Marianne; Chhor, Gekleng; Osipiuk, Jerzy; Michalska, Karolina; Nocek, Boguslaw; An, Hao; Babnigg, Gyorgy; Bigelow, Lance; Joachimiak, Grazyna; Li, Hui; Mack, Jamey; Makowska-Grzyska, Magdalena; Maltseva, Natalia; Mulligan, Rory; Tesar, Christine; Zhou, Min; Joachimiak, Andrzej
2014-01-01
The growth of diffraction-quality single crystals is of primary importance in protein X-ray crystallography. Chemical modification of proteins can alter their surface properties and crystallization behavior. The Midwest Center for Structural Genomics (MCSG) has previously reported how reductive methylation of lysine residues in proteins can improve crystallization of unique proteins that initially failed to produce diffraction-quality crystals. Recently, this approach has been expanded to include ethylation and isopropylation in the MCSG protein crystallization pipeline. Applying standard methods, 180 unique proteins were alkylated and screened using standard crystallization procedures. Crystal structures of 12 new proteins were determined, including the first ethylated and the first isopropylated protein structures. In a few cases, the structures of native and methylated or ethylated states were obtained and the impact of reductive alkylation of lysine residues was assessed. Reductive methylation tends to be more efficient and produces the most alkylated protein structures. Structures of methylated proteins typically have higher resolution limits. A number of well-ordered alkylated lysine residues have been identified, which make both intermolecular and intramolecular contacts. The previous report is updated and complemented with the following new data: a description of a detailed alkylation protocol with results, structural features, and roles of alkylated lysine residues in protein crystals. These contribute to improved crystallization properties of some proteins. PMID:24590719
NASA Astrophysics Data System (ADS)
Choi, Jae Young; Kim, Dae Hoe; Choi, Seon Hyeong; Ro, Yong Man
2012-03-01
We investigated the feasibility of using multiresolution Local Binary Pattern (LBP) texture analysis to reduce false-positive (FP) detections in a computerized mass detection framework. A novel approach for extracting LBP features is devised to differentiate masses from normal breast tissue on mammograms. In particular, to characterize the LBP texture patterns of the boundaries of masses, as well as to preserve the spatial structure pattern of the masses, two individual LBP texture patterns are extracted from the core region and the surrounding ribbon region of each ROI, respectively. These two texture patterns are combined to produce the so-called multiresolution LBP feature of a given ROI. The proposed LBP texture analysis of the information in the mass core region and its margin has clearly proven to be significant and is not sensitive to the precise location of the mass boundaries. In this study, 89 mammograms were collected from the public MIAS database (DB). To perform a more realistic assessment of the FP reduction process, the LBP texture analysis was applied directly to a total of 1,693 regions of interest (ROIs) automatically segmented by a computer algorithm. A Support Vector Machine (SVM) was applied for the classification of mass ROIs versus ROIs containing normal tissue. Receiver Operating Characteristic (ROC) analysis was conducted to evaluate the classification accuracy and its improvement using multiresolution LBP features. With multiresolution LBP features, the classifier achieved an average area under the ROC curve, Az, of 0.956 during testing. In addition, the proposed LBP features outperform other state-of-the-art features designed for false-positive reduction.
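The two-region idea can be sketched as follows: uniform LBP codes are histogrammed separately over a core disk and a surrounding ribbon, and the histograms are concatenated. The ROI, region radii, and LBP parameters are illustrative assumptions, not the paper's settings.

```python
# Sketch of core + ribbon LBP histograms concatenated into one feature.
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
core, ribbon = r < 16, (r >= 16) & (r < 28)     # mass core vs. margin band

P, R = 8, 1
lbp = local_binary_pattern(roi, P, R, method="uniform")

def region_hist(mask):
    # 'uniform' LBP with P points yields P + 2 distinct code values.
    h, _ = np.histogram(lbp[mask], bins=P + 2, range=(0, P + 2), density=True)
    return h

feature = np.concatenate([region_hist(core), region_hist(ribbon)])
print(feature.shape)                            # (20,) combined LBP feature
```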
An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.
Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan
2015-01-01
A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, the fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as the basic components. First, the fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensionality of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. Finally, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, the LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel, and k-fold stratified cross-validation is applied to enhance the generalization of the system. The method was tested on benchmark datasets of 340 patients' T1-weighted and T2-weighted scans. From the analysis of the experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from individual subjects; therefore, it can be used as a significant tool in clinical practice.
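A compact sketch of the DWT then PCA then classifier chain follows, on random stand-ins for MR slices; scikit-learn's RBF-kernel SVC is substituted for the LS-SVM, and the wavelet, component count, and image sizes are assumptions.

```python
# Sketch: wavelet approximation coefficients -> PCA -> RBF-kernel SVM.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
images = rng.normal(size=(120, 64, 64))       # stand-in MR slices
labels = rng.integers(0, 2, size=120)         # normal vs. abnormal

def dwt_features(img):
    cA, _ = pywt.dwt2(img, "haar")            # keep the approximation sub-band
    return cA.ravel()

X = np.vstack([dwt_features(im) for im in images])
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", gamma="scale"))
print(cross_val_score(clf, X, labels, cv=5).mean())
```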
Vyas, Shilpa; Le, Yi; Zhang, Zhe; Armour, Woody
2015-01-01
Purpose Several robotic delivery systems for prostate brachytherapy are under development or in pre-clinical testing. One feature of robotic brachytherapy is the ability to vary the spacing of needles at non-fixed intervals. This feature may play an important role in prostate brachytherapy, which is traditionally template-based with a fixed needle spacing of 0.5 cm. We sought to quantify potential reductions in the dose to the urethra and rectum by utilizing variable needle spacing, as compared to fixed needle spacing. Material and methods Transrectal ultrasound images from 10 patients were used by 3 experienced planners to create 120 treatment plans. Each planner created 4 plan variations per patient with respect to needle positions: 125I fixed spacing, 125I variable spacing, 103Pd fixed spacing, and 103Pd variable spacing. The primary planning objective was to achieve a prostate V100 of 100% while minimizing dose to the urethra and rectum. Results All plans met the objective of achieving a prostate V100 of 100%. Combined results for all plans show statistically significant improvements in all assessed dosimetric variables for the urethra (Umax, Umean, D30, D5) and rectum (Rmax, Rmean, RV100) when using variable spacing. The dose reductions for mean and maximum urethra dose using variable spacing had p values of 0.011 and 0.024 with 103Pd, and 0.007 and 0.029 with 125I plans. Similarly, dose reductions for mean and maximum rectal dose using variable spacing had p values of 0.007 and 0.052 with 103Pd, and 0.012 and 0.037 with 125I plans. Conclusions The variable needle spacing achievable by the use of robotics in prostate brachytherapy allows for reductions in both urethral and rectal planned doses while maintaining prostate dose coverage. Such dosimetric advantages have the potential to translate into significant clinical benefits with the use of robotic brachytherapy. PMID:26622227
METHANOGENESIS AND SULFATE REDUCTION IN CHEMOSTATS: II. MODEL DEVELOPMENT AND VERIFICATION
A comprehensive dynamic model is presented that simulates methanogenesis and sulfate reduction in a continuously stirred tank reactor (CSTR). This model incorporates the complex chemistry of anaerobic systems. A salient feature of the model is its ability to predict the effluent ...
Energy efficient engine component development and integration program
NASA Technical Reports Server (NTRS)
1982-01-01
The objective of the Energy Efficient Engine Component Development and Integration program is to develop, evaluate, and demonstrate the technology for achieving lower installed fuel consumption and lower operating costs in future commercial turbofan engines. Minimum goals have been set for a 12 percent reduction in thrust specific fuel consumption (TSFC), a 5 percent reduction in direct operating cost (DOC), and a 50 percent reduction in performance degradation for the Energy Efficient Engine (flight propulsion system) relative to the JT9D-7A reference engine. The Energy Efficient Engine features a twin-spool, direct-drive, mixed-flow exhaust configuration utilizing an integrated engine nacelle structure. A short, stiff, high rotor and a single-stage high-pressure turbine are among the major enhancements providing for both performance retention and major reductions in maintenance and direct operating costs. Improved clearance control in the high-pressure compressor and turbines, and advanced single-crystal materials in turbine blades and vanes, are among the major features providing performance improvement. Highlights of work accomplished and program modifications and deletions are presented.
Design and Implementation of Sound Searching Robots in Wireless Sensor Networks
Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao
2016-01-01
A sound target-searching robot system which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information is described. It has embedded sound signal enhancement, recognition, and location methods, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN together with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. As the feature of the audio signal, the Mel-frequency cepstral coefficient (MFCC) is extracted. Based on the K-nearest neighbor classification method, we match the trained feature template to recognize the sound signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometric position of the microphone array. A new mapping has been proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well. PMID:27657088
Design and Implementation of Sound Searching Robots in Wireless Sensor Networks.
Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao
2016-09-21
A sound target-searching robot system which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information is described. It has embedded sound signal enhancement, recognition, and location methods, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN together with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. As the feature of the audio signal, the Mel-frequency cepstral coefficient (MFCC) is extracted. Based on the K-nearest neighbor classification method, we match the trained feature template to recognize the sound signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometric position of the microphone array. A new mapping has been proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well.
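The TDOA step can be illustrated with a generalized cross-correlation using the common PHAT weighting, sketched below for one microphone pair on synthetic signals; the sampling rate and delay are arbitrary, and the paper's own improvement to GCC is not reproduced.

```python
# Sketch of a generalized cross-correlation (PHAT-weighted) TDOA estimate.
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Delay of x relative to y, in seconds (positive: x lags y)."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = X * np.conj(Y)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)   # PHAT weighting
    lags = np.concatenate((cc[-n // 2:], cc[:n // 2]))
    return (np.argmax(lags) - n // 2) / fs

fs = 16000
sig = np.random.default_rng(0).normal(size=2048)
delayed = np.roll(sig, 40)                          # 40-sample true delay
print(gcc_phat_delay(delayed, sig, fs) * fs)        # ~40 samples
```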
A review on machine learning principles for multi-view biological data integration.
Li, Yifeng; Wu, Fang-Xiang; Ngom, Alioune
2018-03-01
Driven by high-throughput sequencing techniques, modern genomic and clinical studies are in strong need of integrative machine learning models for better use of vast volumes of heterogeneous information in the deep understanding of biological systems and the development of predictive models. How data from multiple sources (called multi-view data) are incorporated in a learning system is a key step for successful analysis. In this article, we provide a comprehensive review of omics and clinical data integration techniques, from a machine learning perspective, for various analyses such as prediction, clustering, dimension reduction, and association. We show that Bayesian models are able to use prior information and model measurements with various distributions; tree-based methods can either build a tree with all features or collectively make a final decision based on trees learned from each view; kernel methods fuse the similarity matrices learned from individual views into a final similarity matrix or learning model; network-based fusion methods are capable of inferring direct and indirect associations in a heterogeneous network; matrix factorization models have the potential to learn interactions among features from different views; and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanisms of biological systems.
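Of the method families surveyed, kernel fusion is the easiest to caricature: compute one similarity matrix per view and combine them before clustering or learning. The sketch below is a generic toy, not any specific method from the review; the data shapes and equal weights are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

def fuse_views(views, weights=None):
    """Combine per-view RBF similarity matrices into one fused kernel."""
    kernels = [rbf_kernel(X) for X in views]   # one (n x n) similarity per view
    weights = weights or [1.0 / len(kernels)] * len(kernels)
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 50))    # hypothetical expression view
methyl = rng.normal(size=(100, 30))  # hypothetical methylation view
K = fuse_views([expr, methyl])
labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(K)
```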
New solutions and applications of 3D computer tomography image processing
NASA Astrophysics Data System (ADS)
Effenberger, Ira; Kroll, Julia; Verl, Alexander
2008-02-01
As industry today aims at fast, high-quality product development and manufacturing processes, modern and efficient quality inspection is essential. Compared to conventional measurement technologies, industrial computer tomography (CT) is a non-destructive technology for 3D image data acquisition that overcomes many of their limitations by offering the possibility to scan complex parts with all outer and inner geometric features. In this paper new and optimized methods for 3D image processing, including innovative ways of surface reconstruction and automatic geometric feature detection of complex components, are presented, especially our work on smart online data processing and data handling methods with integrated intelligent online mesh reduction, which guarantees the processing of huge, high-resolution data sets. Besides, new approaches for surface reconstruction and segmentation based on statistical methods are demonstrated. On the extracted 3D point cloud or surface triangulation, automated and precise algorithms for geometric inspection are deployed. All algorithms are applied to different real data sets generated by computer tomography in order to demonstrate the capabilities of the new tools. Since CT is an emerging technology for non-destructive testing and inspection, more and more industrial application fields will use and profit from it.
Estimating nonrigid motion from inconsistent intensity with robust shape features.
Liu, Wenyang; Ruan, Dan
2013-12-01
To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation are being performed.
Differential correlation for sequencing data.
Siska, Charlotte; Kechris, Katerina
2017-01-19
Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and faster implementation, using a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
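Discordant's actual machinery is a mixture model over correlations fitted by EM; as a far simpler, hypothetical baseline for the same question, one can compare Fisher-transformed Spearman correlations of a feature pair between two conditions (the standard error below is the usual Pearson approximation, a crude stand-in):

```python
import numpy as np
from scipy.stats import spearmanr, norm

def fisher_z(r, n):
    """Fisher z-transform with its approximate standard error."""
    return np.arctanh(r), 1.0 / np.sqrt(n - 3)

def dc_test(x1, y1, x2, y2):
    """Two-sample z-test for a difference in Spearman correlation
    between condition 1 and condition 2."""
    r1, _ = spearmanr(x1, y1)
    r2, _ = spearmanr(x2, y2)
    z1, se1 = fisher_z(r1, len(x1))
    z2, se2 = fisher_z(r2, len(x2))
    z = (z1 - z2) / np.hypot(se1, se2)
    return r1, r2, 2 * norm.sf(abs(z))   # correlations and two-sided p-value
```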
Samala, Ravi K; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A; Wei, Jun; Cha, Kenny
2016-12-01
Develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volume using a deep convolutional neural network (DCNN) with transfer learning from mammograms. A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes was collected with IRB approval. The mass of interest on the images was marked by an experienced breast radiologist as reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses). For DCNN training, the region of interest (ROI) containing the mass (true positive) was extracted from each image. False positive (FP) ROIs were identified at prescreening by their previously developed CAD systems. After data augmentation, a total of 45,072 mammographic ROIs and 37,450 DBT ROIs were obtained. Data normalization and reduction of non-uniformity in the ROIs across heterogeneous data was achieved using a background correction method applied to each ROI. A DCNN with four convolutional layers and three fully connected (FC) layers was first trained on the mammography data. Jittering and dropout techniques were used to reduce overfitting. After training with the mammographic ROIs, all weights in the first three convolutional layers were frozen, and only the last convolution layer and the FC layers were randomly initialized again and trained using the DBT training ROIs. The authors compared the performances of two CAD systems for mass detection in DBT: one used the DCNN-based approach and the other used their previously developed feature-based approach for FP reduction. The prescreening stage was identical in both systems, passing the same set of mass candidates to the FP reduction stage. For the feature-based CAD system, a 3D clustering and active contour method was used for segmentation; morphological, gray level, and texture features were extracted and merged with a linear discriminant classifier to score the detected masses. For the DCNN-based CAD system, ROIs from five consecutive slices centered at each candidate were passed through the trained DCNN and a mass likelihood score was generated. The performances of the CAD systems were evaluated using free-response ROC curves and the performance difference was analyzed using a non-parametric method. Before transfer learning, the DCNN trained only on mammograms with an AUC of 0.99 classified DBT masses with an AUC of 0.81 in the DBT training set. After transfer learning with DBT, the AUC improved to 0.90. For breast-based CAD detection in the test set, the sensitivity for the feature-based and the DCNN-based CAD systems was 83% and 91%, respectively, at 1 FP/DBT volume. The difference between the performances for the two systems was statistically significant (p-value < 0.05). The image patterns learned from the mammograms were transferred to the mass detection on DBT slices through the DCNN. This study demonstrated that large data sets collected from mammography are useful for developing new CAD systems for DBT, alleviating the problem and effort of collecting entirely new large data sets for the new modality.
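The freeze-and-retrain step generalizes beyond this study; a minimal PyTorch sketch follows, with an illustrative four-conv/three-FC architecture standing in for the authors' unpublished details (input size, channel counts, and layer widths are assumptions):

```python
import torch.nn as nn

class SmallDCNN(nn.Module):
    """Illustrative 4-conv / 3-FC network for 1 x 128 x 128 ROIs."""
    def __init__(self):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.fc(self.convs(x))

model = SmallDCNN()  # pretrained mammography weights would be loaded here (assumed)
# Freeze the first three conv blocks; the last conv layer and the FC layers
# remain trainable (and would be re-initialized) for the DBT data.
for layer in list(model.convs.children())[:9]:   # conv blocks 1-3 incl. ReLU/pool
    for p in layer.parameters():
        p.requires_grad = False
```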
Hasnain, Zaki; Li, Ming; Dorff, Tanya; Quinn, David; Ueno, Naoto T; Yennu, Sriram; Kolatkar, Anand; Shahabi, Cyrus; Nocera, Luciano; Nieva, Jorge; Kuhn, Peter; Newton, Paul K
2018-05-18
Biomechanical characterization of human performance with respect to fatigue and fitness is relevant in many settings; however, it is usually limited either to fully qualitative assessments or to invasive methods that require a significant experimental setup of numerous sensors, force plates, and motion detectors. Qualitative assessments are difficult to standardize because of their intrinsically subjective nature, while invasive methods provide reliable metrics but are not feasible for large-scale applications. Presented here is a dynamical toolset for detecting performance groups using a non-invasive system based on the Microsoft Kinect motion capture sensor, and a case study of 37 cancer patients performing two clinically monitored tasks before and after therapy regimens. Dynamical features are extracted from the motion time series data and evaluated based on their ability to (i) cluster patients into coherent fitness groups using unsupervised learning algorithms and (ii) predict Eastern Cooperative Oncology Group (ECOG) performance status via supervised learning. The unsupervised patient clustering is comparable to clustering based on physician-assigned ECOG status in that both have similar concordance with change in weight before and after therapy as well as with unexpected hospitalizations throughout the study. The extracted dynamical features can predict physician, coordinator, and patient ECOG status with an accuracy of approximately 80%. The non-invasive Microsoft Kinect sensor and the proposed dynamical toolset, comprising data preprocessing, feature extraction, dimensionality reduction, and machine learning, offer a low-cost and general method for performance segregation and can complement existing qualitative clinical assessments.
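The unsupervised arm of such a pipeline can be sketched in a few lines; the feature counts and shapes below are hypothetical stand-ins for the extracted dynamical features:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical per-patient dynamical features extracted from Kinect
# joint trajectories (e.g., sway amplitude, cadence, spectral power).
features = np.random.default_rng(1).normal(size=(37, 24))

X = StandardScaler().fit_transform(features)              # preprocessing
Z = PCA(n_components=3).fit_transform(X)                  # dimensionality reduction
groups = KMeans(n_clusters=2, n_init=10).fit_predict(Z)   # fitness groups
```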
Investigation of Error Patterns in Geographical Databases
NASA Technical Reports Server (NTRS)
Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)
2002-01-01
The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic vision system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geological Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternate methods in place of statistical methods in similar problems. They often perform better in pattern recognition, prediction, and classification and categorization problems. Many studies show that when the data is complex and noisy, the accuracy of ANN models is generally higher than that of comparable traditional methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levakhina, Y. M.; Mueller, J.; Buzug, T. M.
Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets have been acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains artifact-causing features. Artifact distribution in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate reduction of out-of-focus artifacts, lower STD (meaning reduction of artifacts), and narrower ASF compared to nonweighted SART reconstruction. This is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone. Conclusions: The results indicate the feasibility of the proposed algorithm to reduce typical tomosynthesis artifacts produced by high-attenuation features. The proposed algorithm assigns weighting coefficients automatically and no segmentation or tissue-classification steps are required. The algorithm can be included in various iterative reconstruction algorithms with an additive updating strategy. It can also be extended to the computed tomography case with a complete set of angular data.
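In generic SART notation (not the authors' own), a per-view, per-voxel weight w between 0 and 1 would damp the update contributed by projection view theta to voxel j roughly as:

```latex
x_j^{(k+1)} = x_j^{(k)} + \lambda \, w_{\theta j} \,
\frac{\sum_{i \in P_\theta} a_{ij}
      \left( p_i - \sum_l a_{il} x_l^{(k)} \right) \big/ \sum_l a_{il}}
     {\sum_{i \in P_\theta} a_{ij}}
```

Here a_ij are system matrix entries, p_i the measured ray sums of view P_theta, and lambda the relaxation factor; the dissimilarity-based computation of w in backprojected space is the paper's contribution and is not reproduced here.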
Temporal and spatial adaptation of transient responses to local features
O'Carroll, David C.; Barnett, Paul D.; Nordström, Karin
2012-01-01
Interpreting visual motion within the natural environment is a challenging task, particularly considering that natural scenes vary enormously in brightness, contrast and spatial structure. The performance of current models for the detection of self-generated optic flow depends critically on these very parameters, but despite this, animals manage to successfully navigate within a broad range of scenes. Within global scenes local areas with more salient features are common. Recent work has highlighted the influence that local, salient features have on the encoding of optic flow, but it has been difficult to quantify how local transient responses affect responses to subsequent features and thus contribute to the global neural response. To investigate this in more detail we used experimenter-designed stimuli and recorded intracellularly from motion-sensitive neurons. We limited the stimulus to a small vertically elongated strip, to investigate local and global neural responses to pairs of local “doublet” features that were designed to interact with each other in the temporal and spatial domain. We show that the passage of a high-contrast doublet feature produces a complex transient response from local motion detectors consistent with predictions of a simple computational model. In the neuron, the passage of a high-contrast feature induces a local reduction in responses to subsequent low-contrast features. However, this neural contrast gain reduction appears to be recruited only when features stretch vertically (i.e., orthogonal to the direction of motion) across at least several aligned neighboring ommatidia. Horizontal displacement of the components of elongated features abolishes the local adaptation effect. It is thus likely that features in natural scenes with vertically aligned edges, such as tree trunks, recruit the greatest amount of response suppression. This property could emphasize the local responses to such features vs. those in nearby texture within the scene. PMID:23087617
Image superresolution by midfrequency sparse representation and total variation regularization
NASA Astrophysics Data System (ADS)
Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi
2015-01-01
Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several aspects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by existing methods are not clear enough. This work makes the following contributions. First, we propose a method to extract midfrequency features for dictionary learning. This method reduces the memory and time complexity without sacrificing performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, it is experimentally shown that the proposed method is more efficient for processing higher-dimensional data, with good robustness, stability, and superior time performance.
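Written out in the terms the abstract uses (the trade-off constant lambda is an assumed ingredient), the PSVD model seeks a low-rank approximation L of the data matrix X:

```latex
\min_{L}\; \| X - L \|_{p}^{p} \;+\; \lambda \, \| L \|_{S_p}^{p},
\qquad
\| M \|_{p}^{p} = \sum_{i,j} |m_{ij}|^{p},
\qquad
\| M \|_{S_p}^{p} = \sum_{k} \sigma_k(M)^{p},
```

where sigma_k(M) are the singular values of M; choosing 0 < p <= 1 makes the error term robust to outliers and the regularizer a tighter surrogate for low rank than the nuclear norm.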
DOE Office of Scientific and Technical Information (OSTI.GOV)
I. W. Ginsberg
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
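The two fingerprinting routes being compared can be sketched as follows; the filter scales and wavelet choice are assumptions, not details of the HYDICE processing:

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter1d

def gaussian_fingerprint(spectrum, scales=(1, 2, 4, 8, 16)):
    """Current method: convolve with first-derivative Gaussians
    at several scales and stack the responses."""
    return np.stack([gaussian_filter1d(spectrum, s, order=1) for s in scales])

def wavelet_fingerprint(spectrum, wavelet="db2", levels=5):
    """Wavelet route: multiresolution detail coefficients via the
    fast discrete wavelet transform (O(n), versus repeated convolutions)."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=levels)
    return coeffs[1:]   # detail coefficients, coarse to fine

spectrum = np.random.default_rng(2).normal(size=256)  # stand-in signature
fp_conv = gaussian_fingerprint(spectrum)
fp_wave = wavelet_fingerprint(spectrum)
```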
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing
2017-11-01
The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, a majority voting mechanism is implemented to generate the final output. Extensive experiments prove that the proposed method achieves competitive performance compared with several traditional approaches.
Tigges, P; Kathmann, N; Engel, R R
1997-07-01
Though artificial neural networks (ANNs) are excellent tools for pattern recognition problems when the signal-to-noise ratio is low, the identification of decision-relevant features for ANN input data is still a crucial issue. The experience of the ANN designer and the existing knowledge and understanding of the problem seem to be the only guides for a specific design. In the present study a backpropagation ANN based on modified raw data inputs showed encouraging results. Investigating the specific influences of prototypical input patterns on a specially designed ANN led to a new sparse and efficient input data presentation. This data coding, obtained by a semiautomatic procedure combining existing expert knowledge and the internal representation structures of the raw-data-based ANN, yielded a list of feature vectors, each representing the relevant information for saccade identification. The feature-based ANN produced a reduction of the error rate of nearly 40% compared with the raw data ANN. An overall correct classification of 92% of previously unseen data was realized. The proposed method of extracting internal ANN knowledge for the production of a better input data representation is not restricted to EOG recordings, and could be used in various fields of signal analysis.
Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas
2015-06-30
We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations, for distinguishing 49 male patients with first-episode schizophrenia from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; that mMLDA with a classification boundary calculated as the weighted mean discriminative score of the groups had improved sensitivity but similar accuracy compared to the original MLDA; and that reducing the number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as the signal important for classification were removed. Our findings provide important information for schizophrenia research and may improve the accuracy of computer-aided diagnostics of neuropsychiatric diseases.
NASA Astrophysics Data System (ADS)
Frikha, Mayssa; Fendri, Emna; Hammami, Mohamed
2017-09-01
Using semantic attributes such as gender, clothes, and accessories to describe people's appearance is an appealing modeling method for video surveillance applications. We propose a midlevel appearance signature based on extracting a list of nameable semantic attributes describing the body in uncontrolled acquisition conditions. Conventional approaches extract the same set of low-level features to learn the semantic classifiers uniformly. Their critical limitation is the inability to capture the dominant visual characteristics of each trait separately. The proposed approach extracts low-level features in an attribute-adaptive way by automatically selecting the most relevant features for each attribute separately. Furthermore, relying on a small training dataset easily leads to poor performance due to the large intraclass and interclass variations. We therefore annotated large-scale people images collected from different person reidentification benchmarks, covering a large attribute sample and reflecting the challenges of uncontrolled acquisition conditions. These annotations were gathered into an appearance semantic attribute dataset that contains 3590 images annotated with 14 attributes. Various experiments prove that carefully designed features for learning the visual characteristics of an attribute improve the correct classification accuracy and reduce both spatial and temporal complexity compared with state-of-the-art approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, A; Veeraraghavan, H; Oh, J
Purpose: To present an open source and free platform to facilitate radiomics research: the “Radiomics toolbox” in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various kinds of image modalities like CT, PET, MR, SPECT, and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features like first-order statistics, gray-scale co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are implemented in Matlab for ease of development and readability of code for a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of this toolbox, for example, the Java-based DCM4CHE for import of DICOM and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source software under a GNU license. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC. That analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
Dioxygen Binding, Activation, and Reduction to H2O by Cu Enzymes.
Solomon, Edward I
2016-07-05
Oxygen intermediates in copper enzymes exhibit unique spectroscopic features that reflect novel geometric and electronic structures that are key to reactivity. This perspective will describe: (1) the bonding origin of the unique spectroscopic features of the coupled binuclear copper enzymes and how this overcomes the spin forbiddenness of O2 binding and activates monooxygenase activity, (2) how the difference in exchange coupling in the non-coupled binuclear Cu enzymes controls the reaction mechanism, and (3) how the trinuclear Cu cluster present in the multicopper oxidases leads to a major structure/function difference in enabling the irreversible reductive cleavage of the O-O bond with little overpotential and generating a fully oxidized intermediate, different from the resting enzyme studied by crystallography, that is key in enabling fast PCET in the reductive half of the catalytic cycle.
Electron transfer at the cell-uranium interface in Geobacter spp.
Reguera, Gemma
2012-12-01
The in situ stimulation of Fe(III) oxide reduction in the subsurface stimulates the growth of Geobacter spp. and the precipitation of U(VI) from groundwater. As with Fe(III) oxide reduction, the reduction of uranium by Geobacter spp. requires the expression of their conductive pili. The pili bind the soluble uranium and catalyse its extracellular reductive precipitation along the pili filaments as a mononuclear U(IV) complexed by carbon-containing ligands. Although most of the uranium is immobilized by the pili, some uranium deposits are also observed in discrete regions of the outer membrane, consistent with the participation of redox-active foci, presumably c-type cytochromes, in the extracellular reduction of uranium. It is unlikely that cytochromes released from the outer membrane could associate with the pili and contribute to the catalysis, because scanning tunnelling microscopy spectroscopy did not reveal any haem-specific electronic features in the pili, but, rather, showed topographic and electronic features intrinsic to the pilus shaft. Pili not only enhance the rate and extent of uranium reduction per cell, but also prevent the uranium from traversing the outer membrane and mineralizing the cell envelope. As a result, pili expression preserves the essential respiratory activities of the cell envelope and the cell's viability. Hence the results support a model in which the conductive pili function as the primary mechanism for the reduction of uranium and cellular protection in Geobacter spp.
Recent advances in reduction methods for nonlinear problems. [in structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1981-01-01
Status and some recent developments in the application of reduction methods to nonlinear structural mechanics problems are summarized. The aspects of reduction methods discussed herein include: (1) selection of basis vectors in nonlinear static and dynamic problems, (2) application of reduction methods in nonlinear static analysis of structures subjected to prescribed edge displacements, and (3) use of reduction methods in conjunction with mixed finite element models. Numerical examples are presented to demonstrate the effectiveness of reduction methods in nonlinear problems. Also, a number of research areas which have high potential for application of reduction methods are identified.
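In generic notation (not the report's own), the basis-reduction idea approximates the full displacement vector by a few basis vectors and projects the nonlinear equilibrium equations onto them:

```latex
\mathbf{u} \approx \Gamma \boldsymbol{\psi},
\qquad \Gamma \in \mathbb{R}^{n \times r},\; r \ll n,
\qquad
\Gamma^{\mathsf T} \mathbf{f}\!\left( \Gamma \boldsymbol{\psi} \right)
= \Gamma^{\mathsf T} \mathbf{p},
```

where f is the internal force vector, p the applied load vector, and the r reduced unknowns psi are solved far more cheaply than the n original ones; the quality of the reduction hinges on the choice of the basis Gamma, which is the first aspect discussed above.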
Frost, Stephen R.; Gilbert, Christopher C.; Pugh, Kelsey D.; Guthrie, Emily H.; Delson, Eric
2015-01-01
Thumb reduction is among the most important features distinguishing the African and Asian colobines from each other and from other Old World monkeys. In this study we demonstrate that the partial skeleton KNM-ER 4420 from Koobi Fora, Kenya, dated to 1.9 Ma and assigned to the Plio-Pleistocene colobine species Cercopithecoides williamsi, shows marked reduction of its first metacarpal relative to the medial metacarpals. Thus, KNM-ER 4420 is the first documented occurrence of cercopithecid pollical reduction in the fossil record. In the size of its first metacarpal relative to the medial metacarpals, C. williamsi is similar to extant African colobines, but different from cercopithecines, extant Asian colobines and the Late Miocene colobines Microcolobus and Mesopithecus. This feature clearly links the genus Cercopithecoides with the extant African colobine clade and makes it the first definitive African colobine in the fossil record. The postcranial adaptations to terrestriality in Cercopithecoides are most likely secondary, while ancestral colobinans (and colobines) were arboreal. Finally, the absence of any evidence for pollical reduction in Mesopithecus implies either independent thumb reduction in African and Asian colobines or multiple colobine dispersal events out of Africa. Based on the available evidence, we consider the first scenario more likely. PMID:25993410
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Vicky, E-mail: vicky.goh@stricklandscanner.org.u; Gollub, Frank K.; Liaw, Jonathan
2010-11-01
Purpose: To describe the MRI appearances of squamous cell carcinoma of the anal canal before and after chemoradiation and to assess whether MRI features predict clinical outcome. Methods and Materials: Thirty-five patients (15 male, 20 female; mean age 60.8 years) with histologically proven squamous cell cancer of the anal canal underwent MRI before and 6-8 weeks after definitive chemoradiation. Images were reviewed retrospectively by two radiologists in consensus, blinded to clinical outcome: tumor size, signal intensity, extent, and TNM stage were recorded. Following treatment, patients were defined as responders by T and N downstaging and by Response Evaluation Criteria in Solid Tumors (RECIST). Final clinical outcome was determined by imaging and case note review: patients were divided into (1) disease-free and (2) relapsed groups and compared using appropriate univariate methods to identify imaging predictors; statistical significance was set at 5%. Results: The majority of tumors were ≤T2 (23/35; 65.7%) and N0 (21/35; 60%), with a mean size of 3.75 cm, and hyperintense (++ to +++, 24/35 patients; 68%). Following chemoradiation, there was a size reduction in all cases (mean 73.3%) and a reduction in signal intensity in 26/35 patients (74.2%). The majority of patients were classified as responders (26/35 (74.2%) patients by T and N downstaging and 30/35 (85.7%) patients by RECIST). At a median follow-up of 33.5 months, 25 patients (71.4%) remained disease-free; 10 patients (28.6%) had locoregional or metastatic disease. Univariate analysis showed that no individual MRI features were predictive of eventual outcome. Conclusion: Early assessment of response by MRI at 6-8 weeks is unhelpful in predicting future clinical outcome.
The effect of strain rate on the evolution of microstructure in aluminium alloys.
Leszczyńska-Madej, B; Richert, M
2010-03-01
Intense deformation strongly influences microstructure. A well-known phenomenon is the reduction of grain size by severe plastic deformation (SPD) methods. Nanometric features of the microstructure have been observed after SPD deformation of various materials, such as aluminium alloys, iron and others. The observed changes depend on the kind of deformed material, the amount of deformation, the strain rate, the existence of different phases, and the stacking fault energy. The influence of strain and strain rate on the microstructure is commonly investigated nowadays. It has been found that high strain rates activate deformation in shear bands, microbands and adiabatic shear bands, and these bands have been observed to be places of nucleation of nanograins in material deformed by SPD methods. In this work, the refinement of the microstructure of aluminium alloys under high strain rates was investigated. The samples were compressed by a specially designed hammer to deformations of phi = 0-0.62 with strain rates in the range of [Formula in text]. The largest reduction of microband width with increasing strain was found in the AlCu4Zr alloy. The influence of the strain rate on microstructure refinement indicated that increasing the strain rate reduced the microband width in all investigated materials (Al99.5, AlCu4Zr, AlMg5, AlZn6Mg2.5CuZr). A characteristic feature of the microstructure of the compressed material was a large density of shear bands and microbands. Except in Al99.5, the microbands showed a large misorientation to their surroundings and a large dislocation density.
Integrated System Test of the Advanced Instructional System (AIS). Final Report.
ERIC Educational Resources Information Center
Lintz, Larry M.; And Others
The integrated system test for the Advanced Instructional System (AIS) was designed to provide quantitative information regarding training time reductions resulting from certain computer managed instruction features. The reliabilities of these features and of support systems were also investigated. Basic computer managed instruction reduced…
Tighe, Patrick J; Lucas, Stephen D; Edwards, David A; Boezaart, André P; Aytug, Haldun; Bihorac, Azra
2012-10-01
The purpose of this project was to determine whether machine-learning classifiers could predict which patients would require a preoperative acute pain service (APS) consultation. Retrospective cohort. University teaching hospital. The records of 9,860 surgical patients posted between January 1 and June 30, 2010 were reviewed. Request for APS consultation. A cohort of machine-learning classifiers was compared according to their ability to classify surgical cases as requiring a request for a preoperative APS consultation. Classifiers were then optimized utilizing ensemble techniques. Computational efficiency was measured with the central processing unit processing times required for model training. Classifiers were tested using the full feature set, as well as a reduced feature set that was optimized using a merit-based dimensional reduction strategy. Machine-learning classifiers correctly predicted preoperative requests for APS consultations in 92.3% (95% confidence intervals [CI], 91.8-92.8) of all surgical cases. Bayesian methods yielded the highest area under the receiver operating characteristic curve (0.87, 95% CI 0.84-0.89) and lowest training times (0.0018 seconds, 95% CI, 0.0017-0.0019 for the NaiveBayesUpdateable algorithm). An ensemble of high-performing machine-learning classifiers did not yield a higher area under the receiver operating characteristic curve than its component classifiers. Dimensional reduction decreased the computational requirements for multiple classifiers, but did not adversely affect classification performance. Using historical data, machine-learning classifiers can predict which surgical cases should prompt a preoperative request for an APS consultation. Dimensional reduction improved computational efficiency and preserved predictive performance.
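A minimal sketch of the winning ingredients follows: a fast Bayesian classifier behind a merit-style dimensional reduction. Here sklearn's univariate selection stands in for the study's exact merit-based strategy, and the data and feature names are synthetic:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 40))           # hypothetical case features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # stand-in for "APS consult requested"

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),  # dimensional reduction
    GaussianNB(),                            # fast Bayesian classifier
)
print(cross_val_score(clf, X, y, scoring="roc_auc").mean())
```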
An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Ferrari, A. J.
1971-01-01
A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
Native T1 mapping of the heart - a pictorial review.
Germain, Philippe; El Ghannudi, Soraya; Jeung, Mi-Young; Ohlmann, Patrick; Epailly, Eric; Roy, Catherine; Gangi, Afshin
2014-01-01
T1 mapping is now a clinically feasible method, providing pixel-wise quantification of cardiac structures' T1 values. Beyond focal lesions, which are well depicted by late gadolinium enhancement sequences, it has become possible to discriminate diffuse myocardial alterations previously not assessable by noninvasive means. The strengths of this method include high reproducibility and immediate clinical applicability, even without the use of contrast media injection (native or pre-contrast T1). The two most important determinants of native T1 augmentation are (1) edema related to tissue water increase (recent infarction or inflammation) and (2) interstitial space increase related to fibrosis (infarction scar, cardiomyopathy) or to amyloidosis. Conversely, lipid (Anderson-Fabry) or iron overload diseases are responsible for T1 reduction. In this pictorial review, the main features provided by native T1 mapping are discussed and illustrated, with a special focus on the awaited clinical purpose of this unique, promising new method.
VAMPnets for deep learning of molecular kinetics.
Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank
2018-01-02
There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
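The training objective is built on VAMP scores of the network's instantaneous and time-lagged outputs; a bare-bones NumPy estimate of the VAMP-2 score (regularization and other practical details simplified) might look like this:

```python
import numpy as np

def sqrtm_sym(a):
    """Symmetric matrix square root via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T

def vamp2_score(chi_0, chi_t, eps=1e-10):
    """VAMP-2 score of featurized trajectory frames chi_0 (time t) and
    chi_t (time t + lag); higher means a kinetically better model."""
    chi_0 = chi_0 - chi_0.mean(0)
    chi_t = chi_t - chi_t.mean(0)
    n = len(chi_0)
    c00 = chi_0.T @ chi_0 / n + eps * np.eye(chi_0.shape[1])
    ctt = chi_t.T @ chi_t / n + eps * np.eye(chi_t.shape[1])
    c0t = chi_0.T @ chi_t / n
    k = np.linalg.inv(sqrtm_sym(c00)) @ c0t @ np.linalg.inv(sqrtm_sym(ctt))
    return np.sum(k**2)   # sum of squared singular values of the
                          # half-weighted Koopman matrix
```

In a VAMPnet, chi_0 and chi_t are produced by the same network applied to time-lagged frame pairs, and the negated score serves as the loss.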
So you want to do research? 3. An introduction to qualitative methods.
Meadows, Keith A
2003-10-01
This article describes some of the key issues in the use of qualitative research methods. Starting with a description of what qualitative research is and outlining some of the distinguishing features between quantitative and qualitative research, examples of the type of setting where qualitative research can be applied are provided. Methods of collecting information through in-depth interviews and group discussions are discussed in some detail, including issues around sampling and recruitment, the use of topic guides and techniques to encourage participants to talk openly. An overview on the analysis of qualitative data discusses aspects on data reduction, display and drawing conclusions from the data. Approaches to ensuring rigour in the collection, analysis and reporting of qualitative research are discussed and the concepts of credibility, transferability, dependability and confirmability are described. Finally, guidelines for the reporting of qualitative research are outlined and the need to write for a particular audience is discussed.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images, and achieves a higher palmprint recognition rate.
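The sparse classification step can be caricatured with plain orthogonal matching pursuit over a dictionary of training feature vectors, classifying by class-wise reconstruction residual; this is the standard sparse-representation classifier, not the paper's subspace variant, and the data below are synthetic:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(D, labels, x, k=10):
    """Code x over dictionary D (columns = training feature vectors),
    then pick the class whose atoms reconstruct x with the smallest
    residual."""
    coef = orthogonal_mp(D, x, n_nonzero_coefs=k)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)  # keep class-c atoms only
        err = np.linalg.norm(x - D @ coef_c)
        if err < best_err:
            best, best_err = c, err
    return best

rng = np.random.default_rng(6)
D = rng.normal(size=(64, 40))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
labels = np.repeat([0, 1], 20)          # two classes, 20 atoms each
x = D[:, 3] + 0.01 * rng.normal(size=64)
print(src_classify(D, labels, x))       # expect class 0
```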
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Jun, E-mail: jvwei@umich.edu; Zhou, Chuan; Chan, Heang-Ping
2014-08-15
Purpose: The buildup of noncalcified plaques (NCPs) that are vulnerable to rupture in coronary arteries is a risk for myocardial infarction. Interpretation of coronary CT angiography (cCTA) to search for NCP is a challenging task for radiologists due to the low CT number of NCP, the large number of coronary arteries, and multiple phase CT acquisition. The authors conducted a preliminary study to develop a machine learning method for automated detection of NCPs in cCTA. Methods: With IRB approval, a data set of 83 ECG-gated contrast enhanced cCTA scans with 120 NCPs was collected retrospectively from patient files. A multiscale coronary artery response and rolling balloon region growing (MSCAR-RBG) method was applied to each cCTA volume to extract the coronary arterial trees. Each extracted vessel was reformatted to a straightened volume composed of cCTA slices perpendicular to the vessel centerline. A topological soft-gradient (TSG) detection method was developed to prescreen for NCP candidates by analyzing the 2D topological features of the radial gradient field surface along the vessel wall. The NCP candidates were then characterized by a luminal analysis that used 3D geometric features to quantify the shape information and gray-level features to evaluate the density of the NCP candidates. With machine learning techniques, useful features were identified and combined into an NCP score to differentiate true NCPs from false positives (FPs). To evaluate the effectiveness of the image analysis methods, the authors performed tenfold cross-validation with the available data set. Receiver operating characteristic (ROC) analysis was used to assess the classification performance of individual features and the NCP score. The overall detection performance was estimated by free response ROC (FROC) analysis. Results: With our TSG prescreening method, a prescreening sensitivity of 92.5% (111/120) was achieved with a total of 1181 FPs (14.2 FPs/scan). On average, six features were selected during the tenfold cross-validation training. The average area under the ROC curve (AUC) value for training was 0.87 ± 0.01 and the AUC value for validation was 0.85 ± 0.01. Using the NCP score, FROC analysis of the validation set showed that the FP rates were reduced to 3.16, 1.90, and 1.39 FPs/scan at sensitivities of 90%, 80%, and 70%, respectively. Conclusions: The topological soft-gradient prescreening method in combination with the luminal analysis for FP reduction was effective for detection of NCPs in cCTA, including NCPs causing positive or negative vessel remodeling. The accuracy of vessel segmentation, tracking, and centerline identification has a strong impact on NCP detection. Studies are underway to further improve these techniques and reduce the FPs of the CADe system.
McGregor, Stephanie M; Chon, W James; Kim, Lisa; Chang, Anthony; Meehan, Shane M
2015-01-01
AIM: To describe the clinicopathologic features of concurrent polyomavirus nephropathy (PVN) and endarteritis due to rejection in renal allografts. METHODS: We searched our electronic records database for cases with transplant kidney biopsies demonstrating features of both PVN and acute rejection (AR). PVN was defined by the presence of typical viral cytopathic effect on routine sections and positive polyomavirus SV40 large-T antigen immunohistochemistry. AR was identified by endarteritis (v1 by Banff criteria). All cases were subjected to chart review in order to determine clinical presentation, treatment course and outcomes. Outcomes were recorded with a length of follow-up of at least one year or time to nephrectomy. RESULTS: Of 94 renal allograft recipients who developed PVN over an 11-year period at our institution, we identified 7 (7.4%) with viral cytopathic changes, SV40 large T antigen staining, and endarteritis in the same biopsy specimen, indicative of concurrent PVN and AR. Four arose after reduction of immunosuppression (IS) (for treatment of PVN in 3 and tuberculosis in 1), and 3 patients had no decrease of IS before developing simultaneous concurrent disease. Treatment consisted of reduced oral IS and leflunomide for PVN, and anti-rejection therapy. Three of 4 patients who developed endarteritis in the setting of reduced IS lost their grafts to rejection. All 3 patients with simultaneous PVN and endarteritis cleared viremia and were stable at 1 year of follow up. Patients with endarteritis and PVN arising in a background of reduced IS had more severe rejection and poorer outcome. CONCLUSION: Concurrent PVN and endarteritis may be more frequent than is currently appreciated and may occur with or without prior reduction of IS. PMID:26722657
Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan
2010-01-01
A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry and a small, lightweight overall design. In addition, this design frees the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and by using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when facing constraints in computational resources.
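The beamforming front end can be illustrated with a time-domain delay-and-sum beamformer, assuming the per-microphone delays are known (the flight system's multichannel processing is more elaborate, and all data below are synthetic):

```python
import numpy as np

def delay_and_sum(channels, delays, fs):
    """Align multichannel recordings by per-microphone delays (seconds)
    and average them, reinforcing the look direction while averaging
    down spatially uncorrelated noise."""
    shifts = np.round(np.asarray(delays) * fs).astype(int)
    n = channels.shape[1] - shifts.max()
    aligned = [ch[s:s + n] for ch, s in zip(channels, shifts)]
    return np.mean(aligned, axis=0)

# Four microphones, speech arriving with different integer-sample delays
fs = 16000
rng = np.random.default_rng(4)
speech = rng.normal(size=4000)
mics = np.stack([np.pad(speech, (d, 0))[:4000] + 0.5 * rng.normal(size=4000)
                 for d in (0, 3, 5, 8)])
out = delay_and_sum(mics, np.array([0, 3, 5, 8]) / fs, fs)
```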
Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin
2015-12-01
The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems, which are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelet and gray level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the selected features and the classifier hyper-parameters. Furthermore, the obtained results indicate the promising performance of the proposed textural features and, more specifically, those based on the co-occurrence matrix of the wavelet image representation technique.
Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael
2014-10-01
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates for studying multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers on two real-world datasets. The Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. The MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as Hamming loss, one-error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
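A minimal sketch of the BoW quantization idea for time series, assuming one-dimensional series (multivariate records would be quantized per channel and the histograms concatenated); the window length and codebook size are illustrative:

    import numpy as np
    from sklearn.cluster import KMeans

    def bow_features(series_list, win=10, n_words=32, seed=0):
        """Turn variable-length time series into fixed-length bag-of-words
        histograms: cluster sliding windows into a codebook, then count
        word occurrences per series."""
        windows = [s[i:i + win]
                   for s in series_list for i in range(len(s) - win + 1)]
        codebook = KMeans(n_clusters=n_words, random_state=seed).fit(np.array(windows))
        feats = []
        for s in series_list:
            w = np.array([s[i:i + win] for i in range(len(s) - win + 1)])
            hist = np.bincount(codebook.predict(w), minlength=n_words)
            feats.append(hist / hist.sum())   # normalized word frequencies
        return np.array(feats)

The resulting fixed-length vectors are what the supervised projection and multi-label classifiers then operate on.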
NASA Astrophysics Data System (ADS)
Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.
2017-12-01
This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor in [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false-color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility for automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms, and comparisons against other detection methods and independent observations, are presented.
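A toy sketch of the visualization step, assuming a [0, 1] confidence image and a float RGB scene; the actual DEBRA enhancement is more elaborate than this simple blend:

    import numpy as np

    def scale_rgb_by_confidence(rgb, confidence, target=(1.0, 1.0, 0.0)):
        """Blend a false-color RGB scene toward a target color (yellow
        here, a common choice for dust) in proportion to the detection
        confidence, so the parameter appears in meteorological context."""
        c = np.clip(confidence, 0.0, 1.0)[..., None]
        return (1.0 - c) * rgb + c * np.asarray(target)

This is what lets the same product serve an automated client (threshold the confidence) and a human analyst (view the enhanced imagery).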
Fu, Yubin; Zhang, Lide; Zheng, Jiyong
2005-04-01
The halloysite template has a tubular microstructure, and its wall has a multi-layer aluminosilicate structure. A new catalytic method is adopted here: through the in-situ reduction of Pd ions on the surface of tubular halloysite by methanol to initiate electroless plating, the detailed deposition features of Pd nanoparticles are investigated for the first time. The results indicate that an in-situ reduction and deposition of Pd occurs at room temperature, in which the halloysite template plays an important role. Impurities in halloysite (such as ferric oxide) influence the formation and distribution of the Pd nanoparticles. The Pd nanoparticles are non-spherical in most cases, likely caused by the irregular morphology of the halloysite. No intercalation of the nanoparticles occurs between the aluminosilicate layers in the halloysite. The diameter of the Pd nanoparticles increases with time; the average diameter ranges from 1 nm to 4 nm. Pd nanoparticles on a halloysite template can catalyze electroless deposition of Ni to prepare a novel nano-sized cermet at low cost. This practicable catalytic method could also be used on other clay substrates for the initiation of metallization.
Hybrid CMS methods with model reduction for assembly of structures
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1991-01-01
Future on-orbit structures will be designed and built in several stages, each with specific control requirements. Therefore, a methodology is needed that can predict the dynamic characteristics of the assembled structure based on the dynamic characteristics of the subassemblies and their interfaces. The methodology developed by CSC to address this issue is Hybrid Component Mode Synthesis (HCMS). HCMS distinguishes itself from standard component mode synthesis algorithms in the following features: (1) it does not require the subcomponents to have displacement-compatible models, which makes it ideal for analyzing the deployment of heterogeneous flexible multibody systems; (2) it incorporates a second-level model reduction scheme at the interface, which makes it much faster than other algorithms and therefore suitable for control purposes; and (3) it can answer specific questions such as 'how does the global fundamental frequency vary if the physical parameters of substructure k are changed by a specified amount?'. Because it is based on an energy principle rather than displacement compatibility, this methodology can also help the designer define an assembly process. Current and future efforts are devoted to applying the HCMS method to design and analyze docking and berthing procedures in orbital construction.
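HCMS itself is not reproduced here, but the flavor of interface model reduction can be suggested by classic Guyan (static) condensation, sketched below with NumPy under the usual small-displacement assumptions:

    import numpy as np

    def guyan_reduce(K, M, master):
        """Classic Guyan condensation: keep the `master` DOFs (e.g. the
        interface DOFs) and condense out the rest. HCMS adds a
        second-level modal reduction of the interface on top of ideas
        like this."""
        n = K.shape[0]
        master = np.asarray(master)
        slave = np.setdiff1d(np.arange(n), master)
        Kss = K[np.ix_(slave, slave)]
        Ksm = K[np.ix_(slave, master)]
        T = np.zeros((n, len(master)))
        T[master, np.arange(len(master))] = 1.0          # identity on masters
        T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
        return T.T @ K @ T, T.T @ M @ T                  # reduced K and M

The reduced matrices let an analyst estimate how the assembled structure's fundamental frequency responds to substructure changes without re-solving the full model.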
Alefounder, P R; Ferguson, S J
1980-01-01
1. A method is described for preparing spheroplasts from Paracoccus denitrificans that are substantially depleted of nitrite reductase (cytochrome cd) activity. Treatment of cells with lysozyme + EDTA together with a mild osmotic shock, followed by centrifugation, yielded a pellet of spheroplasts and a supernatant that contained the d-type cytochrome. The spheroplasts were judged to have retained an intact plasma membrane on the basis that less than 1% of the activity of a cytoplasmic marker protein, malate dehydrogenase, was released from the spheroplasts. In addition to showing a low activity towards added nitrite, the suspension of spheroplasts accumulated the nitrite that was produced by respiratory chain-linked reduction of nitrate. It is concluded that nitrite reduction occurs at the periplasmic side of the plasma membrane, irrespective of whether the nitrite is generated by nitrate reduction or is added exogenously. 2. Further evidence for the integrity of the spheroplasts was that nitrate reduction was inhibited by O2, and that chlorate was reduced at a markedly lower rate than nitrate. These data are taken as evidence for an intact plasma membrane because it was shown that cells acquire the capability to reduce nitrate under aerobic conditions after addition of low amounts of Triton X-100, which, at the same titre, also overcame the permeability barrier to chlorate reduction by intact cells. The close relationship between the appearance of chlorate reduction and the loss of the inhibitory effect of O2 on nitrate reduction also suggests that the latter feature of nitrate respiration is due to a control on the accessibility of nitrate to its reductase rather than on the flow of electrons to nitrate reductase. PMID:7197918
Feature Selection for Crops Classification
NASA Astrophysics Data System (ADS)
Liu, Yifan; Shao, Luyi; Yin, Qiang; Hong, Wen
2016-08-01
The components of polarimetric target decomposition reflect differences between targets, since they are linked with the scattering properties of the target and can be imported into an SVM as classification features. The result of a decomposition usually concentrates on a subset of the components, so selecting a combination of components can reduce the number of features imported into the SVM. This feature reduction leads to less computation and to a more targeted classification of a single class when classifying a multi-class area. In this research, we import different combinations of features into the SVM and find a better combination for classification using AGRISAR data.
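A minimal sketch of the exhaustive combination search implied here, assuming a matrix X of decomposition components (one column per component), ground-truth labels y, and component names; the subset size k is illustrative:

    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def best_component_subset(X, y, names, k=3):
        """Score every k-component combination with a cross-validated SVM
        and return the best-performing combination."""
        best = (None, -np.inf)
        for idx in combinations(range(X.shape[1]), k):
            acc = cross_val_score(SVC(), X[:, list(idx)], y, cv=5).mean()
            if acc > best[1]:
                best = (idx, acc)
        return [names[i] for i in best[0]], best[1]

Exhaustive search is feasible here because decompositions yield only a handful of components per pixel.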
Perski, Olga; Blandford, Ann; Ubhi, Harveen Kaur; West, Robert; Michie, Susan
2017-02-28
Public health organisations such as the National Health Service in the United Kingdom and the National Institutes of Health in the United States provide access to online libraries of publicly endorsed smartphone applications (apps); however, there is little evidence that users rely on this guidance. Rather, one of the most common methods of finding new apps is to search an online store. As hundreds of smoking cessation and alcohol-related apps are currently available on the market, smokers and drinkers must actively choose which app to download prior to engaging with it. The influences on this choice are yet to be identified. This study aimed to investigate 1) design features that shape users' choice of smoking cessation or alcohol reduction apps, and 2) design features judged to be important for engagement. Adult smokers (n = 10) and drinkers (n = 10) interested in using an app to quit/cut down were asked to search an online store to identify and explore a smoking cessation or alcohol reduction app of their choice whilst thinking aloud. Semi-structured interview techniques were used to allow participants to elaborate on their statements. An interpretivist theoretical framework informed the analysis. Verbal reports were audio recorded, transcribed verbatim and analysed using inductive thematic analysis. Participants chose apps based on their immediate look and feel, quality as judged by others' ratings and brand recognition ('social proof'), and titles judged to be realistic and relevant. Monitoring and feedback, goal setting, rewards and prompts were identified as important for engagement, fostering motivation and autonomy. Tailoring of content, a non-judgmental communication style, privacy and accuracy were viewed as important for engagement, fostering a sense of personal relevance and trust. Sharing progress on social media and the use of craving management techniques in social settings were judged not to be engaging because of concerns about others' negative reactions. Choice of a smoking cessation or alcohol reduction app may be influenced by its immediate look and feel, 'social proof' and titles that appear realistic. Design features that enhance motivation, autonomy, personal relevance and credibility may be important for engagement.
Tuning Nb–Pt Interactions To Facilitate Fuel Cell Electrocatalysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoshal, Shraboni; Jia, Qingying; Bates, Michael K.
High stability, availability of multiple oxidation states, and accessibility within a wide electrochemical window are the prime features of Nb that make it a favorable candidate for electrocatalysis, especially when it is combined with Pt. However, Nb has been used as a support in the form of oxides in all previously reported Pt–Nb electrocatalysts, and no Pt–Nb alloying phase has been demonstrated hitherto. Herein, we report a multifunctional Pt–Nb composite (PtNb/NbOx-C) where Nb exists both as an alloying component with Pt and as an oxide support and is synthesized by means of a simple wet chemical method. In this work, the Pt–Nb alloy phase has been firmly verified with the help of multiple spectroscopic methods. This allows for the experimental evidence of the theoretical prediction that Pt–Nb alloy interactions improve the oxygen reduction reaction (ORR) activity of Pt. In addition, such a combination of multiphase Nb brings up myriad features encompassing increased ORR durability, immunity to phosphate anion poisoning, enhanced hydrogen oxidation reaction (HOR) activity, and oxidative carbon monoxide (CO) stripping, making this electrocatalyst useful in multiple fuel cell systems.
Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours
NASA Astrophysics Data System (ADS)
Persico, Giacomo
2017-03-01
This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by the use of a non-intrusive, gradient-free technique specifically developed for shape optimization of turbomachinery profiles. The method is constructed as a combination of a geometrical parametrization technique based on B-splines, a high-fidelity and experimentally validated Computational Fluid Dynamics solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour characterizing the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is rigorously applied throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. Applying a systematic and automatic design method to such a non-conventional configuration highlights the character of centrifugal cascades: the blades require a specific and non-trivial definition of the shape, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal curvature of the blade that provides both a relevant increase in cascade performance and a reduction of downstream gradients.
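A generic sketch of the surrogate-based loop, with a Gaussian-process surrogate standing in for the paper's surrogate model and a stub `evaluate` callback in place of the CFD solver; all parameters are illustrative:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def surrogate_evolve(evaluate, bounds, n_init=8, n_iter=10, pool=200, seed=0):
        """Fit a surrogate to the expensive evaluations made so far and
        evaluate only the most promising candidate each generation."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        X = rng.uniform(lo, hi, size=(n_init, len(lo)))      # B-spline params
        y = np.array([evaluate(x) for x in X])
        for _ in range(n_iter):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            cand = rng.uniform(lo, hi, size=(pool, len(lo)))
            mu, sd = gp.predict(cand, return_std=True)
            pick = cand[np.argmin(mu - sd)]                  # optimistic choice
            X = np.vstack([X, pick])
            y = np.append(y, evaluate(pick))
        return X[y.argmin()], y.min()

In the paper's setting `evaluate` would wrap a full CFD run of the nozzle, which is why limiting the number of true evaluations matters.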
Covariance of dynamic strain responses for structural damage detection
NASA Astrophysics Data System (ADS)
Li, X. Y.; Wang, L. X.; Law, S. S.; Nie, Z. H.
2017-10-01
A new approach to address practical problems in the condition evaluation/damage detection of structures is proposed, based on the distinct features of a new damage index. The covariance of the strain response function (CoS) is a function of the modal parameters of the structure. A local stiffness reduction in the structure causes a monotonic increase in the CoS. Its sensitivity matrix with respect to local damage of the structure is negative and narrow-banded. The damage extent can be estimated with an approximation to the sensitivity matrix to decouple the identification equations. The CoS sensitivity can be calibrated in practice from two previous states of measurements to estimate approximately the damage extent of a structure. A seven-storey plane frame structure is numerically studied to illustrate the features of the CoS index and the proposed method. A steel circular arch is also tested in the laboratory. Natural frequencies changed due to damage in the arch, so the occurrence of damage could be judged; the proposed CoS method, however, can identify not only the occurrence of damage but also its location, and even its extent, without the need for an analytical model. It is promising for structural condition evaluation of selected components.
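A minimal sketch of a covariance-based index and a first-order damage estimate, under the simplifying assumptions of mean-removed time histories and a known (calibrated) sensitivity matrix S; the function names are illustrative:

    import numpy as np

    def cos_index(strain_a, strain_b):
        """Covariance between two dynamic strain time histories; local
        stiffness loss tends to raise such covariance-based indices
        monotonically, which is what the method exploits."""
        a = strain_a - strain_a.mean()
        b = strain_b - strain_b.mean()
        return (a * b).mean()

    def estimate_damage(d_cos, S):
        """First-order damage-extent estimate from the change in the CoS
        vector and the (narrow-banded) sensitivity matrix, via least
        squares."""
        return np.linalg.lstsq(S, d_cos, rcond=None)[0]

The narrow-banded structure of S is what allows the identification equations to be approximately decoupled in practice.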
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible, cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging: it reduces dependency on the operator, provides better qualitative and quantitative information for an effective diagnosis, and provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise; hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use Rayleigh- and Maxwell-mixture models for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation-maximization method to estimate the mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.
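A minimal EM sketch for the 2D (Rayleigh-mixture) case, with a random initialization standing in for the paper's genetic-algorithm search; x is a vector of positive envelope samples:

    import numpy as np

    def rayleigh_mixture_em(x, k=2, n_iter=50, seed=0):
        """EM for a k-component Rayleigh mixture on envelope samples."""
        rng = np.random.default_rng(seed)
        sigma2 = rng.uniform(0.5, 2.0, k) * x.var()
        pi = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            # E-step: responsibilities under the Rayleigh pdf
            #   p(x | s2) = x / s2 * exp(-x^2 / (2 s2))
            p = pi * (x[:, None] / sigma2) * np.exp(-x[:, None] ** 2 / (2 * sigma2))
            r = p / p.sum(axis=1, keepdims=True)
            # M-step: closed-form Rayleigh scale update
            nk = r.sum(axis=0)
            sigma2 = (r * x[:, None] ** 2).sum(axis=0) / (2 * nk)
            pi = nk / len(x)
        return pi, sigma2

The fitted mixture parameters then steer the strength of the wavelet-diffusion filtering in speckle versus edge regions.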
Comparison of four different reduction methods for anterior dislocation of the shoulder.
Guler, Olcay; Ekinci, Safak; Akyildiz, Faruk; Tirmik, Uzeyir; Cakmak, Selami; Ugras, Akin; Piskin, Ahmet; Mahirogullari, Mahir
2015-05-28
Shoulder dislocations account for almost 50% of all major joint dislocations and are mainly anterior. The aim of this comparative retrospective study was to evaluate different reduction maneuvers, performed without anesthesia, for the dislocated shoulder. Patients were treated with different reduction maneuvers, including various forms of traction and external rotation, in the emergency departments of four training hospitals between 2009 and 2012. Each of the four hospitals had a different treatment protocol for reduction, applying one of four maneuvers: the Spaso, Chair, Kocher, and Matsen methods. Thirty-nine patients were treated by the Spaso method, 47 by the Chair reduction method, 40 by the Kocher method, and 27 by Matsen's traction-countertraction method. All patients' demographic data were recorded, as were dislocation number, reduction time, the time interval between dislocation and reduction, and associated complications in the pre- and post-reduction period. No anesthetic method was used for the reduction. All of the methods used included traction and some external rotation. The Chair method had the shortest reduction time. All surgeons involved in the study agreed that the Kocher and Matsen methods needed more force for the reduction; patients could contract their muscles because of the pain in these two methods. The Spaso method includes flexion of the shoulder and blocks muscle contraction somewhat. The Chair method was found to be the easiest because the patients could not contract their muscles while sitting on a chair with the affected arm at their side. We suggest that the Chair method is an effective and fast reduction maneuver that may be an alternative for the treatment of anterior shoulder dislocations. Further prospective studies with larger sample sizes are needed to compare the safety of the different reduction techniques.
Integration of active and passive cool roof system for attic temperature reduction
NASA Astrophysics Data System (ADS)
Yew, Ming Chian; Yew, Ming Kun; Saw, Lip Huat; Durairaj, Rajkumar
2017-04-01
The aim of this project is to study the capability of a cool roof system to reduce heat transmission through a metal roof into an attic. The cool roof system combines active and passive methods to reduce the thermal loads imposed on a building. Two main features are introduced in this cool roof system, a thermal insulation coating (TIC) and a moving air cavity (MAC), which serve in an active and a passive manner, respectively. For the MAC, two designs are introduced. The normal MAC is fabricated from six aluminium tubes, each made by joining five aluminium cans, while the improved MAC also consists of six aluminium tubes, each custom-made from steel rods and aluminium foil. The MAC provides ventilation and heat reflection under the metal roof before the heat transfers into the attic. It is also coupled with three solar-powered fans to increase heat flow inside the channel. The cool roof that incorporated the TIC, the MAC with solar-powered fans, and an opened attic inlet showed a significant improvement, with a reduction of up to 14 °C in the attic temperature compared to the conventional roof system.
Optical coherence tomography angiography retinal vascular network assessment in multiple sclerosis.
Lanzillo, Roberta; Cennamo, Gilda; Criscuolo, Chiara; Carotenuto, Antonio; Velotti, Nunzio; Sparnelli, Federica; Cianflone, Alessandra; Moccia, Marcello; Brescia Morra, Vincenzo
2017-09-01
Optical coherence tomography (OCT) angiography is a new method to assess the density of retinal vascular networks, and vascular abnormalities are considered to be involved in multiple sclerosis (MS) pathology. Our aims were to assess the presence of vascular abnormalities in MS and to evaluate their correlation with disease features. A total of 50 MS patients, with and without a history of optic neuritis (ON), and 46 healthy subjects were included. All underwent spectral-domain (SD)-OCT and OCT angiography. Clinical history, Expanded Disability Status Scale (EDSS), Multiple Sclerosis Severity Score (MSSS) and disease duration were collected. Angio-OCT showed a vessel density reduction in the eyes of MS patients when compared to controls. A statistically significant reduction in all SD-OCT and OCT angiography parameters was noticed both in eyes with and without ON when compared with control eyes. We found an inverse correlation between SD-OCT parameters and MSSS (p = 0.003) and between vessel density parameters and EDSS (p = 0.007). We report a vessel density reduction in the retina of MS patients and highlight the clinical correlation between vessel density and EDSS, suggesting that angio-OCT could be a good marker of disease and of disability in MS.
Alzheimer's Disease Early Diagnosis Using Manifold-Based Semi-Supervised Learning.
Khajehnejad, Moein; Saatlou, Forough Habibollahi; Mohammadzade, Hoda
2017-08-20
Alzheimer's disease (AD) is currently ranked as the sixth leading cause of death in the United States, and recent estimates indicate that the disorder may rank third, just behind heart disease and cancer, as a cause of death for older people. Clearly, predicting this disease in its early stages and preventing it from progressing is of great importance. The diagnosis of AD requires a variety of medical tests, which leads to huge amounts of multivariate heterogeneous data. It can be difficult and exhausting to manually compare, visualize, and analyze these data due to the heterogeneous nature of medical tests; therefore, an efficient approach for accurate prediction of the condition of the brain through the classification of magnetic resonance imaging (MRI) images is greatly beneficial and yet very challenging. In this paper, a novel approach is proposed for the diagnosis of very early stages of AD through an efficient classification of brain MRI images, which uses label propagation in a manifold-based semi-supervised learning framework. We first apply voxel morphometry analysis to extract some of the most critical AD-related features of brain images from the original MRI volumes and also gray matter (GM) segmentation volumes. The features must capture the most discriminative properties that vary between a healthy and an Alzheimer-affected brain. Next, we perform a principal component analysis (PCA)-based dimension reduction on the extracted features for faster yet sufficiently accurate analysis. To make the best use of the captured features, we present a hybrid manifold learning framework which embeds the feature vectors in a subspace. Next, using a small set of labeled training data, we apply a label propagation method in the created manifold space to predict the labels of the remaining images and classify them into the two groups of mild cognitive impairment and normal condition (MCI/NC). The accuracy of the classification using the proposed method is 93.86% for the Open Access Series of Imaging Studies (OASIS) database of MRI brain images, providing, compared to the best existing methods, a 3% lower error rate.
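A minimal sketch of the PCA-plus-label-propagation pipeline using scikit-learn's graph-based LabelSpreading as a stand-in for the paper's manifold method; y_partial marks unlabeled subjects with -1, and the component and neighbor counts are illustrative:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.semi_supervised import LabelSpreading

    def semi_supervised_classify(X, y_partial, n_components=20):
        """PCA-reduce the voxel features, then propagate the few known
        labels through the data manifold to the unlabeled subjects."""
        Z = PCA(n_components=n_components).fit_transform(X)
        model = LabelSpreading(kernel='knn', n_neighbors=7).fit(Z, y_partial)
        return model.transduction_       # predicted labels for all subjects

Because label propagation exploits the geometry of the whole dataset, it can classify well even when only a small fraction of the subjects carry labels.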
Whitebird, Robin R; Kreitzer, Mary Jo; Lewis, Beth A; Hanson, Leah R; Crain, A Lauren; Enstad, Chris J; Mehta, Adele
2011-09-01
Caregivers for a family member with dementia experience chronic long-term stress that may benefit from new complementary therapies such as mindfulness-based stress reduction. Little is known, however, about the challenges of recruiting and retaining family caregivers in research on mind-body based complementary therapies. Our pilot study is the first of its kind to successfully recruit caregivers of a family member with dementia to a randomized controlled pilot study of mindfulness-based stress reduction. The study used an array of recruitment strategies and techniques that were tailored to fit the unique features of our recruitment sources, and employed retention strategies that placed high value on establishing early and ongoing communication with potential participants. Innovative recruitment methods, including conducting outreach to health plan members and generating press coverage, were combined with standard methods of community outreach and paid advertising. We were successful in exceeding our recruitment goal and retained 92% of the study participants at post-intervention (2 months) and 90% at 6 months. Recruitment and retention for family caregiver interventions employing mind-body based complementary therapies can be successful despite many challenges. Barriers include cultural perceptions about the use and benefit of complementary therapies, cultural differences in how the role of family caregiver is perceived, the use of group-based designs requiring a significant time commitment by participants, and the travel and respite care needs of busy family caregivers. Copyright © 2011 Elsevier Inc. All rights reserved.
Rentoumi, Vassiliki; Raoufian, Ladan; Ahmed, Samrah; de Jager, Celeste A; Garrard, Peter
2014-01-01
Mixed vascular and Alzheimer-type dementia and pure Alzheimer's disease are both associated with changes in spoken language. These changes have, however, seldom been subjected to systematic comparison. In the present study, we analyzed language samples obtained during the course of a longitudinal clinical study from patients in whom one or other pathology was verified at post mortem. The aims of the study were twofold: first, to confirm the presence of differences in language produced by members of the two groups using quantitative methods of evaluation; and secondly to ascertain the most informative sources of variation between the groups. We adopted a computational approach to evaluate digitized transcripts of connected speech along a range of language-related dimensions. We then used machine learning text classification to assign the samples to one of the two pathological groups on the basis of these features. The classifiers' accuracies were tested using simple lexical features, syntactic features, and more complex statistical and information theory characteristics. Maximum accuracy was achieved when word occurrences and frequencies alone were used. Features based on syntactic and lexical complexity yielded lower discrimination scores, but all combinations of features showed significantly better performance than a baseline condition in which every transcript was assigned randomly to one of the two classes. The classification results illustrate the word content specific differences in the spoken language of the two groups. In addition, those with mixed pathology were found to exhibit a marked reduction in lexical variation and complexity compared to their pure AD counterparts.
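A minimal sketch of the word-occurrence classification idea, assuming transcripts as strings and binary pathology labels; the study's actual feature sets and classifiers are richer than this unigram baseline:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # transcripts: list of speech-sample strings
    # labels: 0 = pure AD, 1 = mixed pathology
    def transcript_accuracy(transcripts, labels):
        """Word occurrences and frequencies were the most informative
        features in the study; this scores a simple unigram classifier
        by cross-validation."""
        clf = make_pipeline(CountVectorizer(),
                            LogisticRegression(max_iter=1000))
        return cross_val_score(clf, transcripts, labels, cv=5).mean()

Syntactic and complexity features would be appended to the same feature matrix before classification, which is how the weaker feature families were compared.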
Identifying the features of an exercise addiction: A Delphi study
Macfarlane, Lucy; Owens, Glynn; Cruz, Borja del Pozo
2016-01-01
Objectives: There remains limited consensus regarding the definition and conceptual basis of exercise addiction. An understanding of the factors motivating maintenance of addictive exercise behavior is important for appropriately targeting intervention. The aims of this study were twofold: first, to establish consensus on features of an exercise addiction using Delphi methodology and second, to identify whether these features are congruous with a conceptual model of exercise addiction adapted from the Work Craving Model. Methods: A three-round Delphi process explored the views of participants regarding the features of an exercise addiction. The participants were selected from sport and exercise relevant domains, including physicians, physiotherapists, coaches, trainers, and athletes. Suggestions meeting consensus were considered with regard to the proposed conceptual model. Results and discussion: Sixty-three items reached consensus. There was concordance of opinion that exercising excessively is an addiction, and therefore it was appropriate to consider the suggestions in light of the addiction-based conceptual model. Statements reaching consensus were consistent with all three components of the model: learned (negative perfectionism), behavioral (obsessive–compulsive drive), and hedonic (self-worth compensation and reduction of negative affect and withdrawal). Conclusions: Delphi methodology allowed consensus to be reached regarding the features of an exercise addiction, and these features were consistent with our hypothesized conceptual model of exercise addiction. This study is the first to have applied Delphi methodology to the exercise addiction field, and therefore introduces a novel approach to exercise addiction research that can be used as a template to stimulate future examination using this technique. PMID:27554504
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
Advances in high-resolution CT scanners have improved the detection of lung cancers. Recently released positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer-related mortality. While this study shows the efficacy of CT-based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve performance in predicting the likelihood of recurrence-free survival (RFS) for patients with NSCLC.
Chemical fractionation-enhanced structural characterization of marine dissolved organic matter
NASA Astrophysics Data System (ADS)
Arakawa, N.; Aluwihare, L.
2016-02-01
Describing the molecular fingerprint of dissolved organic matter (DOM) requires sample processing methods and separation techniques that can adequately reduce its complexity. We have employed acid hydrolysis as a way to make the subcomponents of marine solid-phase-extracted (PPL) DOM more accessible to analytical techniques. Using a combination of NMR and chemical derivatization or reduction analyzed by comprehensive (GCxGC) gas chromatography, we observed chemical features strikingly similar to terrestrial DOM. In particular, we observed reduced alicyclic hydrocarbons believed to be the backbone of previously identified carboxylic-rich alicyclic material (CRAM). Additionally, we found carbohydrates, amino acids, and small lipids and acids.
Near wakes of advanced turbopropellers
NASA Technical Reports Server (NTRS)
Hanson, D. B.; Patrick, W. P.
1989-01-01
The flow in the wake of a model single-rotation Prop-Fan rotor operating in a wind tunnel was traversed with a hot-wire anemometer system designed to determine the three periodic velocity components. Special data acquisition and data reduction methods were required to deal with the high data frequency, narrow wakes, and large fluctuating air angles in the tip vortex region. The model tip helical Mach number was 1.17, simulating the cruise condition. Although the flow field is complex, flow features such as viscous velocity defects, vortex sheets, tip vortices, and propagating acoustic pulses are clearly identified with the aid of a simple analytical wake theory.
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
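A minimal sketch of the SVD preprocessing named above, assuming a plain data matrix X with one row per point; the rank k is illustrative:

    import numpy as np

    def svd_reduce(X, k):
        """Project centered data onto its top-k singular directions
        before the quantum-evolution stage of the clustering."""
        U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        return U[:, :k] * s[:k]

Reducing the dimension first keeps the Gaussian coherent-state calculation tractable on high-dimensional data.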
AgRISTARS. Supporting research: Algorithms for scene modelling
NASA Technical Reports Server (NTRS)
Rassbach, M. E. (Principal Investigator)
1982-01-01
The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.
Developments in the design, analysis, and fabrication of advanced technology transmission elements
NASA Technical Reports Server (NTRS)
Drago, R. J.; Lenski, J. W., Jr.
1982-01-01
Over the last decade, the proprietary development program reported here, aimed at reducing helicopter drive system weight and cost and at enhancing reliability and survivability, has produced high-speed roller bearings, resin-matrix composite rotor shafts and transmission housings, gear/bearing/shaft system integrations, photoelastic investigation methods for gear tooth strength, and the automatic generation of complex FEM models for gear/shaft systems. After describing the design features and performance capabilities of the hardware developed, attention is given to the prospective benefits of applying these technologies, with emphasis on the relationship between helicopter drive system performance and cost.
Zhou, Zhen; Wang, Jian-Bao; Zang, Yu-Feng; Pan, Gang
2018-01-01
Classification approaches have been increasingly applied to differentiate patients and normal controls using resting-state functional magnetic resonance imaging data (RS-fMRI). Although most previous classification studies have reported promising accuracy within individual datasets, achieving high levels of accuracy with multiple datasets remains challenging for two main reasons: high dimensionality, and high variability across subjects. We used two independent RS-fMRI datasets (n = 31, 46, respectively) both with eyes closed (EC) and eyes open (EO) conditions. For each dataset, we first reduced the number of features to a small number of brain regions with paired t-tests, using the amplitude of low frequency fluctuation (ALFF) as a metric. Second, we employed a new method for feature extraction, named the PAIR method, examining EC and EO as paired conditions rather than independent conditions. Specifically, for each dataset, we obtained EC minus EO (EC-EO) maps of ALFF from half of subjects (n = 15 for dataset-1, n = 23 for dataset-2) and obtained EO-EC maps from the other half (n = 16 for dataset-1, n = 23 for dataset-2). A support vector machine (SVM) method was used for classification of EC RS-fMRI mapping and EO mapping. The mean classification accuracy of the PAIR method was 91.40% for dataset-1, and 92.75% for dataset-2 in the conventional frequency band of 0.01–0.08 Hz. For cross-dataset validation, we applied the classifier from dataset-1 directly to dataset-2, and vice versa. The mean accuracy of cross-dataset validation was 94.93% for dataset-1 to dataset-2 and 90.32% for dataset-2 to dataset-1 in the 0.01–0.08 Hz range. For the UNPAIR method, classification accuracy was substantially lower (mean 69.89% for dataset-1 and 82.97% for dataset-2), and was much lower for cross-dataset validation (64.69% for dataset-1 to dataset-2 and 64.98% for dataset-2 to dataset-1) in the 0.01–0.08 Hz range. In conclusion, for within-group design studies (e.g., paired conditions or follow-up studies), we recommend the PAIR method for feature extraction. In addition, dimensionality reduction with strong prior knowledge of specific brain regions should also be considered for feature selection in neuroimaging studies. PMID:29375288
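A loose sketch of the classification stage, assuming per-subject regional ALFF matrices (subjects x regions) for the two conditions and a region set already chosen by paired t-tests on an independent half of the subjects, which is the PAIR idea; the names are illustrative:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def pair_method_accuracy(alff_ec, alff_eo, region_idx):
        """Classify EC vs EO condition maps with a linear SVM, using only
        the regions flagged by the paired EC-EO contrast."""
        X = np.vstack([alff_ec[:, region_idx], alff_eo[:, region_idx]])
        y = np.r_[np.ones(len(alff_ec)), np.zeros(len(alff_eo))]
        return cross_val_score(SVC(kernel='linear'), X, y, cv=5).mean()

Selecting the regions from the paired contrast, rather than treating EC and EO independently, is what distinguishes PAIR from the UNPAIR baseline.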
Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu
2012-12-01
Membrane proteins are encoded by roughly 30% of the genes in a genome and play important roles in living organisms. Previous studies have revealed that membrane proteins' structures and functions show obvious cell organelle-specific properties. Hence, it is highly desirable to predict a membrane protein's subcellular location from its primary sequence, considering the extreme difficulty of membrane protein wet-lab studies. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed based on statistical machine learning algorithms with serial combination of multi-view features, i.e., different feature vectors are simply concatenated to form a super feature vector. However, such simple combination of features simultaneously increases information redundancy, which can, in turn, deteriorate the final prediction accuracy. This explains why prediction success rates in the serial super space were often found to be even lower than those in a single-view space. The purpose of this paper is to investigate a proper method for fusing multiple multi-view protein sequential features for subcellular location prediction. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane protein multi-view attributes that represents protein samples in complex spaces. We also propose generalized principal component analysis (GPCA) for feature reduction in the complex geometry. Experimental results with different machine learning algorithms on benchmark membrane protein subcellular localization datasets demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems. The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
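A minimal sketch of parallel fusion as described, assuming two real-valued feature vectors; zero-padding to a common length is one conventional choice:

    import numpy as np

    def parallel_fuse(f1, f2):
        """Combine two feature views as the real and imaginary parts of
        one complex vector, in contrast to serial concatenation; the
        fused space has the length of the longer view, not the sum."""
        n = max(len(f1), len(f2))
        a = np.pad(f1, (0, n - len(f1))).astype(float)
        b = np.pad(f2, (0, n - len(f2))).astype(float)
        return a + 1j * b

Because the fused dimension does not grow with the number of views as it does under concatenation, the redundancy buildup that hurts the serial super space is avoided; GPCA then operates directly on such complex vectors.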
Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex
2016-11-16
Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.
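iPANDA's scoring combines coexpression modules, topology decomposition, and importance factors; as a toy illustration of the core ingredient, a pathway activation score can be sketched as an importance-weighted sum of signed log fold-changes (the names and form here are illustrative, not iPANDA's actual quantities):

    import numpy as np

    def pathway_score(logfc, importance, sign):
        """Toy pathway activation score: logfc are per-gene log
        fold-changes, importance the per-gene weights, and sign is
        +1/-1 for activating/inhibiting roles in the pathway topology."""
        return float(np.sum(importance * sign * logfc) / np.sum(importance))

Aggregating genes into such weighted pathway-level scores is what provides the noise reduction relative to working with individual gene expression values.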
Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F
2007-01-01
This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26 ± 0.87 to 0.64 ± 0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65 ± 7.89% to 0.87 ± 3.88% between registered data and the manual gold standard. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.
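A loose sketch of the idea, using scikit-learn's FastICA for the decomposition and plain FFT cross-correlation for per-frame shift estimation; the paper's reference construction and displacement model are more refined than this:

    import numpy as np
    from sklearn.decomposition import FastICA

    def breathing_shifts(frames, n_components=3):
        """Decompose the frame series with ICA, rebuild a low-rank,
        intensity-matched reference per frame, then estimate each
        frame's integer translation by cross-correlation."""
        t, h, w = frames.shape
        ica = FastICA(n_components=n_components, random_state=0)
        S = ica.fit_transform(frames.reshape(t, -1))       # temporal weights
        ref = ica.inverse_transform(S).reshape(t, h, w)    # reference series
        shifts = []
        for f, r in zip(frames, ref):
            xc = np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(r)))
            dy, dx = np.unravel_index(np.abs(xc).argmax(), xc.shape)
            shifts.append(((dy + h // 2) % h - h // 2,
                           (dx + w // 2) % w - w // 2))
        return shifts

Matching each frame against a reference that mimics the contrast-agent intensity changes is what makes the shift estimate robust to the perfusion signal itself.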
Electrophoretic mobility shift scanning using an automated infrared DNA sequencer.
Sano, M; Ohyama, A; Takase, K; Yamamoto, M; Machida, M
2001-11-01
Electrophoretic mobility shift assay (EMSA) is widely used in the study of sequence-specific DNA-binding proteins, including transcription factors and mismatch binding proteins. We have established a non-radioisotope-based protocol for EMSA that features an automated DNA sequencer with an infrared fluorescent dye (IRDye) detection unit. Our modification of the electrophoresis unit, which includes cooling the gel plates with a reduced well-to-read length, has made it possible to detect shifted bands within 1 h. Further, we have developed a rapid ligation-based method for generating IRDye-labeled probes with an approximately 60% cost reduction. This method has the advantages of real-time scanning, stability of labeled probes, and better safety associated with nonradioactive methods of detection. Analysis of a promoter from an industrially important filamentous fungus, Aspergillus oryzae, in a prototype experiment revealed that the method we describe has potential for use in systematic scanning and identification of the functionally important elements to which cellular factors bind in a sequence-specific manner.
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder
Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based prediction of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. A feature matrix was constructed from the time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. Because the resultant EEG data matrix had high dimensionality, dimension reduction was performed with a rank-based feature selection method according to a receiver operating characteristic (ROC) criterion. As a result, the most significant features were identified and further utilized during the training and testing of a classification model, a logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found to be statistically significant. In comparison with the other time-frequency approaches, the WT analysis showed the highest classification accuracy: accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients. PMID:28152063
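A minimal sketch of the wavelet-feature pipeline using the PyWavelets package, with an F-score ranking standing in for the ROC-based selection; the wavelet, decomposition level, and k are illustrative:

    import numpy as np
    import pywt
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    def wavelet_feature_matrix(eeg_epochs, wavelet='db4', level=4):
        """Per-channel wavelet decomposition; subband energies serve as
        the time-frequency features. Each epoch: (channels, samples)."""
        feats = []
        for epoch in eeg_epochs:
            e = [np.sum(c ** 2) for ch in epoch
                 for c in pywt.wavedec(ch, wavelet, level=level)]
            feats.append(e)
        return np.array(feats)

    def predict_outcome(eeg_epochs, responder_labels, k=20):
        X = wavelet_feature_matrix(eeg_epochs)
        clf = make_pipeline(SelectKBest(f_classif, k=k),
                            LogisticRegression(max_iter=1000))
        return cross_val_score(clf, X, responder_labels, cv=10).mean()

Restricting the epochs to frontal and temporal channels, as the study's findings suggest, would simply shrink the input before the decomposition step.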
Pérez, Ashley; Santamaria, E Karina; Operario, Don
2017-12-15
Latino men who have sex with men (MSM) in the United States are disproportionately affected by HIV, and there have been calls to improve availability of culturally sensitive HIV prevention programs for this population. This article provides a systematic review of intervention programs to reduce condomless sex and/or increase HIV testing among Latino MSM. We searched four electronic databases using a systematic review protocol, screened 1777 unique records, and identified ten interventions analyzing data from 2871 Latino MSM. Four studies reported reductions in condomless anal intercourse, and one reported reductions in number of sexual partners. All studies incorporated surface structure cultural features such as bilingual study recruitment, but the incorporation of deep structure cultural features, such as machismo and sexual silence, was lacking. There is a need for rigorously designed interventions that incorporate deep structure cultural features in order to reduce HIV among Latino MSM.
3D Object Classification Based on Thermal and Visible Imagery in Urban Area
NASA Astrophysics Data System (ADS)
Hasani, H.; Samadzadegan, F.
2015-12-01
The spatial distribution of land cover in urban areas, especially of 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Owing to recent advances in sensor technologies, several types of remotely sensed data are available for the same area. Data fusion has been widely investigated for integrating different sources of data in classification of urban areas. Thermal infrared imagery (TIR) contains information on emitted radiation and has unique radiometric properties; however, due to the coarse spatial resolution of thermal data, its application has been restricted in urban areas. On the other hand, visible imagery (VIS) has high spatial resolution and information in the visible spectrum. Consequently, there is a complementary relation between thermal and visible imagery in classification of urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery in classification of an urban area. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space including visible bands, thermal hyperspectral bands, spatial and texture features, and principal components extracted by a Principal Component Analysis (PCA) transformation. Due to the high dimensionality of the feature space, a dimension reduction method is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The obtained results show that using thermal imagery along with visible imagery improved the classification accuracy by up to 8% with respect to classification of the visible image alone.
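A minimal sketch of the fusion pipeline, assuming a visible cube vis (H, W, Bv), a coarser thermal cube tir (h, w, Bt), a label image, and a boolean mask of training pixels; the component count is illustrative:

    import numpy as np
    from scipy.ndimage import zoom
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def fuse_and_classify(vis, tir, labels, train_mask, n_pc=15):
        """Resample the thermal cube to the visible grid, stack the
        per-pixel features, PCA-reduce, and classify with an SVM."""
        H, W, _ = vis.shape
        tir_up = zoom(tir, (H / tir.shape[0], W / tir.shape[1], 1), order=1)
        X = np.concatenate([vis, tir_up], axis=2)
        X = X.reshape(-1, vis.shape[2] + tir.shape[2])
        Z = PCA(n_components=n_pc).fit_transform(X)
        clf = SVC().fit(Z[train_mask.ravel()], labels[train_mask])
        return clf.predict(Z).reshape(H, W)

Texture and spatial features would be appended as extra channels before the PCA step in the same way.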
Yang, Guang; Nawaz, Tahir; Barrick, Thomas R; Howe, Franklyn A; Slabaugh, Greg
2015-12-01
Many approaches have been considered for automatic grading of brain tumors by means of pattern recognition with magnetic resonance spectroscopy (MRS). Providing an improved technique which can assist clinicians in accurately identifying brain tumor grades is our main objective. The proposed technique, which is based on the discrete wavelet transform (DWT) of whole-spectral or subspectral information of key metabolites, combined with unsupervised learning, inspects the separability of the extracted wavelet features from the MRS signal to aid the clustering. In total, we included 134 short echo time single voxel MRS spectra (SV MRS) in our study that cover normal controls, low grade and high grade tumors. The combination of DWT-based whole-spectral or subspectral analysis and unsupervised clustering achieved an overall clustering accuracy of 94.8% and a balanced error rate of 7.8%. To the best of our knowledge, it is the first study using DWT combined with unsupervised learning to cluster brain SV MRS. Instead of dimensionality reduction on SV MRS or feature selection using model fitting, our study provides an alternative method of extracting features to obtain promising clustering results.
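A minimal sketch of the unsupervised pipeline, assuming equal-length magnitude spectra as plain arrays; the wavelet, decomposition level, and cluster count are illustrative:

    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def cluster_spectra(spectra, wavelet='db4', level=3, k=3):
        """Wavelet-transform each MRS spectrum, use the concatenated
        coefficients as features, and cluster without supervision."""
        feats = [np.concatenate(pywt.wavedec(s, wavelet, level=level))
                 for s in spectra]
        return KMeans(n_clusters=k, random_state=0).fit_predict(np.array(feats))

Restricting the decomposition to the subspectral regions of key metabolites, as the paper describes, amounts to slicing each spectrum before the transform.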
Hydromorphic soil development in the coastal temperate rainforest of Alaska
David V. D' Amore; Chien-Lu Ping; Paul A. Herendeen
2015-01-01
Predictive relationships between soil drainage and soil morphological features are essential for understanding hydromorphic processes in soils. The linkage between patterns of soil saturation, reduction, and reductimorphic soil properties has not been extensively studied in mountainous forested terrain. We measured soil saturation and reduction during a 4-yr period in...
U.S. Education and the Constrained Economy: From the Melting Pot to the Excluder?
ERIC Educational Resources Information Center
Hunt, F. J.
1983-01-01
Seeks to understand effects of recent reductions of federal support for education by recalling both the original grounds for providing federal aid and the modes by which such funds are distributed and used. Suggests features of federal support and draws attention to possible consequences of its reduction. (BRR)
Evaluating molecular cobalt complexes for the conversion of N2 to NH3.
Del Castillo, Trevor J; Thompson, Niklas B; Suess, Daniel L M; Ung, Gaël; Peters, Jonas C
2015-10-05
Well-defined molecular catalysts for the reduction of N2 to NH3 with protons and electrons remain very rare despite decades of interest and are currently limited to systems featuring molybdenum or iron. This report details the synthesis of a molecular cobalt complex that generates superstoichiometric yields of NH3 (>200% NH3 per Co-N2 precursor) via the direct reduction of N2 with protons and electrons. While the NH3 yields reported herein are modest by comparison to those of previously described iron and molybdenum systems, they intimate that other metals are likely to be viable as molecular N2 reduction catalysts. Additionally, a comparison of the featured tris(phosphine)borane Co-N2 complex with structurally related Co-N2 and Fe-N2 species shows how remarkably sensitive the N2 reduction performance of potential precatalysts is. These studies enable consideration of the structural and electronic effects that are likely relevant to N2 conversion activity, including the π basicity, charge state, and geometric flexibility.
The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, M.; Wang, X.; Dou, A.; Wu, X.
2018-04-01
The seismic damage information of buildings extracted from remote sensing (RS) imagery is valuable for supporting relief efforts and effectively reducing the losses caused by an earthquake. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information: pixel-based methods cannot make full use of the contextual information of objects, while object-oriented methods face the problems that image segmentation is often not ideal and that the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea of this method comprises two steps: first, the CNN predicts the damage probability of each pixel; then, the probabilities are integrated within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from aerial imagery acquired over Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province earthquake. The results demonstrate the effectiveness of the proposed method in extracting building damage information after an earthquake.
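A minimal sketch of the integration step, assuming a per-pixel collapse-probability map from the CNN and an integer segment-label image; the 0.5 threshold is illustrative:

    import numpy as np

    def integrate_by_segment(prob_map, segments):
        """Average the CNN's per-pixel collapse probabilities within each
        segmentation spot and label the whole spot by the thresholded
        mean, combining contextual prediction with object boundaries."""
        labels = np.zeros_like(segments, dtype=np.uint8)
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            labels[mask] = prob_map[mask].mean() > 0.5
        return labels

Aggregating over segments suppresses isolated per-pixel errors while keeping the object outlines that pure pixel-based methods lose.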
Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula
2011-01-01
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
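A hedged toy comparison in the spirit of this review (the 0/1/2 genotype coding and the AND-type interaction signal are invented for illustration, not taken from the paper): a Random Forest can represent a SNP-SNP interaction that a purely additive model captures only imperfectly.

```python
# Simulated genotype data with an epistatic (interaction) effect.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 20))            # 20 SNPs coded 0/1/2
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)   # risk needs SNP0 AND SNP1

for name, clf in [("RandomForest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("LogisticReg", LogisticRegression(max_iter=1000))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())  # RF usually scores higher
```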
A new method for calculating differential distributions directly in Mellin space
NASA Astrophysics Data System (ADS)
Mitov, Alexander
2006-12-01
We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, like Laporta. General features of the method are: (1) faster reduction, (2) smaller number of master integrals compared to the usual z-space approach and (3) the master integrals satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables like the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.
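As a toy illustration of what "keeping the full dependence on the Mellin variable" means (the integrand is an assumption chosen for simplicity; this is not the paper's IBP machinery), one can compute a Mellin moment with the moment variable N left symbolic:

```python
# N-th Mellin moment f(N) = \int_0^1 z^(N-1) f(z) dz of a test function,
# evaluated with N kept as an abstract symbol.
import sympy as sp

z = sp.symbols('z', positive=True)
N = sp.symbols('N', positive=True)   # abstract Mellin variable
f = z * (1 - z)                      # stand-in for a z-space distribution
moment = sp.integrate(z**(N - 1) * f, (z, 0, 1))
print(sp.simplify(moment))           # 1/((N + 1)*(N + 2)), valid for all N
```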
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
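A generic hedged sketch of iterative design reduction (the two-parameter exponential stand-in model and the D-optimality criterion are assumptions for illustration; the paper's objective is tailored to the quantitative MT signal model): start from a dense sampling and greedily drop the point whose removal least degrades the Fisher information determinant.

```python
import numpy as np

def jacobian(x, a=1.0, b=2.0, eps=1e-6):
    """Numeric Jacobian of a stand-in signal model s(x; a, b) = a*exp(-b*x)."""
    s = lambda a_, b_: a_ * np.exp(-b_ * x)
    return np.column_stack([(s(a + eps, b) - s(a, b)) / eps,
                            (s(a, b + eps) - s(a, b)) / eps])

x = np.linspace(0.05, 2.0, 30)   # dense initial sampling (Z-spectrum-like)
target = 8                       # desired number of measurements
while len(x) > target:
    # det(J^T J) after deleting each candidate point; keep the best reduced design
    dets = [np.linalg.det(jacobian(np.delete(x, i)).T @ jacobian(np.delete(x, i)))
            for i in range(len(x))]
    x = np.delete(x, int(np.argmax(dets)))  # drop the least informative point
print(np.round(x, 3))
```

Greedy backward elimination of this kind naturally avoids clustered or repeated measurement points, which mirrors the pragmatic constraint highlighted in the abstract.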
Final Report. Analysis and Reduction of Complex Networks Under Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef M.; Coles, T.; Spantini, A.
2013-09-30
The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure, e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in the large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties, e.g., whether a link is present in a network, and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality "slow manifolds" on which a multiscale dynamical system evolves. Introducing uncertainty in this context raised fundamentally new issues, e.g., how is the topology of slow manifolds transformed by parametric uncertainty? How should dynamical models be constructed on these uncertain manifolds? To address these questions, we used stochastic spectral polynomial chaos (PC) methods to reformulate uncertain network models and analyzed them using CSP in probabilistic terms. Finding uncertain manifolds involved the solution of stochastic eigenvalue problems, facilitated by projection onto PC bases. These problems motivated us to explore the spectral properties of stochastic Galerkin systems. We also introduced novel methods for rank reduction in stochastic eigensystems, i.e., transformations of an uncertain dynamical system that lead to lower storage and solution complexity. These technical accomplishments are detailed below. This report focuses on the MIT portion of the joint project.
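A minimal sketch of the stochastic eigenvalue idea mentioned above (the 2x2 matrix, its perturbation, and the single Gaussian parameter are assumptions for illustration, not the project's network models): expand an uncertain dominant eigenvalue in a probabilists' Hermite polynomial chaos basis via Gauss-Hermite quadrature.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

A0 = np.array([[2.0, 1.0], [1.0, 3.0]])   # nominal system matrix
A1 = np.array([[0.3, 0.0], [0.0, 0.5]])   # uncertain perturbation
lam = lambda xi: np.linalg.eigvalsh(A0 + xi * A1)[-1]  # dominant eigenvalue

nodes, weights = He.hermegauss(20)          # quadrature for weight exp(-x^2/2)
norm = np.sqrt(2 * np.pi)
lam_vals = np.array([lam(x) for x in nodes])
coeffs = []
for n in range(4):                          # first PC coefficients c_n
    basis = He.hermeval(nodes, [0] * n + [1])          # He_n at the nodes
    coeffs.append(np.sum(weights * lam_vals * basis) / (norm * math.factorial(n)))
print(np.round(coeffs, 4))  # c_0 = mean; higher c_n capture the variability
```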
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes that are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
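A hedged sketch of the first criterion (random search is a crude stand-in for the paper's optimization; the four synthetic class means and shared identity covariance are assumptions): score a projection W by the geometric mean of pairwise KL divergences between the projected equal-covariance Gaussian classes.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 10, 2                                  # original and reduced dimensions
means = rng.normal(size=(4, d))               # 4 synthetic class means
Sigma = np.eye(d)                             # shared covariance

def geo_mean_kl(W):
    Sinv = np.linalg.inv(W.T @ Sigma @ W)
    # KL between equal-covariance Gaussians: 0.5 * dm^T S^-1 dm
    kls = [0.5 * dm @ Sinv @ dm
           for i in range(len(means)) for j in range(i + 1, len(means))
           for dm in [W.T @ (means[i] - means[j])]]
    return np.exp(np.mean(np.log(kls)))       # geometric mean of divergences

best_W = max((rng.normal(size=(d, k)) for _ in range(2000)), key=geo_mean_kl)
```

The geometric mean penalizes any single near-zero pairwise divergence far more than the arithmetic mean does, which is exactly why it counteracts the class-merging problem described above.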
Longin, Rita; Grasse, Marina; Aspalter, Rosa; Waldherr, Karin
2012-01-01
Preliminary results indicated effectiveness of the online weight reduction program KiloCoach. The current study presents a large collection of user data and compares KiloCoach with other evaluated commercial weight loss programs. A further aim was to identify potential factors influencing the effectiveness of internet weight loss programs. 4,310 data sets of KiloCoach users were available, of which 3,150 were suitable for further analysis; 946 program users were considered completers (at least 60 days of continuous protocol). For comparison with other programs, different subsamples were drawn that matched the inclusion criteria of the reference studies. On average, overweight and obese KiloCoach completers lost 4.5% of initial body weight. KiloCoach was as effective as the commercial program Weight Watchers® after 1 year (6.4% vs. 5.3% weight loss; p = 0.11) and 2 years (5.1% vs. 3.2% weight loss; p = 0.15). KiloCoach proved to be more effective than other online programs (Viktklubb, eDiets.com) as well as an in-person behavioral program, but less effective than Vtrim®, an online behavioral program providing intensive support. In comparison to the reference programs, KiloCoach proved to be effective for weight reduction. The effect of online weight reduction programs seems to depend on the methods and features applied.
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.
1974-01-01
General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.
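As a brief illustration of such a linear transformation (the dataset and the choice m = 2 are assumptions; the report itself treats general classes of transformations), Fisher's LDA is the canonical linear reduction that keeps classes separable in the m-dimensional feature space:

```python
# Reduce a 64-dimensional feature space to m = 2 dimensions
# while preserving class separability.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)
Z = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
print(Z.shape)  # (1797, 2)
```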
Dixon-Gordon, Katherine L; Chapman, Alexander L; Lovasz, Nathalie; Walters, Kris
2011-10-01
Borderline personality disorder (BPD) is associated with poor social problem solving and problems with emotion regulation. In this study, the social problem-solving performance of undergraduates with high (n = 26), mid (n = 32), or low (n = 29) levels of BPD features was assessed with the Social Problem-Solving Inventory-Revised and using the means-ends problem-solving procedure before and after a social rejection stressor. The high-BP group, but not the low-BP group, showed a significant reduction in relevant solutions to social problems and more inappropriate solutions following the negative emotion induction. Increases in self-reported negative emotions during the emotion induction mediated the relationship between BP features and reductions in social problem-solving performance. In addition, the high-BP group demonstrated trait deficits in social problem solving on the Social Problem-Solving Inventory-Revised. These findings suggest that future research must examine social problem solving under differing emotional conditions, and that clinical interventions to improve social problem solving among persons with BP features should focus on responses to emotional contexts.
NASA Astrophysics Data System (ADS)
Val'kov, V. V.; Mitskan, V. A.; Dzebisashvili, D. M.; Barabanov, A. F.
2018-02-01
It is shown that for the three-band Emery p-d-model, which reflects the real structure of the CuO2 plane of high-temperature superconductors in the regime of strong electron correlations, it is possible to carry out a sequence of reductions to effective models that reproduce the low-energy features of the elementary excitation spectrum and reveal the spin-polaron nature of the Fermi quasiparticles. The first reduction leads to the spin-fermion model, in which the subsystem of spin moments, coupled by the exchange interaction and localized on copper ions, strongly interacts with oxygen holes. The second reduction deals with the transformation from the spin-fermion model to the φ-d-exchange model. An important feature of this transformation is the large energy of the φ-d-exchange coupling, which leads to the formation of spin polarons. The use of this fact allows us to carry out the third reduction, resulting in the t̃-J̃*-I model. Its distinctive feature is the importance of spin-correlated hops as compared to the role of such processes in the commonly used t-J*-model derived from the Hubbard model. Based on a comparative analysis of the spectrum of Fermi excitations calculated for the obtained effective models of the CuO2 plane of high-temperature superconductors, the important role of the usually ignored long-range spin-correlated hops is determined.
Hemmateenejad, Bahram; Yazdani, Mahdieh
2009-02-16
Steroids are widely distributed in nature and are found in plants, animals, and fungi in abundance. A data set consisting of a diverse set of steroids has been used to develop quantitative structure-electrochemistry relationship (QSER) models for their half-wave reduction potential. Modeling was established by means of multiple linear regression (MLR) and principal component regression (PCR) analyses. In MLR analysis, the QSPR models were constructed either by first grouping descriptors and then stepwise selection of variables from each group (MLR1), or by stepwise selection of predictor variables from the pool of all calculated descriptors (MLR2). A similar procedure was used in PCR analysis, so that the principal components (or features) were extracted either from the different groups of descriptors (PCR1) or from the entire set of descriptors (PCR2). The resulting models were evaluated using cross-validation, chance correlation, application to the prediction of the reduction potential of test samples, and assessment of the applicability domain. Both MLR approaches produced accurate results; however, the QSPR model found by MLR1 was statistically more significant. The PCR1 approach produced a model as accurate as the MLR approaches, whereas less accurate results were obtained with PCR2. Overall, the correlation coefficients of cross-validation and prediction for the QSPR models resulting from the MLR1, MLR2, and PCR1 approaches were higher than 90%, which shows the high ability of the models to predict the reduction potential of the studied steroids.
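A schematic analogue of the two modeling routes (the synthetic descriptors and toy response are assumptions, and SequentialFeatureSelector stands in for classical stepwise selection): stepwise MLR via forward feature selection versus PCR as a PCA-plus-regression pipeline.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 40))                           # 40 hypothetical descriptors
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=60)  # toy reduction potentials

mlr = make_pipeline(                                    # stepwise-style MLR
    SequentialFeatureSelector(LinearRegression(), n_features_to_select=5),
    LinearRegression())
pcr = make_pipeline(PCA(n_components=5), LinearRegression())  # PCR
for name, model in [("MLR-stepwise", mlr), ("PCR", pcr)]:
    print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```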
Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry
NASA Astrophysics Data System (ADS)
Kersten, J.; Rodehorst, V.
2016-06-01
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO are available, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies that further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
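A simple hedged sketch of the feature bucketing strategy named above (grid size and per-cell cap are assumptions): divide the image into a grid and keep only the strongest keypoints per cell, so features are spread evenly across the frame before relative orientation estimation.

```python
import numpy as np

def bucket(keypoints, scores, img_w, img_h, nx=8, ny=6, per_cell=5):
    """keypoints: (N, 2) pixel coordinates; scores: (N,) detector responses.
    Returns indices of the retained keypoints."""
    kept = []
    cx = (keypoints[:, 0] * nx / img_w).astype(int).clip(0, nx - 1)
    cy = (keypoints[:, 1] * ny / img_h).astype(int).clip(0, ny - 1)
    for cell in range(nx * ny):
        idx = np.flatnonzero(cx + nx * cy == cell)
        kept.extend(idx[np.argsort(scores[idx])[::-1][:per_cell]])  # strongest first
    return np.array(kept)
```

Even spatial coverage reduces the bias that dense feature clusters (e.g., on textured foreground objects) introduce into the estimated relative pose.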
Maskless micro-ion-beam reduction lithography system
Leung, Ka-Ngo; Barletta, William A.; Patterson, David O.; Gough, Richard A.
2005-05-03
A maskless micro-ion-beam reduction lithography system is a system for projecting patterns onto a resist layer on a wafer with feature size down to below 100 nm. The MMRL system operates without a stencil mask. The patterns are generated by switching beamlets on and off from a two electrode blanking system or pattern generator. The pattern generator controllably extracts the beamlet pattern from an ion source and is followed by a beam reduction and acceleration column.
Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT
Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi
2014-01-01
Rationale and Objectives Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354
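A compact hedged stand-in for the cost-weighting and bagging step described above (the synthetic features, class prevalence, and the use of a balanced-weight random forest are assumptions; the paper's exact configuration is not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 6))                 # placeholder intensity + texture features
y = (X[:, 0] + rng.normal(size=2000) > 2).astype(int)  # rare "food residue" class

# Bagged trees with per-bootstrap balanced class weights: each tree's
# samples are reweighted so the rare class is not ignored.
clf = RandomForestClassifier(n_estimators=100,
                             class_weight="balanced_subsample",
                             random_state=0).fit(X, y)
```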
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2018-02-01
To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable the detection of changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
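A hedged sketch of one projection-pursuit step with a basic (1+λ) evolution strategy (a simple stand-in for the paper's advanced ES; the Gaussian toy DSFs and the Fisher-ratio fitness are assumptions): find a single projection vector that maximises the separation of projected DSFs between healthy and damaged states.

```python
import numpy as np

rng = np.random.default_rng(4)
healthy = rng.normal(0.0, 1.0, size=(200, 12))   # toy autocorrelation DSFs
damaged = rng.normal(0.3, 1.2, size=(200, 12))

def fitness(v):
    h, d = healthy @ v, damaged @ v
    return (h.mean() - d.mean()) ** 2 / (h.var() + d.var())  # Fisher-like ratio

v, sigma = rng.normal(size=12), 0.5
for _ in range(300):                              # (1 + lambda)-ES iterations
    children = v + sigma * rng.normal(size=(10, 12))
    best = max(children, key=fitness)
    if fitness(best) > fitness(v):
        v = best                                  # accept improving child
    sigma *= 0.99                                 # slowly shrink the step size
v /= np.linalg.norm(v)                            # final projection vector
```

In sequential projection pursuit, subsequent vectors would be optimised on the residual information after removing the contribution of the vectors already found.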
Cradock, Kevin A; ÓLaighin, Gearóid; Finucane, Francis M; Gainforth, Heather L; Quinlan, Leo R; Ginis, Kathleen A Martin
2017-02-08
Changing diet and physical activity behaviour is one of the cornerstones of type 2 diabetes treatment, but changing behaviour is challenging. The objective of this study was to identify behaviour change techniques (BCTs) and intervention features of dietary and physical activity interventions for patients with type 2 diabetes that are associated with changes in HbA1c and body weight. We performed a systematic review of papers published between 1975-2015 describing randomised controlled trials (RCTs) that focused exclusively on both diet and physical activity. The constituent BCTs, intervention features and methodological rigour of these interventions were evaluated. Changes in HbA1c and body weight were meta-analysed and examined in relation to use of BCTs. Thirteen RCTs were identified. Meta-analyses revealed reductions in HbA1c at 3, 6, 12 and 24 months of -1.11% (12 mmol/mol), -0.67% (7 mmol/mol), -0.28% (3 mmol/mol) and -0.26% (2 mmol/mol), with an overall reduction of -0.53% (6 mmol/mol [95% CI -0.74 to -0.32, P < 0.00001]) in intervention groups compared to control groups. Meta-analyses also showed reductions in body weight of -2.7 kg, -3.64 kg, -3.77 kg and -3.18 kg at 3, 6, 12 and 24 months; the overall reduction was -3.73 kg (95% CI -6.09 to -1.37 kg, P = 0.002). Four of the 46 BCTs identified were associated with a >0.3% reduction in HbA1c: 'instruction on how to perform a behaviour', 'behavioural practice/rehearsal', 'demonstration of the behaviour' and 'action planning', as were the intervention features 'supervised physical activity', 'group sessions', 'contact with an exercise physiologist', 'contact with an exercise physiologist and a dietitian', 'baseline HbA1c >8%' and interventions of greater frequency and intensity. Diet and physical activity interventions achieved clinically significant reductions in HbA1c at three and six months, but not at 12 and 24 months. The specific BCTs and intervention features identified may inform more effective structured lifestyle intervention treatment strategies for type 2 diabetes.
Lai, Jianping; Guo, Shaojun
2017-12-01
Nanocatalysts with high platinum (Pt) utilization efficiency are attracting extensive attention for the oxygen reduction reaction (ORR) conducted at the cathode of fuel cells. Ultrathin Pt-based multimetallic nanostructures show clear advantages in accelerating the sluggish cathodic ORR due to their ultrahigh Pt utilization efficiency. This review focuses on recent important developments in the use of wet-chemistry techniques for making and tuning multimetallic nanostructures with high Pt utilization efficiency to boost ORR activity and durability. First, new synthetic methods for multimetallic core/shell nanoparticles with ultrathin shells for achieving highly efficient ORR catalysts are reviewed. To obtain better ORR activity and stability, multimetallic nanowires and nanosheets with well-defined structures and surfaces are then highlighted. Furthermore, ultrathin Pt-based multimetallic nanoframes that feature 3D molecularly accessible surfaces for more efficient ORR catalysis are discussed. Finally, remaining challenges and future outlooks for this promising research field are provided. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automated detection of lung nodules with three-dimensional convolutional neural networks
NASA Astrophysics Data System (ADS)
Pérez, Gustavo; Arbeláez, Pablo
2017-11-01
Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of pre-processing a patient's CT with filtering and lung extraction from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of the extracted candidates for false-positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective, producing precise candidates with a recall of 99.6%. In addition, the false-positive reduction stage successfully classifies candidates and increases precision by a factor of 7.000.
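A minimal hedged PyTorch sketch of such a false-positive reduction network (the layer sizes and the 32^3 candidate-cube input are assumptions for illustration, not the paper's architecture): a small 3D CNN that classifies candidate volumes as nodule versus non-nodule.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2))                       # 16^3 -> 8^3
        self.classifier = nn.Linear(32 * 8 * 8 * 8, 2)  # nodule vs. non-nodule

    def forward(self, x):                          # x: (batch, 1, 32, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = Small3DCNN()(torch.randn(4, 1, 32, 32, 32))   # -> shape (4, 2)
```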
NASA Astrophysics Data System (ADS)
Khudiyev, Tural; Dogan, Tamer; Bayindir, Mehmet
2014-04-01
Biological systems serve as fundamental sources of inspiration for the development of artificially colored devices, and their investigation provides a great number of photonic design opportunities. While several successful biomimetic designs have been detailed in the literature, conventional fabrication techniques nonetheless remain inferior to their natural counterparts in complexity, ease of production and material economy. Here, we investigate the iridescent neck feathers of Anas platyrhynchos drakes, show that they feature an unusual arrangement of two-dimensional (2D) photonic crystals and further exhibit a superhydrophobic surface, and mimic this multifunctional structure using a nanostructure composite fabricated by a recently developed top-down iterative size reduction method, which avoids the above-mentioned fabrication challenges, provides macroscale control and enhances hydrophobicity through the surface structure. Our 2D solid core photonic crystal fibres strongly resemble drake neck plumage in structure and fully polymeric material composition, and can be produced in wide array of colors by minor alterations during the size reduction process.