Sample records for feature reduction technique

  1. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. A supervised band reduction technique based on the Fukunaga-Koontz Transform (FKT) can meet this requirement. FKT achieves feature selection by transforming the data into a new space where the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors that are most relevant to the target class, the dimension of hyperspectral data can be reduced, which presents significant advantages for near real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. We therefore propose a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets, and the results are reported.
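
    The linear FKT step described here comes down to two eigendecompositions: whiten the sum of the class covariances, then eigendecompose the whitened target covariance and keep its leading eigenvectors. A minimal sketch under that reading, with synthetic data standing in for hyperspectral pixels (sizes and names are illustrative; the kernel variant would replace the covariances with centered Gram matrices):

    ```python
    # Hedged sketch of linear FKT-based band reduction; not the authors' code.
    import numpy as np

    def fkt_projection(X_target, X_background, k):
        """Return a (bands x k) projection favoring the target class."""
        S1 = np.cov(X_target, rowvar=False)
        S2 = np.cov(X_background, rowvar=False)
        # Whiten the summed covariance: P.T @ (S1 + S2) @ P = I
        vals, vecs = np.linalg.eigh(S1 + S2)
        P = vecs @ np.diag(vals ** -0.5)
        # In the whitened space both classes share eigenvectors whose eigenvalues
        # sum to one, so leading target eigenvectors carry the least clutter energy.
        t_vals, t_vecs = np.linalg.eigh(P.T @ S1 @ P)
        order = np.argsort(t_vals)[::-1]           # largest target eigenvalues first
        return P @ t_vecs[:, order[:k]]

    rng = np.random.default_rng(0)
    target = rng.normal(0.0, 1.0, (200, 50))       # 200 target pixels, 50 bands
    background = rng.normal(0.2, 2.0, (800, 50))   # 800 clutter pixels
    W = fkt_projection(target, background, k=5)
    reduced = target @ W                           # 200 x 5 target-oriented features
    ```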

  2. Integrating dimension reduction and out-of-sample extension in automated classification of ex vivo human patellar cartilage on phase contrast X-ray computed tomography.

    PubMed

    Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Wismüller, Axel

    2015-01-01

    Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns.
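
    The central comparison (a 2-D representation from dimension reduction versus one from feature selection, each scored by the AUC of an SVR-based classifier) can be sketched with scikit-learn. Synthetic data stands in for the SIM-derived features; the estimator choices are illustrative, not the paper's exact configuration:

    ```python
    # Hedged sketch: PCA (dimension reduction) vs. mutual-information ranking
    # (feature selection), both fit on training data only.
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    X, y = make_classification(n_samples=400, n_features=9, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, reducer in [("PCA", PCA(n_components=2)),
                          ("MI selection", SelectKBest(mutual_info_classif, k=2))]:
        Z_tr = reducer.fit_transform(X_tr, y_tr)   # fit on training data only
        Z_te = reducer.transform(X_te)             # apply to held-out data
        score = SVR().fit(Z_tr, y_tr).predict(Z_te)
        print(name, "AUC = %.3f" % roc_auc_score(y_te, score))
    ```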

  3. Integrating Dimension Reduction and Out-of-Sample Extension in Automated Classification of Ex Vivo Human Patellar Cartilage on Phase Contrast X-Ray Computed Tomography

    PubMed Central

    Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Diemoz, Paul C.; Wismüller, Axel

    2015-01-01

    Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns. PMID:25710875

  4. Prediction of cause of death from forensic autopsy reports using text classification techniques: A comparative study.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa

    2018-07-01

    Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free-text forensic autopsy reports by comparing various schemes for feature extraction, term weighting or feature value representation, text classification, and feature reduction. For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four performance measures, i.e., overall accuracy, macro-precision, macro-recall, and macro-F-measure. From the experiments, it was found that unigram features obtained the highest performance compared to bigram, trigram, and hybrid-gram features. Furthermore, among feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results when compared with binary frequency and normalized term frequency with inverse document frequency. Furthermore, the chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among text classification algorithms, the support vector machine classifier outperformed random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. Our results and comparisons hold practical importance and serve as references for future works. Moreover, the reported comparisons provide a state-of-the-art baseline against which future automated text classification proposals can be evaluated. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
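
    The winning combination reported here (unigram features, tf-idf weighting, chi-square feature reduction, and a linear SVM) maps directly onto a scikit-learn pipeline. A toy corpus stands in for the autopsy reports; the labels and k are illustrative:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    reports = ["blunt force trauma to head", "myocardial infarction found",
               "asphyxia by strangulation", "massive myocardial scarring"]
    causes = ["trauma", "cardiac", "asphyxia", "cardiac"]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 1)),   # unigram features, tf-idf weights
        SelectKBest(chi2, k=5),                # chi-square feature reduction
        LinearSVC())                           # support vector machine
    model.fit(reports, causes)
    print(model.predict(["old myocardial infarction"]))
    ```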

  5. Hypergraph Based Feature Selection Technique for Medical Diagnosis.

    PubMed

    Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar

    2016-11-01

    The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the information contained in multidimensional datasets plays a significant role in exploiting the full benefit they provide. The presence of a large number of features in high-dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection techniques are commonly used to build robust machine learning models by selecting a subset of relevant features that projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K-Helly property of hypergraph representation, is designed to identify the optimal feature subset, or reduct, for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository prove the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy and time complexity. The performance of RSKHT was validated using the WEKA tool, which shows that RSKHT is computationally attractive and flexible over massive datasets.

  6. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. Dimensionality reduction based on this complementary eigenvector analysis can be described for two classes, the desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting the few eigenvectors that are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT-based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.

  7. The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, Jason L.; Manic, Milos

    2010-05-01

    This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.

  8. Assessing clutter reduction in parallel coordinates using image processing techniques

    NASA Astrophysics Data System (ADS)

    Alhamaydh, Heba; Alzoubi, Hussein; Almasaeid, Hisham

    2018-01-01

    Information visualization has emerged as an important research field for multidimensional data and correlation analysis in recent years. Parallel coordinates (PCs) are one of the popular techniques for visualizing high-dimensional data. A problem with the PC technique is that it suffers from crowding, a clutter that hides important data and obfuscates the information. Earlier research has been conducted to reduce clutter without loss of data content. We introduce the use of image processing techniques as an approach for assessing the performance of clutter reduction techniques in PCs. We use histogram analysis as our first measure, where the mean feature of the color histograms of the possible alternative orderings of coordinates for the PC images is calculated and compared. The second measure is the contrast feature extracted from the texture of PC images based on gray-level co-occurrence matrices. The results show that the best PC image is the one that has the minimal mean value of the color histogram feature and the maximal contrast value of the texture feature. In addition to its simplicity, the proposed assessment method has the advantage of objectively assessing alternative orderings of PC visualization.
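
    Both assessment measures are a few lines with numpy and scikit-image (graycomatrix/graycoprops in scikit-image >= 0.19; earlier releases spell them greycomatrix/greycoprops). This grayscale simplification uses a random image as a stand-in for a rendered parallel-coordinates plot:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(1)
    pc_image = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # stand-in PC plot

    # Measure 1: mean of the image histogram
    hist, _ = np.histogram(pc_image, bins=256, range=(0, 256))
    hist_mean = (np.arange(256) * hist).sum() / hist.sum()

    # Measure 2: GLCM contrast of the image texture
    glcm = graycomatrix(pc_image, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]

    # Per the paper: prefer the coordinate ordering whose rendered image
    # minimizes the histogram mean and maximizes the texture contrast.
    print(hist_mean, contrast)
    ```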

  9. AHIMSA - Ad hoc histogram information measure sensing algorithm for feature selection in the context of histogram inspired clustering techniques

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
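
    One hedged reading of the hills-and-valleys idea: rank each feature by how multimodal its one-dimensional histogram is, since multiple hills suggest separable clusters along that feature. scipy's peak finder stands in here for the paper's own significance measure:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def histogram_modality(x, bins=32):
        hist, _ = np.histogram(x, bins=bins)
        peaks, _ = find_peaks(hist, prominence=hist.max() * 0.1)
        return len(peaks)                        # number of "hills"

    rng = np.random.default_rng(2)
    bimodal = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
    unimodal = rng.normal(0, 1, 1000)
    X = np.column_stack([bimodal, unimodal, rng.uniform(-1, 1, 1000)])

    scores = [histogram_modality(X[:, j]) for j in range(X.shape[1])]
    keep = np.argsort(scores)[::-1]              # most structured features first
    print(scores, keep)
    ```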

  10. Effects of band selection on endmember extraction for forestry applications

    NASA Astrophysics Data System (ADS)

    Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis

    2014-10-01

    In spectral unmixing theory, data reduction techniques play an important role, as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches for dimensionality reduction. Feature extraction techniques reduce the dimensionality of the hyperspectral data by applying transforms to it. Feature selection techniques retain the physical meaning of the data by selecting a set of bands from the input hyperspectral dataset which mainly contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of the existing state-of-the-art dimensionality reduction methods set criteria on the spectral information derived from the whole wavelength range in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with the data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations at specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc. can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc. are considered the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones. More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. The experiments comprise the application of well-known signal subspace estimation and endmember extraction methods to hyperspectral imagery of a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using selected bands.

  11. Computer-Aided Breast Cancer Diagnosis with Optimal Feature Sets: Reduction Rules and Optimization Techniques.

    PubMed

    Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo

    2017-01-01

    This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search identifies the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.

  12. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which makes LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
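
    The best-performing scheme (wrapper feature selection around a regression model, scored by R² and MAE) can be approximated with scikit-learn's generic sequential wrapper; this is not the paper's exact WFS implementation, and synthetic data replaces the urban-noise measurements:

    ```python
    from sklearn.datasets import make_regression
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.metrics import mean_absolute_error, r2_score
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=200, n_features=15, n_informative=5,
                           noise=5.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Wrapper selection: greedily add the features that most help the regressor
    gpr = GaussianProcessRegressor()
    wrapper = SequentialFeatureSelector(gpr, n_features_to_select=5).fit(X_tr, y_tr)
    gpr.fit(wrapper.transform(X_tr), y_tr)
    pred = gpr.predict(wrapper.transform(X_te))
    print("R2 = %.2f, MAE = %.2f"
          % (r2_score(y_te, pred), mean_absolute_error(y_te, pred)))
    ```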

  13. Design, manufacturing and characterization of aero-elastically scaled wind turbine blades for testing active and passive load alleviation techniques within an ABL wind tunnel

    NASA Astrophysics Data System (ADS)

    Campagnolo, Filippo; Bottasso, Carlo L.; Bettini, Paolo

    2014-06-01

    In the research described in this paper, a scaled wind turbine model featuring individual pitch control (IPC) capabilities, and equipped with aero-elastically scaled blades featuring passive load reduction capabilities (bend-twist coupling, BTC), was constructed to investigate, by means of wind tunnel testing, the load alleviation potential of BTC and its synergy with active load reduction techniques. The paper mainly focuses on the design of the aero-elastic blades and their dynamic and static structural characterization. The experimental results highlight that the manufactured blades show the desired bend-twist coupling behavior and represent a first milestone toward their testing in the wind tunnel.

  14. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered high-curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organizing feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.

  15. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics

    NASA Astrophysics Data System (ADS)

    Wehmeyer, Christoph; Noé, Frank

    2018-06-01

    Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
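
    The idea fits in a few dozen lines: encode x(t) into a low-dimensional bottleneck and train the decoder to reconstruct x(t + tau), so the bottleneck is pushed toward slowly varying coordinates. A minimal PyTorch sketch on toy data; the architecture and hyperparameters are illustrative, not those of the paper:

    ```python
    import torch
    import torch.nn as nn

    tau, n_feat, n_latent = 5, 10, 2
    x = torch.randn(1000, n_feat)            # stand-in MD feature trajectory
    x_t, x_lag = x[:-tau], x[tau:]           # training pairs (x(t), x(t + tau))

    encoder = nn.Sequential(nn.Linear(n_feat, 32), nn.ELU(), nn.Linear(32, n_latent))
    decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ELU(), nn.Linear(32, n_feat))
    optim = torch.optim.Adam(list(encoder.parameters())
                             + list(decoder.parameters()), lr=1e-3)

    for epoch in range(200):
        optim.zero_grad()
        # Time-lagged reconstruction loss: predict the future from the bottleneck
        loss = nn.functional.mse_loss(decoder(encoder(x_t)), x_lag)
        loss.backward()
        optim.step()

    with torch.no_grad():
        slow_cvs = encoder(x)                # low-dimensional slow coordinates
    ```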

  16. Classification of small lesions on dynamic breast MRI: Integrating dimension reduction and out-of-sample extension into CADx methodology

    PubMed Central

    Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel

    2014-01-01

    Objective While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results Of the feature vectors investigated, the best performance was observed with Minkowski functional ’perimeter’ while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10) while comparable performance was achieved with exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). Conclusions The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
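
    The methodological point is easy to state in code: fit the dimension-reduction mapping on the training split only, then extend it to held-out data, so the test set never influences the learned embedding. KernelPCA stands in here for the paper's reduction algorithms (several of which need dedicated out-of-sample extension methods), and the data is synthetic:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.decomposition import KernelPCA
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    X, y = make_classification(n_samples=300, n_features=20, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    dr = KernelPCA(n_components=2, kernel="rbf").fit(X_tr)  # training data only
    score = SVR().fit(dr.transform(X_tr), y_tr).predict(dr.transform(X_te))
    print("AUC = %.3f" % roc_auc_score(y_te, score))        # out-of-sample extension
    ```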

  17. Drug-target interaction prediction using ensemble learning and dimensionality reduction.

    PubMed

    Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2017-10-01

    Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
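
    A loose sketch of the EnsemDT recipe as described above: draw random feature subspaces for diversity, reduce each subspace (PCA here, one of several options), train a base learner per subspace, and average the scores. The data and sizes are synthetic placeholders for the drug/target descriptors:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_classification(n_samples=500, n_features=40, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    n_models, scores = 25, np.zeros(len(y_te))
    for _ in range(n_models):
        cols = rng.choice(X.shape[1], size=20, replace=False)  # feature subspacing
        pca = PCA(n_components=5).fit(X_tr[:, cols])           # dimensionality reduction
        tree = DecisionTreeRegressor(max_depth=5).fit(
            pca.transform(X_tr[:, cols]), y_tr)
        scores += tree.predict(pca.transform(X_te[:, cols]))   # aggregate scores
    print("ensemble AUC = %.3f" % roc_auc_score(y_te, scores / n_models))
    ```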

  18. Exploring nonlinear feature space dimension reduction and data representation in breast Cadx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
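
    Both nonlinear mappings studied here are available off the shelf: scikit-learn's SpectralEmbedding implements Laplacian eigenmaps and TSNE implements t-SNE. Note that neither learns a parametric out-of-sample transform, which is why the embedded coordinates are fed to separate classifiers. A toy dataset stands in for the lesion features:

    ```python
    from sklearn.datasets import load_digits
    from sklearn.manifold import SpectralEmbedding, TSNE

    X, _ = load_digits(return_X_y=True)
    le_2d = SpectralEmbedding(n_components=2).fit_transform(X)   # Laplacian eigenmaps
    tsne_2d = TSNE(n_components=2, perplexity=30).fit_transform(X)
    print(le_2d.shape, tsne_2d.shape)
    ```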

  19. Automatic topic identification of health-related messages in online health community using text classification.

    PubMed

    Lu, Yingjie

    2013-01-01

    To facilitate patient involvement in online health communities and help patients obtain the informative and emotional support they need, a topic identification approach was proposed in this paper for automatically identifying the topics of health-related messages in an online health community, thus assisting patients in reaching the most relevant messages for their queries efficiently. A feature-based classification framework was presented for automatic topic identification in our study. We first collected messages related to some predefined topics in an online health community. Then we combined three different types of features, n-gram-based features, domain-specific features and sentiment features, to build four feature sets for health-related text representation. Finally, three different text classification techniques, C4.5, Naïve Bayes and SVM, were adopted to evaluate our topic classification model. By comparing different feature sets and different classification techniques, we found that n-gram-based features, domain-specific features and sentiment features were all effective in distinguishing different types of health-related topics. In addition, a feature reduction technique based on information gain was also effective in improving the topic classification performance. In terms of classification techniques, SVM outperformed C4.5 and Naïve Bayes significantly. The experimental results demonstrated that the proposed approach could identify the topics of online health-related messages efficiently.

  20. New bandwidth selection criterion for Kernel PCA: approach to dimensionality reduction and classification problems.

    PubMed

    Thomas, Minta; De Brabanter, Kris; De Moor, Bart

    2014-05-10

    DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main inputs to these class predictors are high-dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics, and several prediction tools are available based on these techniques. Studies show that a well-tuned kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We also propose a new prediction model with a well-tuned KPCA and a Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and the Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier that predicts the classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
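
    The paper's bandwidth criterion is density-based; as a rougher stand-in, this sketch tunes the KPCA kernel bandwidth (gamma) by cross-validated AUC of the downstream classifier. scikit-learn has no LS-SVM, so ridge classification serves as an approximate substitute, and the data is synthetic rather than microarray:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import RidgeClassifier
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    X, y = make_classification(n_samples=150, n_features=500, n_informative=20,
                               random_state=0)          # many genes, few samples
    pipe = Pipeline([("kpca", KernelPCA(n_components=10, kernel="rbf")),
                     ("clf", RidgeClassifier())])
    search = GridSearchCV(pipe, {"kpca__gamma": [1e-4, 1e-3, 1e-2, 1e-1]},
                          scoring="roc_auc", cv=5).fit(X, y)
    print(search.best_params_, "AUC = %.3f" % search.best_score_)
    ```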

  1. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, therefore preserving the peaks and valleys found in typical spectra. When compared to the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as the maximum likelihood method.
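
    The reduction itself is a multilevel discrete wavelet transform of each pixel spectrum, keeping the low-frequency approximation coefficients as the compressed representation so broad peaks and valleys survive. A sketch with PyWavelets on a synthetic spectrum (the wavelet choice and level are illustrative):

    ```python
    import numpy as np
    import pywt

    n_bands = 224                                  # e.g. an AVIRIS-like spectrum
    rng = np.random.default_rng(0)
    spectrum = np.sin(np.linspace(0, 6, n_bands)) + 0.05 * rng.standard_normal(n_bands)

    coeffs = pywt.wavedec(spectrum, "db4", level=3)
    reduced = coeffs[0]                            # low-frequency approximation
    print(len(spectrum), "->", len(reduced), "coefficients")
    ```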

  2. Multiview Locally Linear Embedding for Effective Medical Image Retrieval

    PubMed Central

    Shen, Hualei; Tao, Dacheng; Ma, Dianfu

    2013-01-01

    Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histogram play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277

  3. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high performance results include AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW selected features compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497

  4. The correlation study of parallel feature extractor and noise reduction approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria

    2015-05-15

    This paper presents a literature review of the variety of techniques for developing a parallel feature extractor and their correlation with noise reduction approaches for low-light-intensity images. Low-light-intensity images are typically dark and low in contrast. Without proper handling techniques, such images regularly lead to misperception of objects and textures and to an inability to segment them. The resulting visual illusions regularly lead to disorientation, user fatigue, and poor detection and classification performance for both humans and computer algorithms. Noise reduction (NR) is therefore an essential step before other image processing steps such as edge detection, image segmentation, image compression, etc. A parallel feature extractor (PFE), meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract the features. Working on low-light-intensity images poses challenges for the PFE and makes it depend closely on the quality of its preprocessing steps. Some papers have suggested many well-established NR and PFE strategies; however, only a few resources have suggested or mentioned the correlation between them. This paper reviews the best NR and PFE approaches with a detailed explanation of the suggested correlation. This finding may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches and algorithms, we present a correlation study between NR and PFE that can be useful for the development and enhancement of existing PFEs.

  5. A machine learning heuristic to identify biologically relevant and minimal biomarker panels from omics data

    PubMed Central

    2015-01-01

    Background Investigations into novel biomarkers using omics techniques generate large amounts of data. Due to their size and numbers of attributes, these data are suitable for analysis with machine learning methods. A key component of typical machine learning pipelines for omics data is feature selection, which is used to reduce the raw high-dimensional data into a tractable number of features. Feature selection needs to balance the objective of using as few features as possible, while maintaining high predictive power. This balance is crucial when the goal of data analysis is the identification of highly accurate but small panels of biomarkers with potential clinical utility. In this paper we propose a heuristic for the selection of very small feature subsets, via an iterative feature elimination process that is guided by rule-based machine learning, called RGIFE (Rule-guided Iterative Feature Elimination). We use this heuristic to identify putative biomarkers of osteoarthritis (OA), articular cartilage degradation and synovial inflammation, using both proteomic and transcriptomic datasets. Results and discussion Our RGIFE heuristic increased the classification accuracies achieved on all datasets compared with using no feature selection, and performed well in a comparison with other feature selection methods. Using this method the datasets were reduced to a smaller number of genes or proteins, including those known to be relevant to OA, cartilage degradation and joint inflammation. The results have shown the RGIFE feature reduction method to be suitable for analysing both proteomic and transcriptomic data. Methods that generate large ‘omics’ datasets are increasingly being used in the area of rheumatology. Conclusions Feature reduction methods are advantageous for the analysis of omics data in the field of rheumatology, as the applications of such techniques are likely to result in improvements in diagnosis, treatment and drug discovery. PMID:25923811
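
    RGIFE itself is a bespoke rule-guided heuristic; as a generic analogue of iterative feature elimination for small biomarker panels, scikit-learn's RFE around a random forest gives the flavor. Synthetic data mimics the many-probes, few-samples shape of omics studies:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE

    X, y = make_classification(n_samples=120, n_features=1000, n_informative=10,
                               random_state=0)      # many probes, few samples
    panel = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=10, step=0.2).fit(X, y)  # drop 20% per round
    print("selected feature indices:", list(panel.get_support(indices=True)))
    ```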

  6. Rotation, scale, and translation invariant pattern recognition using feature extraction

    NASA Astrophysics Data System (ADS)

    Prevost, Donald; Doucet, Michel; Bergeron, Alain; Veilleux, Luc; Chevrette, Paul C.; Gingras, Denis J.

    1997-03-01

    A rotation, scale and translation invariant pattern recognition technique is proposed. It is based on Fourier-Mellin Descriptors (FMD). Each FMD is taken as an independent feature of the object, and a set of those features forms a signature. FMDs are naturally rotation invariant. Translation invariance is achieved through pre-processing. A proper normalization of the FMDs gives the scale invariance property. This approach offers the double advantage of providing invariant signatures of the objects, and a dramatic reduction of the amount of data to process. The compressed invariant feature signature is next presented to a multi-layered perceptron neural network. This final step provides some robustness to the classification of the signatures, enabling good recognition behavior under anamorphically scaled distortion. We also present an original feature extraction technique, adapted to optical calculation of the FMDs. A prototype optical set-up was built, and experimental results are presented.
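
    A hedged numeric sketch of an invariant signature in the Fourier-Mellin spirit: the FFT magnitude removes translation, a log-polar resampling turns rotation and scaling into shifts, and a second FFT magnitude removes those shifts (the optical implementation in the paper differs):

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def fourier_mellin_signature(img, n_r=64, n_theta=64):
        f = np.abs(np.fft.fftshift(np.fft.fft2(img)))      # translation invariant
        cy, cx = np.array(f.shape) / 2.0
        r = np.exp(np.linspace(0, np.log(min(cy, cx) - 1), n_r))  # log-spaced radii
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        rows = cy + r[:, None] * np.sin(theta)[None, :]
        cols = cx + r[:, None] * np.cos(theta)[None, :]
        logpolar = map_coordinates(f, [rows, cols], order=1)  # log-polar resample
        sig = np.abs(np.fft.fft2(logpolar))                # rotation/scale -> shifts
        return sig / sig.sum()                             # normalize energy

    rng = np.random.default_rng(0)
    print(fourier_mellin_signature(rng.random((128, 128))).shape)
    ```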

  7. Improved EEG Event Classification Using Differential Energy.

    PubMed

    Harati, A; Golmohammadi, M; Lopez, S; Obeid, I; Picone, J

    2015-12-01

    Feature extraction for automatic classification of EEG signals typically relies on time frequency representations of the signal. Techniques such as cepstral-based filter banks or wavelets are popular analysis techniques in many signal processing applications including EEG classification. In this paper, we present a comparison of a variety of approaches to estimating and postprocessing features. To further aid in discrimination of periodic signals from aperiodic signals, we add a differential energy term. We evaluate our approaches on the TUH EEG Corpus, which is the largest publicly available EEG corpus and an exceedingly challenging task due to the clinical nature of the data. We demonstrate that a variant of a standard filter bank-based approach, coupled with first and second derivatives, provides a substantial reduction in the overall error rate. The combination of differential energy and derivatives produces a 24 % absolute reduction in the error rate and improves our ability to discriminate between signal events and background noise. This relatively simple approach proves to be comparable to other popular feature extraction approaches such as wavelets, but is much more computationally efficient.

  8. Evaluation of stabilization techniques for ion implant processing

    NASA Astrophysics Data System (ADS)

    Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Narcy, Mark E.; Livesay, William R.

    1999-06-01

    With the integration of high current ion implant processing into volume CMOS manufacturing, the need for photoresist stabilization to achieve a stable ion implant process is critical. This study compares electron beam stabilization, a non-thermal process, with more traditional thermal stabilization techniques such as hot plate baking and vacuum oven processing. The electron beam processing is carried out in a flood exposure system with no active heating of the wafer. These stabilization techniques are applied to typical ion implant processes that might be found in a CMOS production process flow. The stabilization processes are applied to a 1.1-micrometer-thick PFI-38A i-line photoresist film prior to ion implant processing. Post-stabilization CD variation is detailed with respect to wall slope and feature integrity. SEM photographs detail the effects of the stabilization technique on photoresist features. The thermal stability of the photoresist is shown for different levels of stabilization and post-stabilization thermal cycling. Thermal flow stability of the photoresist is detailed via SEM photographs. A significant improvement in thermal stability is achieved with the electron beam process, such that photoresist features are stable to temperatures in excess of 200 degrees C. Ion implant processing parameters are evaluated and compared for the different stabilization methods. Ion implant system end-station chamber pressure is detailed as a function of ion implant process and stabilization condition. The ion implant process conditions are detailed for varying factors such as ion current, energy, and total dose. A reduction in the ion implant system's end-station chamber pressure is achieved with the electron beam stabilization process over the other techniques considered. This reduction in end-station chamber pressure is shown to provide a reduction in total process time for a given ion implant dose. Improvements in the ion implant process are detailed across several combinations of current and energy.

  9. Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.

    PubMed

    Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi

    2017-09-22

    DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. Current prediction methods are based mainly on machine learning, and their accuracy depends largely on the feature extraction method. Therefore, using an efficient feature representation method is important for enhancing classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, information theory features, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from the combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors combining SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over single features.

  10. Applying manifold learning techniques to the CAESAR database

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Patrick, James; Arnold, Gregory; Ferrara, Matthew

    2010-04-01

    Understanding and organizing data is the first step toward exploiting sensor phenomenology for dismount tracking. What image features are good for distinguishing people, and what measurements, or combinations of measurements, can be used to classify the dataset by demographics including gender, age, and race? A particular technique, Diffusion Maps, has demonstrated the potential to extract features that intuitively make sense [1]. We want to develop an understanding of this tool by validating existing results on the Civilian American and European Surface Anthropometry Resource (CAESAR) database. This database, provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International, is a rich dataset which includes 40 traditional anthropometric measurements of 4400 human subjects. If we could specifically measure the defining features for classification from this database, then the future question will be to determine a subset of these features that can be measured from imagery. This paper briefly describes the Diffusion Map technique, shows its potential for dimension reduction of the CAESAR database, and describes interesting problems to be further explored.
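
    Diffusion maps need only a Gaussian affinity matrix, a normalization into a Markov transition matrix, and its leading non-trivial eigenvectors. A numpy-only sketch; random vectors stand in for the CAESAR anthropometric measurements, and the bandwidth rule is a common heuristic rather than the authors' choice:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))               # 300 subjects, 40 measurements

    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-sq / np.median(sq))                        # Gaussian affinities

    # Symmetric normalization shares eigenvalues with the Markov matrix D^-1 W
    d = W.sum(1)
    M = W / np.sqrt(d[:, None] * d[None, :])
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(vals)[::-1]
    # Drop the trivial leading eigenvector; keep two diffusion coordinates
    psi = vecs[:, order[1:3]] / np.sqrt(d)[:, None]
    print(psi.shape)
    ```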

  11. Non-invasive detection of the freezing of gait in Parkinson's disease using spectral and wavelet features.

    PubMed

    Nazarzadeh, Kimia; Arjunan, Sridhar P; Kumar, Dinesh K; Das, Debi Prasad

    2016-08-01

    In this study, we analyzed the accelerometer data recorded during gait analysis of Parkinson's disease patients to detect freezing of gait (FOG) episodes. The proposed method filters the recordings to reduce noise in the leg movement signals and computes wavelet coefficients to detect FOG events. A publicly available FOG database was used, and the technique was evaluated using receiver operating characteristic (ROC) analysis. Results show that the wavelet feature discriminates FOG events from background activity better than the existing technique.

  12. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features otherwise faded with other algorithms.
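
    Steps (a), (d), and (e) have a compact expression with PyWavelets: estimate the noise level from the finest-scale detail coefficients, soft-threshold every detail subband, and invert the transform. The wavelet family, level, and universal threshold below are common defaults, not necessarily the paper's settings:

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    clean = np.outer(np.hanning(128), np.hanning(128))
    noisy = clean * (1 + 0.3 * rng.standard_normal(clean.shape))  # speckle-like noise

    coeffs = pywt.wavedec2(noisy, "db4", level=3)
    # (a) noise estimate: median absolute deviation of the finest diagonal detail
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
    # (d) threshold the detail subbands, keep the approximation untouched
    den_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in bands)
        for bands in coeffs[1:]]
    denoised = pywt.waverec2(den_coeffs, "db4")            # (e) inverse transform
    ```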

  13. Using machine learning techniques to automate sky survey catalog generation

    NASA Technical Reports Server (NTRS)

    Fayyad, Usama M.; Roden, J. C.; Doyle, R. J.; Weir, Nicholas; Djorgovski, S. G.

    1993-01-01

    We describe the application of machine classification techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Palomar Observatory Sky Survey provides comprehensive photographic coverage of the northern celestial hemisphere. The photographic plates are being digitized into images containing on the order of 10^7 galaxies and 10^8 stars. Since the size of this data set precludes manual analysis and classification of objects, our approach is to develop a software system which integrates independently developed techniques for image processing and data classification. Image processing routines are applied to identify and measure features of sky objects. Selected features are used to determine the classification of each object. GID3* and O-BTree, two inductive learning techniques, are used to automatically learn classification decision trees from examples. We describe the techniques used, the details of our specific application, and the initial encouraging results which indicate that our approach is well-suited to the problem. The benefits of the approach are increased data reduction throughput, consistency of classification, and the automated derivation of classification rules that will form an objective, examinable basis for classifying sky objects. Furthermore, astronomers will be freed from the tedium of an intensely visual task to pursue more challenging analysis and interpretation problems given automatically cataloged data.

  14. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

    This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to classify between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm allows the extraction of knowledge about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
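
    The wrapper idea, scoring candidate feature subsets by the classifier's own cross-validated performance, can be sketched with scikit-learn's sequential selector wrapped around a small neural model. The data, labels, and subset size below are placeholders, not the study's electrophysiological features.

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)
    X = rng.standard_normal((120, 12))   # 12 hypothetical cardiac features
    y = rng.integers(0, 2, size=120)     # trained vs sedentary (stand-in labels)

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    wrapper = SequentialFeatureSelector(clf, n_features_to_select=4, cv=3)
    wrapper.fit(X, y)   # each candidate subset is scored by the classifier itself
    print("selected feature indices:", np.flatnonzero(wrapper.get_support()))
    ```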

  15. Motorcyclists safety system to avoid rear end collisions based on acoustic signatures

    NASA Astrophysics Data System (ADS)

    Muzammel, M.; Yusoff, M. Zuki; Malik, A. Saeed; Mohamad Saad, M. Naufal; Meriaudeau, F.

    2017-03-01

    In many Asian countries, motorcyclists have a higher fatality rate than occupants of other vehicles. Among many other factors, rear end collisions also contribute to these fatalities. Collision detection systems can be useful to minimize these accidents. However, designing an efficient and cost-effective collision detection system for motorcyclists is still a major challenge. In this paper, an acoustic-information-based, cost-effective and efficient collision detection system is proposed for motorcycle applications. The proposed technique uses the Short Time Fourier Transform (STFT) to extract features from the audio signal, and Principal Component Analysis (PCA) is used to reduce the feature vector length. This reduction of feature length further increases the performance of the technique. The proposed technique has been tested on a self-recorded dataset and gives an accuracy of 97.87%. We believe that this method can help to reduce a significant number of motorcycle accidents.
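
    A minimal version of the described front end (STFT magnitudes flattened into a feature vector, then PCA to shorten it) might look as follows; the sampling rate, window size, and component count are assumptions rather than the paper's settings.

    ```python
    import numpy as np
    from scipy.signal import stft
    from sklearn.decomposition import PCA

    def stft_features(audio, fs=22050, nperseg=256):
        _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
        return np.abs(Z).ravel()   # magnitude spectrogram as one long vector

    rng = np.random.default_rng(4)
    clips = [rng.standard_normal(22050) for _ in range(40)]  # 1 s stand-in clips
    X = np.array([stft_features(c) for c in clips])

    X_reduced = PCA(n_components=20).fit_transform(X)  # shorten the feature vector
    print(X.shape, "->", X_reduced.shape)
    ```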

  16. Solution of the symmetric eigenproblem AX=lambda BX by delayed division

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Bains, N. J. C.

    1986-01-01

    Delayed division is an iterative method for solving the linear eigenvalue problem AX = lambda BX for a limited number of small eigenvalues and their corresponding eigenvectors. The distinctive feature of the method is the reduction of the problem to an approximate triangular form by systematically dropping quadratic terms in the eigenvalue lambda. The report describes the pivoting strategy in the reduction and the method for preserving symmetry in submatrices at each reduction step. Along with the approximate triangular reduction, the report extends some techniques used in the method of inverse subspace iteration. Examples are included for problems of varying complexity.
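
    Delayed division itself is the report's iterative scheme, but the problem it targets, a few small eigenvalues of the symmetric pencil (A, B), is compactly stated with a standard dense solver for reference. The matrices below are random stand-ins with B symmetric positive definite.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(5)
    n = 50
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2             # symmetric A
    M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)   # SPD B

    # The smallest eigenpairs of A x = lambda B x, which delayed division
    # approximates iteratively via its triangular reduction.
    vals, vecs = eigh(A, B, subset_by_index=[0, 3])
    print("four smallest eigenvalues:", vals)
    ```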

  17. A novel approach for dimension reduction of microarray.

    PubMed

    Aziz, Rabia; Verma, C K; Srivastava, Namita

    2017-12-01

    This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes based on a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of the extraction approach (reducing the size of the data) and the wrapper approach (optimizing the reduced feature vectors). The technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results obtained from the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selector for the NB classifier, the combination of ICA with popular filter techniques and with other similar bio-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), was also compared. The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared with other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
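
    Stripped of the ABC optimisation stage (the paper's contribution, omitted here), the ICA-extraction / NB-classification backbone can be sketched as below; the gene counts and labels are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(6)
    X = rng.standard_normal((100, 500))   # 100 samples x 500 genes (stand-in)
    y = rng.integers(0, 2, size=100)      # class labels (stand-in)

    X_ica = FastICA(n_components=20, random_state=0).fit_transform(X)  # extraction

    # ABC would search subsets of these 20 components; here we keep them all.
    acc = cross_val_score(GaussianNB(), X_ica, y, cv=5).mean()
    print("NB accuracy on ICA features: %.3f" % acc)
    ```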

  18. A data reduction technique and associated computer program for obtaining vehicle attitudes with a single onboard camera

    NASA Technical Reports Server (NTRS)

    Bendura, R. J.; Renfroe, P. G.

    1974-01-01

    A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), the location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in the FORTRAN 4 language for the Control Data 6000 series digital computer.

  19. Toward On-Demand Deep Brain Stimulation Using Online Parkinson's Disease Prediction Driven by Dynamic Detection.

    PubMed

    Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas

    2017-12-01

    In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM) is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
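
    The chosen chain of DWT feature extraction, dimensionality reduction, and k-NN classification can be outlined as follows. MRM is the paper's novel method and is not publicly available, so PCA is substituted at that stage; the signals and labels are synthetic stand-ins.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def dwt_features(x, wavelet="db4", level=4):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])  # subband energies

    rng = np.random.default_rng(7)
    epochs = rng.standard_normal((200, 512))  # stand-in neural recordings
    y = rng.integers(0, 2, size=200)          # PD state labels (stand-in)

    X = np.array([dwt_features(e) for e in epochs])
    X = PCA(n_components=3).fit_transform(X)  # MRM replaced by PCA in this sketch
    print("kNN accuracy: %.3f"
          % cross_val_score(KNeighborsClassifier(5), X, y, cv=5).mean())
    ```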

  20. Recognizing human activities using appearance metric feature and kinematics feature

    NASA Astrophysics Data System (ADS)

    Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye

    2017-05-01

    The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, an appearance metric feature and a kinematics feature, is considered. A system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image to produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through the dimension reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.

  1. A 540-[Formula: see text] Duty Controlled RSSI With Current Reusing Technique for Human Body Communication.

    PubMed

    Jang, Jaeeun; Lee, Yongsu; Cho, Hyunwoo; Yoo, Hoi-Jun

    2016-08-01

    An ultra-low-power duty controlled received signal strength indicator (RSSI) is implemented for human body communication (HBC) in 180 nm CMOS technology under a 1.5 V supply. The proposed RSSI adopts the following three key features for low power consumption: 1) a current reusing technique (CR-RSSI) with a replica bias circuit and calibration unit, 2) a duty controller, and 3) a reconfigurable gm-boosting LNA. The CR-RSSI utilizes a stacked amplifier-rectifier-cell (AR-cell) to reuse the supply current of each block. As a result, the power consumption becomes 540 [Formula: see text] with +/-2 dB accuracy and 75 dB dynamic range. The replica bias circuit and calibration unit are adopted to increase the reliability of the CR-RSSI. In addition, the duty controller turns off the RSSI when it is not required, and this function leads to a 70% power reduction. Finally, the gm-boosting reconfigurable LNA can adaptively vary its noise and linearity performance with respect to input signal strength. From this feature, we achieve a 62% power reduction in the LNA. Thanks to these schemes, compared with previous works, we can save 70% of the power in the RSSI and LNA.

  2. A versatile breast reduction technique: Conical plicated central U shaped (COPCUs) mammaplasty

    PubMed Central

    Copcu, Eray

    2009-01-01

    Background There have been numerous studies on reduction mammaplasty and its modifications in the literature. The multitude of modifications of reduction mammaplasty indicates that the ideal technique has yet to be found. There are four reasons for seeking the ideal technique. One is to preserve the functional features of the breast: breastfeeding and arousal. Others are to achieve the true geometric and aesthetic shape of the breast with the least scarring, and to minimize the complications of prior surgical techniques without causing additional ones. The last is to overcome the limitations of previously described techniques. To these ends, we developed a new versatile reduction mammaplasty technique, which we call conical plicated central U shaped (COPCUs) mammaplasty. Methods We performed central plication to achieve a juvenile look in the superior pole of the breast and to prevent postoperative pseudoptosis, and used a central U shaped flap to achieve maximum NAC safety and to preserve lactation and nipple sensation. The central U flap was 6 cm in width and the superior conical plication was performed with 2/0 PDS. Preoperative and postoperative standard measures of the breast, including superior pole fullness, were compared. Results Forty-six patients were operated on with the above-mentioned technique. All of the patients were satisfied with the functional and aesthetic results and none of them had major complications. There were no changes in nipple innervation. Six patients who became pregnant after surgery did not experience any problems with lactation. None of the patients required scar revision. Conclusion Our technique is a versatile, safe, reliable technique which creates the least scar, avoids previously described disadvantages, provides maximum preservation of function, and can be employed in all breasts regardless of size. PMID:19575809

  3. Behaviour change techniques targeting both diet and physical activity in type 2 diabetes: A systematic review and meta-analysis.

    PubMed

    Cradock, Kevin A; ÓLaighin, Gearóid; Finucane, Francis M; Gainforth, Heather L; Quinlan, Leo R; Ginis, Kathleen A Martin

    2017-02-08

    Changing diet and physical activity behaviour is one of the cornerstones of type 2 diabetes treatment, but changing behaviour is challenging. The objective of this study was to identify behaviour change techniques (BCTs) and intervention features of dietary and physical activity interventions for patients with type 2 diabetes that are associated with changes in HbA1c and body weight. We performed a systematic review of papers published between 1975-2015 describing randomised controlled trials (RCTs) that focused exclusively on both diet and physical activity. The constituent BCTs, intervention features and methodological rigour of these interventions were evaluated. Changes in HbA1c and body weight were meta-analysed and examined in relation to use of BCTs. Thirteen RCTs were identified. Meta-analyses revealed reductions in HbA1c at 3, 6, 12 and 24 months of -1.11% (12 mmol/mol), -0.67% (7 mmol/mol), -0.28% (3 mmol/mol) and -0.26% (2 mmol/mol), with an overall reduction of -0.53% (6 mmol/mol [95% CI -0.74 to -0.32, P < 0.00001]) in intervention groups compared to control groups. Meta-analyses also showed a reduction in body weight of -2.7 kg, -3.64 kg, -3.77 kg and -3.18 kg at 3, 6, 12 and 24 months, with an overall reduction of -3.73 kg (95% CI -6.09 to -1.37 kg, P = 0.002). Four of 46 BCTs identified were associated with >0.3% reduction in HbA1c: 'instruction on how to perform a behaviour', 'behavioural practice/rehearsal', 'demonstration of the behaviour' and 'action planning', as were the intervention features 'supervised physical activity', 'group sessions', 'contact with an exercise physiologist', 'contact with an exercise physiologist and a dietitian', 'baseline HbA1c >8%' and interventions of greater frequency and intensity. Diet and physical activity interventions achieved clinically significant reductions in HbA1c at three and six months, but not at 12 and 24 months. Specific BCTs and intervention features identified may inform more effective structured lifestyle intervention treatment strategies for type 2 diabetes.

  4. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
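
    The core of DEIM, greedy selection of interpolation indices from a POD basis, is compact enough to reproduce. The sketch below follows the standard Chaturantabut-Sorensen algorithm on an assumed random snapshot matrix; the nested multi-basis extension described above is not shown.

    ```python
    import numpy as np

    def deim_indices(U):
        """Greedy DEIM point selection for a POD basis U (n x m)."""
        n, m = U.shape
        idx = [int(np.argmax(np.abs(U[:, 0])))]
        for j in range(1, m):
            # Interpolate column j at the current points, then take the residual
            c = np.linalg.solve(U[idx, :j], U[idx, j])
            r = U[:, j] - U[:, :j] @ c
            idx.append(int(np.argmax(np.abs(r))))
        return np.array(idx)

    # Hypothetical snapshots of a nonlinear term (n states x k snapshots)
    rng = np.random.default_rng(8)
    snapshots = rng.standard_normal((400, 60))
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :10]                 # POD basis from the leading modes
    print("DEIM interpolation indices:", deim_indices(basis))
    ```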

  5. Breast cancer detection in rotational thermography images using texture features

    NASA Astrophysics Data System (ADS)

    Francis, Sheeja V.; Sasikala, M.; Bhavani Bharathi, G.; Jaipurkar, Sandeep D.

    2014-11-01

    Breast cancer is a major cause of mortality in young women in developing countries. Early diagnosis is the key to improving survival rates in cancer patients. Breast thermography is a diagnostic procedure that non-invasively images the infrared emissions from the breast surface to aid in the early detection of breast cancer. Due to limitations in the imaging protocol, abnormality detection by conventional breast thermography is often a challenging task. Rotational thermography is a novel technique developed to overcome the limitations of conventional breast thermography. This paper evaluates this technique's potential for automatic detection of breast abnormality in the context of a cold challenge. Texture features are extracted in the spatial domain from the rotational thermogram series, prior to and after the application of the cold challenge. These features are fed to a support vector machine for automatic classification of normal and malignant breasts, resulting in a classification accuracy of 83.3%. Feature reduction has been performed by principal component analysis. As a novel attempt, the ability of this technique to locate the abnormality has been studied. The results of the study indicate that rotational thermography holds great potential as a screening tool for breast cancer detection.

  6. A Quantum Hybrid PSO Combined with Fuzzy k-NN Approach to Feature Selection and Cell Classification in Cervical Cancer Detection.

    PubMed

    Iliyasu, Abdullah M; Fatichah, Chastine

    2017-12-19

    A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (Fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smear (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles), a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria into classification accuracy based on the choice of best features and classification accuracy across the different categories of cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared their classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, as manifested in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.

  7. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing the results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. Contact: geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch

    PubMed Central

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed

    2012-01-01

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Color features were then extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training the ANN with full features and training the ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with using full features, the ANN trained with reduced features improves the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category. PMID:23202043
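
    The two training regimes compared here (the full color-feature set versus PCA-reduced features fed to an ANN) can be mocked up as follows; the feature count, class labels, and data are placeholders, not the DxP Yangambi measurements.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(9)
    X = rng.random((300, 12))          # 12 hypothetical color features
    y = rng.integers(0, 3, size=300)   # e.g., under-ripe / ripe / over-ripe

    ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    full = cross_val_score(ann, X, y, cv=5).mean()
    reduced = cross_val_score(
        make_pipeline(PCA(n_components=4), ann), X, y, cv=5).mean()
    print("full features: %.3f | PCA-reduced: %.3f" % (full, reduced))
    ```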

  9. Intelligent color vision system for ripeness classification of oil palm fresh fruit bunch.

    PubMed

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Abdul Halim, Zaini; Ibrahim, Haidi; Syed Ali, Syed Salim

    2012-10-22

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Color features were then extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training the ANN with full features and training the ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with using full features, the ANN trained with reduced features improves the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category.

  10. NUMERICAL ANALYSIS TECHNIQUE USING THE STATISTICAL ENERGY ANALYSIS METHOD CONCERNING THE BLASTING NOISE REDUCTION BY THE SOUND INSULATION DOOR USED IN TUNNEL CONSTRUCTIONS

    NASA Astrophysics Data System (ADS)

    Ishida, Shigeki; Mori, Atsuo; Shinji, Masato

    The main method of reducing the blasting noise which occurs in a tunnel under construction is to install a sound insulation door in the tunnel. However, a numerical analysis technique to accurately predict the transmission loss of the sound insulation door has not been established. In this study, we measured the blasting noise and the vibration of the sound insulation door in the tunnel during blasting, analyzed the measurements, and modified the modeled acoustic features accordingly. In addition, we reproduced the noise reduction effect of the sound insulation door by the statistical energy analysis method and confirmed that numerical simulation is possible by this procedure.

  11. Classification enhancement for post-stroke dementia using fuzzy neighborhood preserving analysis with QR-decomposition.

    PubMed

    Al-Qazzaz, Noor Kamal; Ali, Sawal; Ahmad, Siti Anom; Escudero, Javier

    2017-07-01

    The aim of the present study was to discriminate the electroencephalogram (EEG) of 5 patients with vascular dementia (VaD), 15 patients with stroke-related mild cognitive impairment (MCI), and 15 normal control subjects during a working memory (WM) task. We used independent component analysis (ICA) and the wavelet transform (WT) as a hybrid preprocessing approach for EEG artifact removal. Three different features were extracted from the cleaned EEG signals: spectral entropy (SpecEn), permutation entropy (PerEn) and Tsallis entropy (TsEn). Two classification schemes were applied - support vector machine (SVM) and k-nearest neighbors (kNN) - with fuzzy neighborhood preserving analysis with QR-decomposition (FNPAQR) as a dimensionality reduction technique. The FNPAQR dimensionality reduction technique increased the classification accuracy from 82.22% to 90.37% for SVM and from 82.6% to 86.67% for kNN. These results suggest that FNPAQR consistently improves the discrimination of VaD patients, MCI patients, and normal control subjects, and that it could be a useful feature selection step to help identify patients with VaD and MCI.
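
    Of the three entropies, permutation entropy is the easiest to state compactly: embed the signal, count ordinal patterns, and take the normalised Shannon entropy. A minimal implementation under assumed parameters (order 3, lag 1) is given below; white noise scores near 1 and a regular waveform scores much lower.

    ```python
    import numpy as np
    from math import factorial

    def permutation_entropy(x, order=3, lag=1):
        """Normalised permutation entropy of a 1-D signal."""
        n = len(x) - (order - 1) * lag
        patterns = np.array([
            tuple(np.argsort(x[i:i + order * lag:lag])) for i in range(n)
        ])
        _, counts = np.unique(patterns, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p)) / np.log2(factorial(order))

    rng = np.random.default_rng(10)
    print("white noise:", permutation_entropy(rng.standard_normal(2000)))
    print("sine wave  :", permutation_entropy(np.sin(np.linspace(0, 60, 2000))))
    ```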

  12. A statistical-textural-features based approach for classification of solid drugs using surface microscopic images.

    PubMed

    Tahir, Fahima; Fahiem, Muhammad Abuzar

    2014-01-01

    The quality of pharmaceutical products plays an important role in the pharmaceutical industry as well as in our lives. Usage of defective tablets can be harmful for patients. In this research we propose a nondestructive method to identify defective and nondefective tablets using their surface morphology. Three environmental factors (temperature, humidity, and moisture) are analyzed to evaluate the performance of the proposed method. Multiple textural features are extracted from the surfaces of the defective and nondefective tablets: gray-level co-occurrence matrix, run-length matrix, histogram, autoregressive model, and Haar wavelet features. In total, 281 textural features are extracted from the images. We performed an analysis on all 281 features, the top 15, and the top 2. The top 15 features are selected using three different feature reduction techniques: chi-square, gain ratio, and relief-F. We used three different classifiers - support vector machine, K-nearest neighbors, and naïve Bayes - to calculate the accuracies of the proposed method in two experimental setups, namely the leave-one-out cross-validation technique and train/test models. We tested each classifier against all selected features and then compared their results. The experiments showed that in most cases SVM performed better than the other two classifiers.
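
    One of the three reduction filters used, chi-square ranking, is available off the shelf; a sketch of selecting a top-15 subset from a 281-feature matrix follows (the data are synthetic and non-negative, as the chi-square test requires).

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, chi2

    rng = np.random.default_rng(11)
    X = rng.random((150, 281))         # 281 textural features (stand-in values)
    y = rng.integers(0, 2, size=150)   # defective vs nondefective labels

    selector = SelectKBest(chi2, k=15).fit(X, y)
    print("top-15 feature indices:", selector.get_support(indices=True))
    ```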

  13. Digital Signal Processing For Low Bit Rate TV Image Codecs

    NASA Astrophysics Data System (ADS)

    Rao, K. R.

    1987-06-01

    In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real-time full-motion color video are under various stages of development. Some companies have already brought codecs to the market. They are being used by industry and some federal agencies for video teleconferencing. In general, these codecs have various features such as multiplexing of audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay, etc. Transmitting the original color video on a 56 KBPS network requires a bit rate reduction on the order of 1400:1. Such large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation, and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or predictive coding, coupled with motion estimation and strengthened by adaptive features, are some of the tools in the arsenal of data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shaobu; Lu, Shuai; Zhou, Ning

    In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than the traditional coherency aggregation methods.

  15. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and an Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and one multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. Besides, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
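
    Sigma scaling has a standard closed form that fits in a few lines: raw fitness is shifted by the population mean and standard deviation so that selection pressure stays roughly constant across generations. This generic sketch uses the common sigma-truncation variant; the constant c and IAGA's masking and adaptive operators are not taken from the paper.

    ```python
    import numpy as np

    def sigma_scale(fitness, c=2.0):
        """Sigma-truncation scaling of raw fitness values."""
        mean, std = fitness.mean(), fitness.std()
        if std == 0:
            return np.ones_like(fitness)   # flat population: select uniformly
        return np.clip(fitness - (mean - c * std), 0.0, None)

    raw = np.array([0.90, 0.91, 0.92, 0.99])
    print("scaled selection weights:", sigma_scale(raw))
    ```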

  16. A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.

    PubMed

    Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian

    2017-01-01

    Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.

  17. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications, and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV), or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in the memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.

  18. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
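
    The payoff of directional importance sampling is easiest to see on a toy integral: drawing samples from a density shaped like the integrand slashes the estimator variance at equal sample count. The generic demonstration below stands in for the paper's phase-function-specific machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    N = 100_000
    f = lambda x: np.exp(-5 * x)   # "forward-peaked" integrand on [0, 1]

    # Naive Monte Carlo with uniform samples
    naive = f(rng.random(N))

    # Importance sampling from p(x) proportional to exp(-5x), via inverse CDF.
    # Because p is exactly proportional to f here, the weights are constant and
    # the variance collapses; real phase functions only approximate this.
    u = rng.random(N)
    x = -np.log(1 - u * (1 - np.exp(-5.0))) / 5.0
    p = 5 * np.exp(-5 * x) / (1 - np.exp(-5.0))
    weighted = f(x) / p

    print("exact     :", (1 - np.exp(-5.0)) / 5.0)
    print("naive     : %.5f (sem %.2e)" % (naive.mean(), naive.std() / np.sqrt(N)))
    print("importance: %.5f (sem %.2e)" % (weighted.mean(), weighted.std() / np.sqrt(N)))
    ```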

  19. Integrand Reduction Reloaded: Algebraic Geometry and Finite Fields

    NASA Astrophysics Data System (ADS)

    Sameshima, Ray D.; Ferroglia, Andrea; Ossola, Giovanni

    2017-01-01

    The evaluation of scattering amplitudes in quantum field theory allows us to compare the phenomenological prediction of particle theory with the measurement at collider experiments. The study of scattering amplitudes, in terms of their symmetries and analytic properties, provides a theoretical framework to develop techniques and efficient algorithms for the evaluation of physical cross sections and differential distributions. Tree-level calculations have been known for a long time. Loop amplitudes, which are needed to reduce the theoretical uncertainty, are more challenging since they involve a large number of Feynman diagrams, expressed as integrals of rational functions. At one-loop, the problem has been solved thanks to the combined effect of integrand reduction, such as the OPP method, and unitarity. However, plenty of work is still needed at higher orders, starting with the two-loop case. Recently, integrand reduction has been revisited using algebraic geometry. In this presentation, we review the salient features of integrand reduction for dimensionally regulated Feynman integrals, and describe an interesting technique for their reduction based on multivariate polynomial division. We also show a novel approach to improve its efficiency by introducing finite fields. Supported in part by the National Science Foundation under Grant PHY-1417354.
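
    The two ingredients named above, polynomial division and finite-field arithmetic, combine naturally: reducing coefficients modulo a prime at every step is precisely what tames intermediate expression swell. A minimal univariate sketch over GF(p) follows (the prime and polynomials are arbitrary examples, and real integrand reduction is multivariate):

    ```python
    def polydiv_mod(num, den, p):
        """Divide polynomials over GF(p); coefficients listed highest degree first."""
        num = [c % p for c in num]
        inv_lead = pow(den[0], -1, p)   # modular inverse of the leading coefficient
        quot = []
        while len(num) >= len(den):
            coef = (num[0] * inv_lead) % p
            quot.append(coef)
            num = [(a - coef * b) % p
                   for a, b in zip(num, den + [0] * (len(num) - len(den)))][1:]
        return quot, num                # quotient, remainder

    # (x^3 + 2x + 5) / (x + 3) over GF(7)
    q, r = polydiv_mod([1, 0, 2, 5], [1, 3], p=7)
    print("quotient:", q, "remainder:", r)   # [1, 4, 4], [0]
    ```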

  20. Thermally tailored gradient topography surface on elastomeric thin films.

    PubMed

    Roy, Sudeshna; Bhandaru, Nandini; Das, Ritopa; Harikrishnan, G; Mukherjee, Rabibrata

    2014-05-14

    We report a simple method for creating a nanopatterned surface with continuous variation in feature height on an elastomeric thin film. The technique is based on imprinting the surface of a film of thermo-curable elastomer (Sylgard 184), which has continuous variation in cross-linking density introduced by means of differential heating. This results in variation of viscoelasticity across the length of the surface and the film exhibits differential partial relaxation after imprinting with a flexible stamp and subjecting it to an externally applied stress for a transient duration. An intrinsic perfect negative replica of the stamp pattern is initially created over the entire film surface as long as the external force remains active. After the external force is withdrawn, there is partial relaxation of the applied stresses, which is manifested as reduction in amplitude of the imprinted features. Due to the spatial viscoelasticity gradient, the extent of stress relaxation induced feature height reduction varies across the length of the film (L), resulting in a surface with a gradient topography with progressively varying feature heights (hF). The steepness of the gradient can be controlled by varying the temperature gradient as well as the duration of precuring of the film prior to imprinting. The method has also been utilized for fabricating wettability gradient surfaces using a high aspect ratio biomimetic stamp. The use of a flexible stamp allows the technique to be extended for creating a gradient topography on nonplanar surfaces as well. We also show that the gradient surfaces with regular structures can be used in combinatorial studies related to pattern directed dewetting.

  1. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need to provide a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic-inspection-based classification system would make possible the checking of each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented on the region of interest. An artificial neural network classifier is employed to evaluate the performance of these features. Genetic algorithm based feature selection is then performed. Principal component analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that the features identified by principal component analysis lead to better classification performance (96%) than those selected by the genetic algorithm (94%). Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing neural cell images to analyze their growth process in a culture environment. We have applied several image processing techniques for: 1- environmental noise reduction, 2- neural cell segmentation, 3- neural cell classification based on their dendrites' growth conditions, and 4- neuron feature extraction and measurement (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.

  3. Development of advanced avionics systems applicable to terminal-configured vehicles

    NASA Technical Reports Server (NTRS)

    Heimbold, R. L.; Lee, H. P.; Leffler, M. F.

    1980-01-01

    A technique to add a time constraint to the automatic descent feature of the existing L-1011 aircraft Flight Management System (FMS) was developed. Software modifications were incorporated in the FMS computer program and the results were checked by lab simulation and on a series of eleven test flights. An arrival time dispersion (2 sigma) of 19 seconds was achieved. The 4-D descent technique can be integrated with the time-based metering method of air traffic control. Substantial reductions in delays at today's busy airports should result.

  4. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu

    Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using the CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors' ensemble FP-reduction method eliminated 93% of the FPs, and the proposed CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that the method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.

  5. Comparison of imaging characteristics of multiple-beam equalization and storage phosphor direct digitizer radiographic systems

    NASA Astrophysics Data System (ADS)

    Sankaran, A.; Chuang, Keh-Shih; Yonekawa, Hisashi; Huang, H. K.

    1992-06-01

    The imaging characteristics of two chest radiographic systems, the Advanced Multiple Beam Equalization Radiography (AMBER) and Konica Direct Digitizer [using a storage phosphor (SP) plate] systems, have been compared. The variables affecting image quality and the computer display/reading systems used are detailed. Utilizing specially designed wedge, geometric, and anthropomorphic phantoms, studies were conducted on: the exposure and energy response of the detectors; nodule detectability; different exposure techniques; and various look-up tables (LUTs), gray scale displays, and laser printers. Methods for scatter estimation and reduction were investigated. It is concluded that AMBER, with its screen-film and equalization techniques, provides better nodule detectability than SP plates. However, SP plates have other advantages such as flexibility in the selection of exposure techniques, image processing features, and excellent sensitivity when combined with optimum reader operating modes. The equalization feature of AMBER provides better nodule detectability under the denser regions of the chest. Results of diagnostic accuracy are demonstrated with nodule detectability plots and analysis of images obtained with the phantoms.

  6. A regularized approach for geodesic-based semisupervised multimanifold learning.

    PubMed

    Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun

    2014-05-01

    Geodesic distance, as an essential measurement for data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from linearly inseparable space to linear separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built by geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST, USPS handwritten digit data sets, MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
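
    The geodesic-as-kernel view starts from graph shortest paths on a neighborhood graph. A minimal version of that first step is shown below (the paper's semisupervised graph construction and regularized mapping are omitted); the noisy circle is a stand-in manifold.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import shortest_path
    from sklearn.neighbors import kneighbors_graph

    rng = np.random.default_rng(13)
    theta = rng.random(300) * 2 * np.pi   # stand-in 1-D manifold in the plane
    X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((300, 2))

    graph = kneighbors_graph(X, n_neighbors=8, mode="distance")
    G = shortest_path(graph, method="D", directed=False)   # geodesic distances
    print("geodesic distance matrix:", G.shape)
    # Each row of G is a feature vector in "distance space", as described above.
    ```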

  7. Imaging mass spectrometry data reduction: automated feature identification and extraction.

    PubMed

    McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M

    2010-12-01

    Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
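
    The feature-identification step (finding shared peaks so each spectrum collapses from tens of thousands of channels to a short peak list) can be sketched with a mean-spectrum peak picker. The thresholds and the synthetic data cube are assumptions; production imaging MS pipelines align peaks across pixels far more carefully.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(14)
    n_pixels, n_channels = 500, 20000
    data = rng.random((n_pixels, n_channels))   # stand-in imaging MS cube
    for mz in (3000, 7500, 12000):              # a few shared synthetic peaks
        data[:, mz - 5:mz + 5] += 5.0

    mean_spec = data.mean(axis=0)
    peaks, _ = find_peaks(mean_spec, height=mean_spec.mean() + 3 * mean_spec.std())
    reduced = data[:, peaks]                    # keep only the feature channels
    print("channels: %d -> %d (about %.0fx reduction)"
          % (n_channels, reduced.shape[1], n_channels / reduced.shape[1]))
    ```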

  8. Simultaneous fabrication of very high aspect ratio positive nano- to milliscale structures.

    PubMed

    Chen, Long Qing; Chan-Park, Mary B; Zhang, Qing; Chen, Peng; Li, Chang Ming; Li, Sai

    2009-05-01

    A simple and inexpensive technique for the simultaneous fabrication of positive (i.e., protruding), very high aspect ratio (>10) nanostructures together with micro- or millistructures is developed. The method involves using residual patterns of thin-film over-etching (RPTO) to produce sub-micro-/nanoscale features. The residual thin-film nanopattern is used as an etching mask for Si deep reactive ion etching. The etched Si structures are further reduced in size by Si thermal oxidation to produce amorphous SiO(2), which is subsequently etched away by HF. Two arrays of positive Si nanowalls are demonstrated with this combined RPTO-SiO(2)-HF technique. One array has a feature size of 150 nm and an aspect ratio of 26.7, and the other has a feature size of 50 nm and an aspect ratio of 15. No other parallel reduction technique can achieve such a high aspect ratio for 50-nm-wide nanowalls. As a demonstration of the technique's ability to simultaneously produce nano- and milliscale features, a simple Si nanofluidic master mold with positive features whose dimensions vary continuously from 1 mm to 200 nm, with a highest aspect ratio of 6.75, is fabricated; the narrow 200-nm section is 4.5 mm long. This Si master mold is then used as a mold for UV embossing. The embossed open channels are then closed by a cover with glue bonding. A high aspect ratio is necessary to produce unblocked closed channels after the cover bonding process of the nanofluidic chip. The combined method of RPTO, Si thermal oxidation, and HF etching can be used to make complex nanofluidic systems and nano-/micro-/millistructures for diverse applications.

  9. Understanding and utilization of Thematic Mapper and other remotely sensed data for vegetation monitoring

    NASA Technical Reports Server (NTRS)

    Crist, E. P.; Cicone, R. C.; Metzler, M. D.; Parris, T. M.; Rice, D. P.; Sampson, R. E.

    1983-01-01

    The TM Tasseled Cap transformation, which provides both a 50% reduction in data volume with little or no loss of important information and spectral features with direct physical association, is presented and discussed. Using both simulated and actual TM data, some important characteristics of vegetation and soils in this feature space are described, as are the effects of solar elevation angle and atmospheric haze. A preliminary spectral haze diagnostic feature, based on only simulated data, is also examined. The characteristics of the TM thermal band are discussed, as is a demonstration of the use of TM data in energy balance studies. Some characteristics of AVHRR data are described, as are the sensitivities to scene content of several LANDSAT-MSS preprocessing techniques.

  10. Diet Behavior Change Techniques in Type 2 Diabetes: A Systematic Review and Meta-analysis.

    PubMed

    Cradock, Kevin A; ÓLaighin, Gearóid; Finucane, Francis M; McKay, Rhyann; Quinlan, Leo R; Martin Ginis, Kathleen A; Gainforth, Heather L

    2017-12-01

    Dietary behavior is closely connected to type 2 diabetes. The purpose of this meta-analysis was to identify behavior change techniques (BCTs) and specific components of dietary interventions for patients with type 2 diabetes associated with changes in HbA1c and body weight. The Cochrane Library, CINAHL, Embase, PubMed, PsycINFO, and Scopus databases were searched. Reports of randomized controlled trials published during 1975-2017 that focused on changing dietary behavior were selected, and methodological rigor, use of BCTs, and fidelity and intervention features were evaluated. In total, 54 studies were included, with 42 different BCTs applied and an average of 7 BCTs used per study. Four BCTs-"problem solving," "feedback on behavior," "adding objects to the environment," and "social comparison"-and the intervention feature "use of theory" were associated with >0.3% (3.3 mmol/mol) reduction in HbA1c. Meta-analysis revealed that studies that aimed to control or change the environment showed a greater reduction in HbA1c of 0.5% (5.5 mmol/mol) (95% CI -0.65, -0.34), compared with 0.32% (3.5 mmol/mol) (95% CI -0.40, -0.23) for studies that aimed to change behavior. Limitations of our study were the heterogeneity of dietary interventions and poor quality of reporting of BCTs. This study provides evidence that changing the dietary environment may have more of an effect on HbA1c in adults with type 2 diabetes than changing dietary behavior. Diet interventions achieved clinically significant reductions in HbA1c, although initial reductions in body weight diminished over time. If appropriate BCTs and theory are applied, dietary interventions may result in better glucose control. © 2017 by the American Diabetes Association.

  11. Discrimination of poorly exposed lithologies in AVIRIS data

    NASA Technical Reports Server (NTRS)

    Farrand, William H.; Harsanyi, Joseph C.

    1993-01-01

    One of the advantages afforded by imaging spectrometers such as AVIRIS is the capability to detect target materials at a sub-pixel scale. This paper presents several examples of the identification of poorly exposed geologic materials - materials which are either subpixel in scale or which, while having some surface expression over several pixels, are partially covered by vegetation or other materials. Sabol et al. (1992) noted that a primary factor in the ability to distinguish sub-pixel targets is the spectral contrast between the target and its surroundings. In most cases, this contrast is best expressed as an absorption feature or features present in the target but absent in the surroundings. Under such circumstances, techniques such as band depth mapping (Clark et al., 1992) are feasible. However, the only difference between a target material and its surroundings is often expressed solely in the continuum. We define the 'continuum' as the reflectance or radiance spanning spectral space between spectral features. Differences in continuum slope and shape can only be determined by reduction techniques which consider the entire spectral range; i.e., techniques such as spectral mixture analysis (Adams et al., 1989) and recently developed techniques which utilize an orthogonal subspace projection operator (Harsanyi, 1993). Two of the three examples considered herein deal with cases where the target material differs from its surroundings only by such a subtle continuum change.
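
    In its simplest linear form, full-spectrum mixture analysis reduces to a non-negative least-squares fit of each pixel spectrum against endmember spectra; the sub-pixel target abundance is the fitted fraction. A toy sketch with synthetic endmembers:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(15)
    bands = 224                                   # AVIRIS-like channel count
    E = np.abs(rng.standard_normal((bands, 3)))   # 3 endmember spectra (stand-ins)
    true_frac = np.array([0.15, 0.60, 0.25])      # sub-pixel abundances
    pixel = E @ true_frac + 0.01 * rng.standard_normal(bands)

    frac, _ = nnls(E, pixel)                      # linear unmixing of one pixel
    print("estimated abundances:", np.round(frac / frac.sum(), 3))
    ```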

  12. Single-Step Laser-Assisted Graphene Oxide Reduction and Nonlinear Optical Properties Exploration via CW Laser Excitation

    NASA Astrophysics Data System (ADS)

    Ghasemi, Fatemeh; Razi, Sepehr; Madanipour, Khosro

    2018-02-01

    The synthesis of reduced graphene oxide using pulsed laser irradiation is experimentally investigated. For this purpose, various irradiation conditions were selected and the chemical features of the different products were explored using ultraviolet-visible, Fourier transform infrared and Raman spectroscopy techniques. Moreover, the nonlinear optical properties of the synthesized products were assessed using open- and closed-aperture Z-scan techniques, in which a continuous-wave laser operating at a 532-nm wavelength served as the excitation source. The results clearly revealed that the degree of graphene oxide reduction depends not only on the irradiation dose (energy of the laser beam × exposure time) but also on the light source wavelength. Furthermore, a strong dependence of the nonlinear optical properties of the products on the degree of de-oxygenation was observed. The experimental results are discussed in detail.

  13. Reduction of Vanadium Oxide (VOx) under High Vacuum Conditions as Investigated by X-Ray Photoelectron Spectroscopy

    NASA Astrophysics Data System (ADS)

    Chourasia, A.

    2015-03-01

    Vanadium oxide thin films were formed by depositing thin films of vanadium on quartz substrates and oxidizing them in an atmosphere of oxygen. The deposition was done by the e-beam technique. The oxide films were annealed at different temperatures for different times under high vacuum conditions. The technique of x-ray photoelectron spectroscopy has been employed to study the changes in the oxidation states of vanadium and oxygen in such films. The spectral features in the vanadium 2p, oxygen 1s, and the x-ray excited Auger regions were investigated. The Auger parameter has been utilized to study the changes. The complete oxidation of elemental vanadium to V2O5 was observed to occur at 700°C. At any other temperature, a mixture of oxides consisting of V2O5 and VO2 was observed in the films. Annealing of the films resulted in the gradual loss of oxygen followed by reduction in the oxidation state from +5 to 0. The reduction was observed to depend upon the annealing temperature and the annealing time. Organized Research, TAMU-Commerce.

  14. Venus gravity: Summary and coming events

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1992-01-01

    The first significant dataset to provide local measures of venusian gravity field variations was that acquired from the Pioneer Venus Orbiter (PVO) during the 1979-1981 period. These observations were S-band Doppler radio signals from the orbiting spacecraft received at Earth-based tracking stations. Early reductions of these data were performed using two quite different techniques. Estimates of the classical spherical harmonics were made to various degrees and orders up to 10. At that time, solutions of much higher degree and order were very difficult due to computer limitations. These reductions, because of low degree and order, revealed only the most prominent features with poor spatial resolution and very reduced peak amplitudes.

  15. A Decision Support Methodology for Space Technology Advocacy.

    DTIC Science & Technology

    1984-12-01

    determine their parameters. Program control is usually exercised by level of effort funding. 63xx is the designator for advanced development programs... designing systems or models that successfully aid the decision-maker. One remedy for this deficiency in the techniques is to increase the...methodology for use by the Air Force Space Technology Advocate is designed to provide the following features [11:146-147]: meaningful reduction of available

  16. Realization of State-Space Models for Wave Propagation Simulations

    DTIC Science & Technology

    2012-01-01

    reduction techniques can be applied to reduce the dimension of the model further if warranted. INFRASONIC PROPAGATION MODEL Infrasound is sound below 20...capable of scattering and blocking the propagation. This is because the infrasound wavelengths are near the scales of topographic features. These...and Development Center (ERDC) Big Black Test Site (BBTS) and an infrasound-sensing array at the ERDC Waterways Experiment Station (WES). Both are

  17. Fronto-orbital feminization technique. A surgical strategy using fronto-orbital burring with or without eggshell technique to optimize the risk/benefit ratio.

    PubMed

    Villepelet, A; Jafari, A; Baujat, B

    2018-05-04

    The demand for facial feminization is increasing in transsexual patients. Masculine foreheads present extensive supraorbital bossing with a more acute glabellar angle, whereas female foreheads show softer features. The aim of this article is to describe our surgical technique for fronto-orbital feminization. The mask-lift technique is an upper face-lift. It provides rejuvenation by correcting collapsed features, and fronto-orbital feminization through burring of orbital rims and lateral canthopexies. Depending on the size of the frontal sinus and the thickness of its anterior wall, frontal remodeling is achieved using simple burring or by means of the eggshell technique. Orbital remodeling comprises a superolateral orbital opening, a reduction of ridges and a trough at the lateral orbital rim to support the lateral canthopexy. Frontal, corrugator and procerus myectomies, plus minimal scalp excision, complete the surgery. Our technique results in significant, natural-looking feminization. No complications were observed in our series of patients. The eggshell technique is an alternative to a bone flap in cases of an over-pneumatized sinus. Fronto-orbital feminization fits into a wider surgical strategy. It can be combined with rhinoplasty, genioplasty, mandibular angle remodeling, face lift and laryngoplasty. Achieving facial feminization in 2 or 3 stages improves psychological and physiological tolerance. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  18. Automated system for characterization and classification of malaria-infected stages using light microscopic images of thin blood smears.

    PubMed

    Das, D K; Maiti, A K; Chakraborty, C

    2015-03-01

    In this paper, we propose a comprehensive image characterization cum classification framework for malaria-infected stage detection using microscopic images of thin blood smears. The methodology mainly includes microscopic imaging of Leishman-stained blood slides, noise reduction and illumination correction, erythrocyte segmentation, and feature selection followed by machine classification. Amongst three image segmentation algorithms (namely, rule-based, Chan-Vese-based and marker-controlled watershed methods), the marker-controlled watershed technique provides better boundary detection of erythrocytes, especially in overlapping situations. Microscopic features at intensity, texture and morphology levels are extracted to discriminate infected and noninfected erythrocytes. In order to achieve a subgroup of potential features, feature selection techniques, namely, F-statistic and information gain criteria, are considered here for ranking. Finally, five different classifiers, namely, Naive Bayes, multilayer perceptron neural network, logistic regression, classification and regression tree (CART), and RBF neural network, have been trained and tested on 888 erythrocytes (infected and noninfected) for each feature subset. Performance evaluation of the proposed methodology shows that the multilayer perceptron network provides higher accuracy for recognition of malaria-infected erythrocytes and infected stage classification. Results show that the top 90 features ranked by F-statistic (specificity: 98.64%, sensitivity: 100%, PPV: 99.73% and overall accuracy: 96.84%) and the top 60 features ranked by information gain (specificity: 97.29%, sensitivity: 100%, PPV: 99.46% and overall accuracy: 96.73%) provide better results for malaria-infected stage classification. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
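
    The ranking-plus-classification step described above can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: scikit-learn's SelectKBest with f_classif stands in for the F-statistic ranking, and the feature matrix, labels, and MLP settings are synthetic placeholders.

    ```python
    # Minimal sketch: F-statistic feature ranking followed by an MLP classifier.
    # All data here are synthetic stand-ins for the extracted erythrocyte features.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(888, 200))      # placeholder intensity/texture/morphology features
    y = rng.integers(0, 2, size=888)     # infected vs. noninfected (synthetic labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    selector = SelectKBest(f_classif, k=90).fit(X_tr, y_tr)   # keep the top-90 F-ranked features
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
    clf.fit(selector.transform(X_tr), y_tr)
    print("accuracy:", clf.score(selector.transform(X_te), y_te))
    ```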

  19. Turbulence scalings in pipe flows exhibiting polymer-induced drag reduction

    NASA Astrophysics Data System (ADS)

    Zadrazil, Ivan; Markides, Christos

    2014-11-01

    A non-intrusive laser-based diagnostic technique, namely particle image velocimetry (PIV), was used to characterise in detail polymer-induced drag reduction in a turbulent pipe flow. The effect of polymer additives was investigated in a pneumatically-driven flow facility featuring a horizontal pipe test section of inner diameter 25.3 mm and length 8 m. Three high molecular weight polymers (2, 4 and 8 MDa) at concentrations of 5 - 250 wppm were used at Reynolds numbers from 35000 to 210000. The PIV-derived results show that the level of drag reduction scales with different normalised turbulence parameters, e.g. streamwise and spanwise velocity fluctuations, vorticity or Reynolds stresses. These scalings depend on the distance from the wall but are independent of the Reynolds number over the range investigated.

  20. Weighted Distance Functions Improve Analysis of High-Dimensional Data: Application to Molecular Dynamics Simulations.

    PubMed

    Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas

    2015-11-10

    Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.
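
    The central idea, that weighting features is equivalent to measuring distance with a weighted metric, can be illustrated briefly. In the hedged sketch below, the weights are arbitrary stand-ins for the paper's rate-derived weights: scaling each dimension by the square root of its weight lets any standard clustering routine operate under the weighted Euclidean distance.

    ```python
    # Sketch of feature weighting for clustering: rescaling columns by sqrt(w)
    # makes plain Euclidean distance act as a weighted distance. Data and
    # weights are illustrative assumptions, not the paper's values.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(6)
    X = rng.normal(size=(100, 5))              # samples x features (e.g., MD observables)
    w = np.array([4.0, 2.0, 1.0, 0.1, 0.1])    # assumed weights: slow dimensions dominate

    Xw = X * np.sqrt(w)                        # ||x - y||_w equals ||Xw_x - Xw_y||_2
    Z = linkage(Xw, method="average")          # hierarchical clustering respects the weights
    print(fcluster(Z, t=4, criterion="maxclust"))
    ```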

  1. An approach to predict Sudden Cardiac Death (SCD) using time domain and bispectrum features from HRV signal.

    PubMed

    Houshyarifar, Vahid; Chehel Amirani, Mehdi

    2016-08-12

    In this paper we present a method to predict Sudden Cardiac Arrest (SCA) with higher order spectral (HOS) and linear (time-domain) features extracted from the heart rate variability (HRV) signal. Predicting the occurrence of SCA is important in order to avoid the probability of Sudden Cardiac Death (SCD). This work attempts to predict SCA five minutes before its onset. The method consists of four steps: pre-processing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the second step, bispectrum features of the HRV signal and time-domain features are obtained: six features are extracted from the bispectrum and two from the time domain. In the next step, these features are reduced to one feature by the linear discriminant analysis (LDA) technique. Finally, KNN and support vector machine-based classifiers are used to classify the HRV signals. We used two databases: the MIT/BIH Sudden Cardiac Death (SCD) Database and the PhysioBank Normal Sinus Rhythm (NSR) Database. In this work we achieved prediction of SCD occurrence six minutes before the SCA with an accuracy of over 91%.
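
    The feature-reduction and classification stages lend themselves to a short sketch. The following is illustrative only, with synthetic data in place of the bispectrum and time-domain HRV features: LDA projects the eight features onto a single discriminant axis (the maximum for a two-class problem), and KNN and SVM classifiers are then scored on the projected values.

    ```python
    # Sketch of LDA reduction to one feature followed by KNN/SVM classification.
    # X8 and y are synthetic placeholders for the eight HRV-derived features.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X8 = rng.normal(size=(200, 8))       # 6 bispectrum + 2 time-domain features (synthetic)
    y = rng.integers(0, 2, size=200)     # SCD-prone vs. normal sinus rhythm (synthetic)

    lda = LinearDiscriminantAnalysis(n_components=1)
    X1 = lda.fit_transform(X8, y)        # reduce to a single discriminant feature
    for clf in (KNeighborsClassifier(5), SVC(kernel="rbf")):
        print(type(clf).__name__, cross_val_score(clf, X1, y, cv=5).mean())
    ```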

  2. Structures of the recurrence plot of heart rate variability signal as a tool for predicting the onset of paroxysmal atrial fibrillation.

    PubMed

    Mohebbi, Maryam; Ghassemian, Hassan; Asl, Babak Mohammadzadeh

    2011-05-01

    This paper aims to propose an effective paroxysmal atrial fibrillation (PAF) predictor which is based on the analysis of the heart rate variability (HRV) signal. Predicting the onset of PAF, based on non-invasive techniques, is clinically important and can be invaluable in order to avoid useless therapeutic interventions and to minimize the risks for the patients. This method consists of four steps: preprocessing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the next step, the recurrence plot (RP) of the HRV signal is obtained and six features are extracted to characterize the basic patterns of the RP. These features consist of the length of the longest diagonal segments, the average length of the diagonal lines, entropy, trapping time, the length of the longest vertical line, and the recurrence trend. In the third step, these features are reduced to three features by the linear discriminant analysis (LDA) technique. Using LDA not only reduces the number of the input features, but also increases the classification accuracy by selecting the most discriminating features. Finally, a support vector machine-based classifier is used to classify the HRV signals. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database, which consists of both 30-min ECG recordings ending just prior to the onset of PAF and segments at least 45 min distant from any PAF event. The obtained sensitivity, specificity, and positive predictivity were 96.55%, 100%, and 100%, respectively.

  3. A multiscale product approach for an automatic classification of voice disorders from endoscopic high-speed videos.

    PubMed

    Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Joerg

    2013-01-01

    Direct observation of vocal fold vibration is indispensable for a clinical diagnosis of voice disorders. Among current imaging techniques, high-speed videoendoscopy constitutes a state-of-the-art method capturing several thousand frames per second of the vocal folds during phonation. Recently, a method for extracting descriptive features from phonovibrograms, a two-dimensional image containing the spatio-temporal pattern of vocal fold dynamics, was presented. The derived features are closely related to a clinically established protocol for functional assessment of pathologic voices. The discriminative power of these features for different pathologic findings and configurations has not been assessed yet. In the current study, a collective of 220 subjects is considered for two- and multi-class problems of healthy and pathologic findings. The performance of the proposed feature set is compared to conventional feature reduction routines and was found to clearly outperform them. As such, the proposed procedure shows great potential for the diagnosis of vocal fold disorders.

  4. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider the feature space which limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Direct detection of light dark matter and solar neutrinos via color center production in crystals

    NASA Astrophysics Data System (ADS)

    Budnik, Ranny; Cheshnovsky, Ori; Slone, Oren; Volansky, Tomer

    2018-07-01

    We propose a new low-threshold direct-detection concept for dark matter and for coherent nuclear scattering of solar neutrinos, based on the dissociation of atoms and subsequent creation of color center type defects within a lattice. The novelty in our approach lies in its ability to detect single defects in a macroscopic bulk of material. This class of experiments features ultra-low energy thresholds, which allow for the probing of dark matter as light as O(10) MeV through nuclear scattering. Another feature of defect creation in crystals is directional information, which presents as a spectacular signal and a handle on background reduction in the form of daily modulation of the interaction rate. We discuss the envisioned setup and detection technique, as well as background reduction. We further calculate the expected rates for dark matter and solar neutrinos in two example crystals for which available data exist, demonstrating the prospective sensitivity of such experiments.

  6. Chemical fractionation-enhanced structural characterization of marine dissolved organic matter

    NASA Astrophysics Data System (ADS)

    Arakawa, N.; Aluwihare, L.

    2016-02-01

    Describing the molecular fingerprint of dissolved organic matter (DOM) requires sample processing methods and separation techniques that can adequately minimize its complexity. We have employed acid hydrolysis as a way to make the subcomponents of marine solid phase-extracted (PPL) DOM more accessible to analytical techniques. Using a combination of NMR and chemical derivatization or reduction analyzed by comprehensive (GCxGC) gas chromatography, we observed chemical features strikingly similar to terrestrial DOM. In particular, we observed reduced alicyclic hydrocarbons believed to be the backbone of previously identified carboxylic-rich alicyclic material (CRAM). Additionally, we found carbohydrates, amino acids and small lipids and acids.

  7. A new time-frequency method for identification and classification of ball bearing faults

    NASA Astrophysics Data System (ADS)

    Attoui, Issam; Fergani, Nadir; Boutasseta, Nadir; Oudjani, Brahim; Deliou, Adel

    2017-06-01

    For the fault diagnosis of ball bearings, which are among the most critical components of rotating machinery, this paper presents a time-frequency procedure incorporating a new feature extraction step that combines the classical wavelet packet decomposition energy distribution technique with a new feature extraction technique based on the selection of the most impulsive frequency bands. In the proposed procedure, firstly, as a pre-processing step, the most impulsive frequency bands are selected at different bearing conditions using a combination of the Fast Fourier Transform (FFT) and Short-Frequency Energy (SFE) algorithms. Secondly, once the most impulsive frequency bands are selected, the measured machinery vibration signals are decomposed into different frequency sub-bands by using the discrete Wavelet Packet Decomposition (WPD) technique to maximize the detection of their frequency contents, and subsequently the most useful sub-bands are represented in the time-frequency domain by using the Short Time Fourier Transform (STFT) algorithm to determine exactly which frequency components are present in those sub-bands. Once the proposed feature vector is obtained, three feature dimensionality reduction techniques are employed: Linear Discriminant Analysis (LDA), a feedback wrapper method, and Locality Sensitive Discriminant Analysis (LSDA). Lastly, the Adaptive Neuro-Fuzzy Inference System (ANFIS) algorithm is used for instantaneous identification and classification of bearing faults. To evaluate the performance of the proposed method, different testing data sets were applied to the trained ANFIS model, covering healthy and faulty bearing conditions under various load levels, fault severities and rotating speeds. The experimental results demonstrate that the proposed method can serve as an intelligent bearing fault diagnosis system.
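
    The wavelet packet energy-distribution features at the heart of the first extraction step can be sketched in a few lines. This is an illustration under assumed settings (db4 wavelet, decomposition level 3, synthetic signal), not the paper's configuration; it requires the PyWavelets package.

    ```python
    # Sketch of WPD energy-distribution features from a 1-D vibration signal.
    # Wavelet, level, and sampling rate are assumptions for illustration.
    import numpy as np
    import pywt

    fs = 12_000                                   # assumed sampling rate
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.randn(t.size)  # toy signal

    wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=3)
    nodes = wp.get_level(3, order="natural")      # 8 frequency sub-bands at level 3
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    features = energies / energies.sum()          # normalised energy distribution
    print(features)
    ```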

  8. Structural Acoustic Characteristics of Aircraft and Active Control of Interior Noise

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.

    1998-01-01

    The reduction of aircraft cabin sound levels to acceptable values still remains a topic of much research. The use of conventional passive approaches has been extensively studied and implemented. However, the performance limits of these techniques have been reached. In this project, new techniques for understanding the structural acoustic behavior of aircraft fuselages, and the use of this knowledge in developing advanced new control approaches, are investigated. A central feature of the project is the Aircraft Fuselage Test Facility at Va Tech, which is based around a full-scale Cessna Citation III fuselage. The work is divided into two main parts: the first part investigates the use of an inverse technique for identifying dominant fuselage vibrations; the second part studies the development and implementation of active and active-passive techniques for controlling aircraft interior noise.

  9. ELM mitigation studies in JET and implications for ITER

    NASA Astrophysics Data System (ADS)

    de La Luna, Elena

    2009-11-01

    Type I edge localized modes (ELMs) remain a serious concern for ITER because of the high transient heat and particle flux that can lead to rapid erosion of the divertor plates. This has stimulated worldwide research on the exploration of different methods to avoid or at least mitigate the ELM energy loss while maintaining adequate confinement. ITER will require reliable ELM control over a wide range of operating conditions, including changes in the edge safety factor; therefore a suite of different techniques is highly desirable. In JET several techniques have been demonstrated for controlling the frequency and size of type I ELMs, including resonant perturbations of the edge magnetic field (RMP), ELM magnetic triggering by fast vertical movement of the plasma column (``vertical kicks'') and ELM pacing using pellet injection. In this paper we present results from recent dedicated experiments in JET focusing on integrating the different ELM mitigation methods into similar plasma scenarios. Plasma parameter scans provide a comparison of the performance of the different techniques in terms of both the reduction in ELM size and the impact of each control method on plasma confinement. The compatibility of different ELM mitigation schemes has also been investigated. The plasma response to RMP and vertical kicks during the ELM mitigation phase shares common features: the reduction in ELM size (up to a factor of 3) is accompanied by a reduction in pedestal pressure (mainly due to a loss of density) with only minor (< 10%) reduction of the stored energy. Interestingly, it has been found that the combined application of RMP and kicks leads to a reduction of the threshold perturbation level (vertical displacement in the case of the kicks) necessary for the ELM mitigation to occur. The implication of these results for ITER will be discussed.

  10. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

    The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is applied to the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, a majority voting mechanism is applied to generate the final output. Extensive experiments show that the proposed method achieves competitive performance compared with several traditional approaches.

  11. Energy and life-cycle cost analysis of a six-story office building

    NASA Astrophysics Data System (ADS)

    Turiel, I.

    1981-10-01

    An energy analysis computer program, DOE-2, was used to compute annual energy use for a typical office building as originally designed and with several energy conserving design modifications. The largest energy use reductions were obtained with the incorporation of daylighting techniques, the use of double pane windows, night temperature setback, and the reduction of artificial lighting levels. A life-cycle cost model was developed to assess the cost-effectiveness of the design modifications discussed. The model incorporates such features as inclusion of taxes, depreciation, and financing of conservation investments. The energy conserving strategies are ranked according to economic criteria such as net present benefit, discounted payback period, and benefit to cost ratio.

  12. Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort.

    PubMed

    Bernarding, Corinna; Strauss, Daniel J; Hannemann, Ronny; Seidler, Harald; Corona-Strauss, Farah I

    2017-06-01

    In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method is a translation of our past findings, gained from the evoked electroencephalographic activity, to the oscillatory EEG activity. To test this technique, electroencephalographic data were recorded from experienced hearing aid users with moderate hearing loss while they wore hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort seems to be a useful tool for mapping the effort exerted by the participants. In addition, the results indicate that a directional processing mode can reduce the listening effort in multitalker listening situations.

  13. Applying machine learning classification techniques to automate sky object cataloguing

    NASA Astrophysics Data System (ADS)

    Fayyad, Usama M.; Doyle, Richard J.; Weir, W. Nick; Djorgovski, Stanislav

    1993-08-01

    We describe the application of Artificial Intelligence machine learning techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Mt. Palomar Northern Sky Survey is nearly completed. This survey provides comprehensive coverage of the northern celestial hemisphere in the form of photographic plates. The plates are being transformed into digitized images whose quality will probably not be surpassed in the next ten to twenty years. The images are expected to contain on the order of 10^7 galaxies and 10^8 stars. Astronomers wish to determine which of these sky objects belong to various classes of galaxies and stars. Unfortunately, the size of this data set precludes analysis in an exclusively manual fashion. Our approach is to develop a software system which integrates the functions of independently developed techniques for image processing and data classification. Digitized sky images are passed through image processing routines to identify sky objects and to extract a set of features for each object. These routines are used to help select a useful set of attributes for classifying sky objects. Then GID3 (Generalized ID3) and O-B Tree, two inductive learning techniques, learn classification decision trees from examples. These classifiers are then applied to new data. This development process is highly interactive, with astronomer input playing a vital role. Astronomers refine the feature set used to construct sky object descriptions, and evaluate the performance of the automated classification technique on new data. This paper gives an overview of the machine learning techniques with an emphasis on their general applicability, describes the details of our specific application, and reports the initial encouraging results. The results indicate that our machine learning approach is well-suited to the problem. The primary benefit of the approach is increased data reduction throughput. Another benefit is consistency of classification. The classification rules which are the product of the inductive learning techniques will form an objective, examinable basis for classifying sky objects. A final, not to be underestimated benefit is that astronomers will be freed from the tedium of an intensely visual task to pursue more challenging analysis and interpretation problems based on automatically catalogued data.

  14. A comparative study of new and current methods for dental micro-CT image denoising

    PubMed Central

    Lashgari, Mojtaba; Qin, Jie; Swain, Michael

    2016-01-01

    Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms for dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using Gaussian and median filters as the current filtering approaches and the block-matching and three-dimensional (BM3D) method and total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNR_edge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNR_tex-free) and in preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which the preservation of the texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
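
    The two filter families being compared are available in standard Python imaging libraries, so the comparison setup can be sketched directly. The snippet below is illustrative: the test image is synthetic, the TV weight is an assumed value, and scikit-image's denoise_tv_chambolle stands in for the paper's total variation method.

    ```python
    # Sketch comparing total-variation denoising against a median-filter baseline.
    # The image is a synthetic piecewise-constant phantom with added noise.
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(2)
    clean = np.kron(rng.random((16, 16)), np.ones((8, 8)))      # blocky 128x128 phantom
    img = np.clip(clean + 0.1 * rng.normal(size=clean.shape), 0, 1)

    tv = denoise_tv_chambolle(img, weight=0.1)   # smooths flat areas, preserves edges
    med = median_filter(img, size=3)             # conventional baseline filter
    print("TV error:", np.abs(clean - tv).mean(), " median error:", np.abs(clean - med).mean())
    ```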

  15. Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA

    NASA Astrophysics Data System (ADS)

    He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong

    2018-04-01

    This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.

  16. Soft-lithography fabrication of microfluidic features using thiol-ene formulations.

    PubMed

    Ashley, John F; Cramer, Neil B; Davis, Robert H; Bowman, Christopher N

    2011-08-21

    In this work, a novel thiol-ene based photopolymerizable resin formulation was shown to exhibit highly desirable characteristics, such as low cure time and the ability to overcome oxygen inhibition, for the photolithographic fabrication of microfluidic devices. The feature fidelity, as well as various aspects of the feature shape and quality, were assessed as functions of various resin attributes, particularly the exposure conditions, initiator concentration and inhibitor to initiator ratio. An optical technique was utilized to evaluate the feature fidelity as well as the feature shape and quality. These results were used to optimize the thiol-ene resin formulation to produce high fidelity, high aspect ratio features without significant reductions in feature quality. For structures with aspect ratios below 2, little difference (<3%) in feature quality was observed between thiol-ene and acrylate based formulations. However, at higher aspect ratios, the thiol-ene resin exhibited significantly improved feature quality. At an aspect ratio of 8, raised feature quality for the thiol-ene resin was dramatically better than that achieved by using the acrylate resin. The use of the thiol-ene based resin enabled fabrication of a pinched-flow microfluidic device that has complex channel geometry, small (50 μm) channel dimensions, and high aspect ratio (14) features. This journal is © The Royal Society of Chemistry 2011

  17. On the generation of cnoidal waves in ion beam-dusty plasma containing superthermal electrons and ions

    NASA Astrophysics Data System (ADS)

    El-Bedwehy, N. A.

    2016-07-01

    The reductive perturbation technique is used for investigating an ion beam-dusty plasma system consisting of two opposite polarity dusty grains, and superthermal electrons and ions in addition to ion beam. A two-dimensional Kadomtsev-Petviashvili equation is derived. The solution of this equation, employing Painlevé analysis, leads to cnoidal waves. The dependence of the structural features of these waves on the physical plasma parameters is investigated.

  18. On the generation of cnoidal waves in ion beam-dusty plasma containing superthermal electrons and ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El-Bedwehy, N. A., E-mail: nab-elbedwehy@yahoo.com

    2016-07-15

    The reductive perturbation technique is used for investigating an ion beam-dusty plasma system consisting of two opposite polarity dusty grains, and superthermal electrons and ions in addition to ion beam. A two-dimensional Kadomtsev–Petviashvili equation is derived. The solution of this equation, employing Painlevé analysis, leads to cnoidal waves. The dependence of the structural features of these waves on the physical plasma parameters is investigated.

  19. Structures of the Recurrence Plot of Heart Rate Variability Signal as a Tool for Predicting the Onset of Paroxysmal Atrial Fibrillation

    PubMed Central

    Mohebbi, Maryam; Ghassemian, Hassan; Asl, Babak Mohammadzadeh

    2011-01-01

    This paper aims to propose an effective paroxysmal atrial fibrillation (PAF) predictor which is based on the analysis of the heart rate variability (HRV) signal. Predicting the onset of PAF, based on non-invasive techniques, is clinically important and can be invaluable in order to avoid useless therapeutic interventions and to minimize the risks for the patients. This method consists of four steps: preprocessing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the next step, the recurrence plot (RP) of the HRV signal is obtained and six features are extracted to characterize the basic patterns of the RP. These features consist of the length of the longest diagonal segments, the average length of the diagonal lines, entropy, trapping time, the length of the longest vertical line, and the recurrence trend. In the third step, these features are reduced to three features by the linear discriminant analysis (LDA) technique. Using LDA not only reduces the number of the input features, but also increases the classification accuracy by selecting the most discriminating features. Finally, a support vector machine-based classifier is used to classify the HRV signals. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database, which consists of both 30-min ECG recordings ending just prior to the onset of PAF and segments at least 45 min distant from any PAF event. The obtained sensitivity, specificity, and positive predictivity were 96.55%, 100%, and 100%, respectively. PMID:22606666

  20. Remembering Irving I. Gottesman: Twin Research Colleague and Friend Extraordinaire/Research Studies: Face-Lift Technique Comparison in Identical Twins; Raising Preterm Twins; Fetal Behavior in Dichorionic Twin Pregnancies; Co-Bedding and Stress Reduction in Twins/Public Interest: Identical Co-Twins' Same Day Delivery; Teaching Twins in Bosnia; Twin Auctioneers; Sister, the Play.

    PubMed

    Segal, Nancy L

    2016-12-01

    Dr Irving I. Gottesman, a colleague, friend, and long-time member of the International Society of Twin Studies passed away on June 29, 2016. His contributions to twin research and some personal reflections are presented to honor both the man and the memory. This tribute is followed by short reviews of twin research concerning differences between cosmetic surgical techniques, the rearing of preterm twins, behavioral observations of dichorionic fetal twins, and the outcomes of co-bedding twins with reference to stress reduction. Interesting and informative articles in the media describe identical co-twins who delivered infants on the same day, educational policies regarding twins in Bosnia and the United Kingdom, unusual practices of twin auctioneers, and a theatrical production, Sister, featuring identical twins in the leading roles.

  1. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
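
    The effect of remapping on entropy is easy to demonstrate. The toy sketch below uses a simple one-dimensional differential remap on a smooth synthetic image; the zeroth-order entropy it prints bounds the bits per pixel achievable by an entropy coder such as the high-order arithmetic coder mentioned above.

    ```python
    # Toy illustration: differential (DPCM-style) remapping of a smooth image
    # lowers its zeroth-order entropy. The image is synthetic; real remappings
    # in the paper also include bi-linear interpolation and linear prediction.
    import numpy as np

    def entropy_bits(a):
        """Zeroth-order Shannon entropy in bits per symbol."""
        _, counts = np.unique(a, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    x, y = np.meshgrid(np.arange(256), np.arange(256))
    img = ((x + y) // 4 % 256).astype(np.int16)   # smooth synthetic "satellite" image
    diff = np.diff(img.flatten())                 # simple 1-D differential remap
    print("raw:", entropy_bits(img), "bits/px;  remapped:", entropy_bits(diff), "bits/px")
    ```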

  2. Automatic age and gender classification using supervised appearance model

    NASA Astrophysics Data System (ADS)

    Bukar, Ali Maina; Ugail, Hassan; Connah, David

    2016-11-01

    Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, thus, it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. Toward this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
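
    The contrast between supervised and unsupervised reduction is easy to demonstrate. In the hedged sketch below, with synthetic data and assumed component counts, partial least-squares picks components that covary with the labels while PCA, blind to the labels, does not.

    ```python
    # Sketch: PLS (supervised) vs. PCA (unsupervised) dimensionality reduction.
    # Data are synthetic; the class depends on only two of the fifty features.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 50))
    y = (X[:, 0] - X[:, 1] > 0).astype(float)       # labels tied to two features only

    T = PLSRegression(n_components=2).fit(X, y.reshape(-1, 1)).transform(X)
    Z = PCA(n_components=2).fit_transform(X)
    print("PLS component 1 corr with y:", abs(np.corrcoef(T[:, 0], y)[0, 1]))
    print("PCA component 1 corr with y:", abs(np.corrcoef(Z[:, 0], y)[0, 1]))
    ```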

  3. Discrete Wavelet Transform-Based Whole-Spectral and Subspectral Analysis for Improved Brain Tumor Clustering Using Single Voxel MR Spectroscopy.

    PubMed

    Yang, Guang; Nawaz, Tahir; Barrick, Thomas R; Howe, Franklyn A; Slabaugh, Greg

    2015-12-01

    Many approaches have been considered for automatic grading of brain tumors by means of pattern recognition with magnetic resonance spectroscopy (MRS). Providing an improved technique which can assist clinicians in accurately identifying brain tumor grades is our main objective. The proposed technique, which is based on the discrete wavelet transform (DWT) of whole-spectral or subspectral information of key metabolites, combined with unsupervised learning, inspects the separability of the extracted wavelet features from the MRS signal to aid the clustering. In total, we included 134 short echo time single voxel MRS spectra (SV MRS) in our study that cover normal controls, low grade and high grade tumors. The combination of DWT-based whole-spectral or subspectral analysis and unsupervised clustering achieved an overall clustering accuracy of 94.8% and a balanced error rate of 7.8%. To the best of our knowledge, it is the first study using DWT combined with unsupervised learning to cluster brain SV MRS. Instead of dimensionality reduction on SV MRS or feature selection using model fitting, our study provides an alternative method of extracting features to obtain promising clustering results.
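
    The combination of DWT features with unsupervised clustering can be outlined in a few lines. The sketch below is illustrative only: the spectra are synthetic, and the wavelet, decomposition level, and cluster count are assumptions rather than the study's settings; it requires PyWavelets and scikit-learn.

    ```python
    # Sketch: DWT approximation coefficients as features for unsupervised k-means.
    # Spectra are synthetic placeholders for the 134 SV MRS spectra.
    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    spectra = rng.normal(size=(134, 512))                    # placeholder SV MRS spectra
    feats = np.array([pywt.wavedec(s, "db2", level=4)[0]     # level-4 approximation coeffs
                      for s in spectra])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
    print(np.bincount(labels))                               # cluster sizes
    ```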

  4. Development of a household waste treatment subsystem, volume 1. [with water conservation features

    NASA Technical Reports Server (NTRS)

    Gresko, T. M.; Murray, R. W.

    1973-01-01

    The domestic waste treatment subsystem was developed to process the daily liquid and non-metallic solid wastes provided by a family of four people. The subsystem was designed to be connected to the sewer line of a household which contained water conservation features. The system consisted of an evaporation technique to separate liquids from solids, an incineration technique for solids reduction, and a catalytic oxidizer for eliminating noxious gases from evaporation and incineration processes. All wastes were passed through a grinder which masticated the solids and deposited them in a settling tank. The liquids were transferred through a cleanable filter into a holding tank. From here the liquids were sprayed into an evaporator and a spray chamber where evaporation occurred. The resulting vapors were processed by catalytic oxidation. Water and latent energy were recovered in a combination evaporator/condenser heat exchanger. The solids were conveyed into an incinerator and reduced to ash while the incineration gases were passed through the catalytic oxidizer along with the processed water vapor.

  5. A hybrid method for classifying cognitive states from fMRI data.

    PubMed

    Parida, S; Dehuri, S; Cho, S-B; Cacha, L A; Poznanski, R R

    2015-09-01

    Functional magnetic resonance imaging (fMRI) makes it possible to detect brain activities in order to elucidate cognitive states. The complex nature of fMRI data requires understanding of the analyses applied to produce possible avenues for developing models of cognitive-state classification and improving brain activity prediction. While many models for the classification task in fMRI data analysis have been developed, in this paper, we present a novel hybrid technique that combines the best attributes of genetic algorithms (GAs) and the ensemble decision tree technique, and that consistently outperforms the other methods being used for cognitive-state classification. Specifically, this paper illustrates the combined effort of a decision-tree ensemble and GAs for feature selection through an extensive simulation study and discusses the classification performance with respect to fMRI data. We have shown that our proposed method exhibits a significant reduction in the number of features with a clear edge in classification accuracy over an ensemble of decision trees.

  6. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE PAGES

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...

    2016-12-07

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  7. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  8. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  9. Efficiency reduction and pseudo-convergence in replica exchange sampling of peptide folding-unfolding equilibria

    NASA Astrophysics Data System (ADS)

    Denschlag, Robert; Lingenheil, Martin; Tavan, Paul

    2008-06-01

    Replica exchange (RE) molecular dynamics (MD) simulations are frequently applied to sample the folding-unfolding equilibria of β-hairpin peptides in solution, because efficiency gains are expected from this technique. Using a three-state Markov model featuring key aspects of β-hairpin folding, we show that RE simulations can be less efficient than conventional techniques. Furthermore, we demonstrate that one is easily led to erroneously assign convergence to the RE sampling, because RE ensembles can rapidly reach long-lived stationary states. We conclude that typical REMD simulations covering a few tens of nanoseconds are by far too short for sufficient sampling of β-hairpin folding-unfolding equilibria.

  10. Formation of plasmonic silver nanoparticles using rapid thermal annealing at low temperature and study in reflectance reduction of Si surface

    NASA Astrophysics Data System (ADS)

    Barman, Bidyut; Dhasmana, Hrishikesh; Verma, Abhishek; Kumar, Amit; Pratap Chaudhary, Shiv; Jain, V. K.

    2017-09-01

    This work presents studies of plasmonic silver nanoparticle (AgNP) formation at low temperatures (200 °C-300 °C) on a Si surface by sputtering followed by rapid thermal processing (RTP) for different time durations (5-30 min). The study reveals that 20 min of RTP yields the minimum average AgNP size (60.42 nm) at all temperatures, with a corresponding reduction in the reflectance of the Si surface from 40.12% to a mere 1.15% over the 300-800 nm wavelength region for RTP at 200 °C. A detailed supporting growth mechanism is also discussed. This low-temperature technique can be helpful in achieving efficiency improvement in solar cells via reflectance reduction, with additional features such as reproducibility, minimal processing time and very good adhesion, without damaging the device parameters of underlying layers.

  11. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
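
    The weight-window mechanism itself is generic Monte Carlo machinery and can be illustrated with a toy slab-transmission problem. The sketch below is not MCNP or PARTISN behaviour: implicit capture shrinks each history's weight deterministically, and Russian roulette against an assumed window lower bound terminates low-weight histories without biasing the estimate.

    ```python
    # Toy sketch of the weight-window idea on a slab deep-penetration problem.
    # Layer count, survival probability, and window bounds are all assumptions.
    import random

    LAYERS, P_SURVIVE, N = 30, 0.7, 20_000
    W_LOW, W_TARGET = 1e-3, 1e-2            # window lower bound, roulette survivor weight

    def one_history():
        w = 1.0
        for _ in range(LAYERS):
            w *= P_SURVIVE                  # implicit capture: absorb weight, not the particle
            if w < W_LOW:                   # weight fell below the window: Russian roulette
                if random.random() < w / W_TARGET:
                    w = W_TARGET            # survivor carries the boosted weight (unbiased)
                else:
                    return 0.0              # history terminated
        return w

    random.seed(0)
    estimate = sum(one_history() for _ in range(N)) / N
    print("MC transmission:", estimate, " exact:", P_SURVIVE ** LAYERS)
    ```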

  12. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce the human error involved in identifying diseases in brain MRIs. In this study, the fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, the fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, the LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applies k-fold stratified cross-validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of the experimental results and performance comparisons, it is observed that the proposed medical decision support system outperforms all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from individual subjects, and therefore it can be used as a significant tool in clinical practice.
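
    A hedged sketch of a DWT-PCA-SVM pipeline of the kind described, using scikit-learn's RBF-kernel SVC as a stand-in for LS-SVM (which scikit-learn does not provide) and random arrays in place of MRI slices:

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    def dwt_features(img, wavelet="haar", level=3):
        """Flatten the level-3 approximation sub-band as a feature vector."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        return coeffs[0].ravel()

    # Toy stand-in for brain MRI slices: 64x64 arrays with binary labels
    rng = np.random.default_rng(1)
    X = np.array([dwt_features(rng.normal(size=(64, 64))) for _ in range(40)])
    y = rng.integers(0, 2, size=40)

    # PCA for feature reduction, then an RBF-kernel SVM, with a stratified
    # k-fold search over the kernel hyper-parameters
    pipe = Pipeline([("pca", PCA(n_components=10)), ("svm", SVC(kernel="rbf"))])
    grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": ["scale", 0.01]},
                        cv=StratifiedKFold(n_splits=5))
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)
    ```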

  13. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    PubMed

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

    The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.
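
    A minimal sketch, not the authors' architecture, of the dimensionality-reduction step: extract penultimate-layer CNN activations and apply PCA to them, here with PyTorch and hypothetical input sizes:

    ```python
    import torch
    import torch.nn as nn
    from sklearn.decomposition import PCA

    class TinyCNN(nn.Module):
        """Small stand-in CNN exposing its penultimate-layer activations."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
            self.penultimate = nn.Linear(16 * 4 * 4, 32)
            self.head = nn.Linear(32, n_classes)

        def forward(self, x, return_features=False):
            z = torch.relu(self.penultimate(self.features(x).flatten(1)))
            return z if return_features else self.head(z)

    model = TinyCNN()
    imgs = torch.randn(20, 1, 64, 64)                # stand-in MR slices
    with torch.no_grad():
        feats = model(imgs, return_features=True).numpy()
    pcs = PCA(n_components=2).fit_transform(feats)   # key directions in feature space
    print(pcs.shape)
    ```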

  14. Machine learning approach for automated screening of malaria parasite using light microscopic images.

    PubMed

    Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan

    2013-02-01

    The aim of this paper is to address the development of computer-assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection, and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker-controlled watershed transformation, and subsequently a total of ninety-six features describing the shape, size, and texture of erythrocytes are extracted with respect to parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating six classes. Here, a feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while the SVM achieves its highest accuracy, i.e., 83.5%, with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared for malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework

    NASA Astrophysics Data System (ADS)

    Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco

    2017-11-01

    We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique specifically designed to be effective for LES applications is proposed, and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.

  16. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, Ehsan, E-mail: samei@duke.edu; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small-feature and large-feature tasks, respectively. Compared to FBP and ASIR, MBIR indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited by statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different levels for different tasks.
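    The detectability index can be computed from the task-based MTF, the NPS, and a task function; the sketch below uses the common non-prewhitening formulation on a toy 1-D frequency grid (the paper's exact observer model and 2-D integration may differ):

    ```python
    import numpy as np

    def detectability_npw(mtf, nps, w_task, df):
        """Non-prewhitening detectability index d' on a common frequency grid:
            d'^2 = [sum W^2 MTF^2 df]^2 / [sum W^2 MTF^2 NPS df]
        """
        signal = (w_task ** 2) * (mtf ** 2)
        d2 = (signal.sum() * df) ** 2 / ((signal * nps).sum() * df)
        return np.sqrt(d2)

    # Toy 1-D radial example: a 1.5 mm feature task, low-pass MTF, mid-pass NPS
    f = np.linspace(0.01, 1.0, 200)          # spatial frequency [cycles/mm]
    df = f[1] - f[0]
    mtf = np.exp(-(f / 0.5) ** 2)            # low-pass system MTF
    nps = f * np.exp(-f / 0.3)               # mid-pass noise (FBP-like texture)
    w_task = np.sinc(1.5 * f)                # crude task function, 1.5 mm feature
    print(detectability_npw(mtf, nps, w_task, df))
    ```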

  17. Fast traffic sign recognition with a rotation invariant binary pattern based feature.

    PubMed

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-19

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.

  18. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    PubMed Central

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed. PMID:25608217
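
    Since both records above describe the same method, a single illustration suffices: the sketch below uses scikit-image's rotation-invariant uniform LBP histogram as a stand-in for the RIBP descriptor, whose exact definition is not given in the abstracts:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def ribp_like_histogram(gray_img, p=8, r=1):
        """Rotation-invariant uniform LBP histogram: a compact texture
        feature vector unaffected by in-plane rotation of the sign."""
        lbp = local_binary_pattern(gray_img, P=p, R=r, method="uniform")
        hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
        return hist

    rng = np.random.default_rng(2)
    sign_patch = (rng.random((32, 32)) * 255).astype(np.uint8)  # stand-in sign region
    print(ribp_like_histogram(sign_patch))
    ```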

  19. Functional feature embedded space mapping of fMRI data.

    PubMed

    Hu, Jin; Tian, Jie; Yang, Lei

    2006-01-01

    We have proposed a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. A nonlinear dimension reduction technique, Isomap, is applied for the first time to the high-dimensional features obtained from the frequency domain of the fMRI data. Finally, the presence of activated time series is identified by a clustering method in which the information-theoretic criterion of minimum description length (MDL) is used to estimate the number of clusters. The feasibility of our algorithm is demonstrated by real human experiments. Although we focus on analyzing periodic fMRI data, the approach can be extended to analyze non-periodic fMRI data (event-related fMRI) by replacing the Fourier analysis with a wavelet analysis.
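
    A minimal sketch of the pipeline described above, Fourier-domain features at the stimulus frequency and its harmonics followed by an Isomap embedding, with synthetic data and hypothetical sizes:

    ```python
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(3)
    n_voxels, n_scans, stim_period = 500, 120, 20
    t = np.arange(n_scans)
    ts = rng.normal(size=(n_voxels, n_scans))
    ts[:50] += np.sin(2 * np.pi * t / stim_period)   # 50 "activated" voxels

    # Fourier-domain features: magnitudes at the stimulus frequency and harmonics
    freqs = np.fft.rfftfreq(n_scans)
    fft = np.fft.rfft(ts, axis=1)
    harmonics = [np.argmin(np.abs(freqs - k / stim_period)) for k in (1, 2, 3)]
    feats = np.abs(fft[:, harmonics])

    # Nonlinear dimension reduction with Isomap, then ready for clustering
    embedded = Isomap(n_neighbors=10, n_components=2).fit_transform(feats)
    print(embedded.shape)                            # (500, 2)
    ```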

  20. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis, or PCA, has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelized matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems, that is, speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.
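
    The core of the approach is the EM algorithm for PCA (after Roweis), which recovers the principal subspace without forming the covariance matrix or performing an eigenvalue decomposition; a serial numpy sketch is below (the paper's contribution, the parallel architecture, is not shown):

    ```python
    import numpy as np

    def em_pca(X, k, n_iter=50):
        """EM for PCA: alternate between inferring latent coordinates (E-step)
        and updating the subspace basis (M-step); avoids the covariance
        matrix and its eigendecomposition entirely."""
        Xc = (X - X.mean(axis=0)).T              # d x n, centered
        d = Xc.shape[0]
        rng = np.random.default_rng(0)
        W = rng.normal(size=(d, k))              # random initial basis
        for _ in range(n_iter):
            Z = np.linalg.solve(W.T @ W, W.T @ Xc)        # E-step: k x n latents
            W = Xc @ Z.T @ np.linalg.inv(Z @ Z.T)         # M-step: d x k basis
        Q, _ = np.linalg.qr(W)                   # orthonormalize the subspace
        return Q

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))  # ~3-D data in 50-D
    print(em_pca(X, k=3).shape)                  # (50, 3) subspace basis
    ```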

  1. Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan

    2017-10-01

    This research paper aims to propose a hybrid of the ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for choosing relevant features from customer review datasets. Information gain (IG), the genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process has statistically proven that the ACO-KNN algorithm is significantly improved compared to the baseline algorithms. In addition, the experimental results have shown that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a quality, optimal feature subset that can represent the actual data in customer review data.

  2. Potential for the use of reconstructed IASI radiances in the detection of atmospheric trace gases

    NASA Astrophysics Data System (ADS)

    Atkinson, N. C.; Hilton, F. I.; Illingworth, S. M.; Eyre, J. R.; Hultberg, T.

    2010-07-01

    Principal component (PC) analysis has received considerable attention as a technique for the extraction of meteorological signals from hyperspectral infra-red sounders such as the Infrared Atmospheric Sounding Interferometer (IASI) and the Atmospheric Infrared Sounder (AIRS). In addition to achieving substantial bit-volume reductions for dissemination purposes, the technique can also be used to generate reconstructed radiances in which random instrument noise has been reduced. Studies on PC analysis of hyperspectral infrared sounder data have been undertaken in the context of numerical weather prediction, instrument monitoring and geophysical variable retrieval, as well as data compression. This study examines the potential of PC analysis for chemistry applications. A major concern in the use of PC analysis for chemistry is that the spectral features associated with trace gases may not be well represented in the reconstructed spectra, either due to deficiencies in the training set or due to the limited number of PC scores used in the radiance reconstruction. In this paper we show examples of reconstructed IASI radiances for several trace gases: ammonia, sulphur dioxide, methane and carbon monoxide. It is shown that care must be taken in the selection of spectra for the initial training set: an iterative technique, in which outlier spectra are added to a base training set, gives the best results. For the four trace gases examined, key features of the chemical signatures are retained in the reconstructed radiances, whilst achieving a substantial reduction in instrument noise. A new regional re-transmission service for IASI is scheduled to start in 2010, as part of the EUMETSAT Advanced Retransmission Service (EARS). For this EARS-IASI service it is intended to include PC scores as part of the data stream. The paper describes the generation of the reference eigenvectors for this new service.
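
    Radiance reconstruction from a truncated set of PC scores follows the standard projection formula x_hat = mean + E_k E_k^T (x - mean); a numpy sketch with a random stand-in training set (real IASI spectra have 8461 channels, and the paper's iterative outlier-augmented training-set construction is omitted):

    ```python
    import numpy as np

    def reconstruct_radiances(spectra, eigenvectors, mean, n_scores):
        """Reconstruct spectra from truncated PC scores; noise orthogonal
        to the leading eigenvectors is discarded in the reconstruction."""
        E = eigenvectors[:, :n_scores]            # channels x n_scores
        scores = (spectra - mean) @ E             # the disseminated PC scores
        return mean + scores @ E.T

    # Toy training set standing in for a set of IASI spectra
    rng = np.random.default_rng(5)
    train = rng.normal(size=(1000, 300))
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    recon = reconstruct_radiances(train[:5], Vt.T, mean, n_scores=50)
    print(np.abs(recon - train[:5]).mean())       # reconstruction residual
    ```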

  3. Smokers' and drinkers' choice of smartphone applications and expectations of engagement: a think aloud and interview study.

    PubMed

    Perski, Olga; Blandford, Ann; Ubhi, Harveen Kaur; West, Robert; Michie, Susan

    2017-02-28

    Public health organisations such as the National Health Service in the United Kingdom and the National Institutes of Health in the United States provide access to online libraries of publicly endorsed smartphone applications (apps); however, there is little evidence that users rely on this guidance. Rather, one of the most common methods of finding new apps is to search an online store. As hundreds of smoking cessation and alcohol-related apps are currently available on the market, smokers and drinkers must actively choose which app to download prior to engaging with it. The influences on this choice are yet to be identified. This study aimed to investigate 1) design features that shape users' choice of smoking cessation or alcohol reduction apps, and 2) design features judged to be important for engagement. Adult smokers (n = 10) and drinkers (n = 10) interested in using an app to quit/cut down were asked to search an online store to identify and explore a smoking cessation or alcohol reduction app of their choice whilst thinking aloud. Semi-structured interview techniques were used to allow participants to elaborate on their statements. An interpretivist theoretical framework informed the analysis. Verbal reports were audio recorded, transcribed verbatim and analysed using inductive thematic analysis. Participants chose apps based on their immediate look and feel, quality as judged by others' ratings and brand recognition ('social proof'), and titles judged to be realistic and relevant. Monitoring and feedback, goal setting, rewards and prompts were identified as important for engagement, fostering motivation and autonomy. Tailoring of content, a non-judgmental communication style, privacy and accuracy were viewed as important for engagement, fostering a sense of personal relevance and trust. Sharing progress on social media and the use of craving management techniques in social settings were judged not to be engaging because of concerns about others' negative reactions. Choice of a smoking cessation or alcohol reduction app may be influenced by its immediate look and feel, 'social proof' and titles that appear realistic. Design features that enhance motivation, autonomy, personal relevance and credibility may be important for engagement.

  4. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    PubMed

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2018-01-01

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and such concatenation does not efficiently exploit the complementary properties of the different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.

  5. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    PubMed

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with increasing dimensions of the input feature vector (outer factor) as well as of its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, comprising three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, is adequate for a classification task of 3~5 pattern classes considering the trade-off between time consumption and classification rate.
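
    The "at least 90% cumulative variance" rule maps directly onto scikit-learn's PCA, which accepts a variance fraction as n_components; a small sketch with synthetic sensor-array data:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # PCA(n_components=0.90) keeps just enough components to explain >= 90%
    # of the variance, matching the channel-sizing rule discussed above.
    rng = np.random.default_rng(6)
    X = rng.normal(size=(60, 16)) @ rng.normal(size=(16, 16))  # correlated sensor data
    pca = PCA(n_components=0.90).fit(X)
    print(pca.n_components_, pca.explained_variance_ratio_.cumsum()[-1])
    ```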

  6. Noise reduction with complex bilateral filter.

    PubMed

    Matsumoto, Mitsuharu

    2017-12-01

    This study introduces a noise reduction technique that uses a complex bilateral filter. A bilateral filter is a nonlinear filter originally developed for images that can reduce noise while preserving edge information. It is an attractive filter and has been used in many applications in image processing. When it is applied to an acoustical signal, small-amplitude noise is reduced while the speech signal is preserved. However, a bilateral filter cannot handle noise with relatively large amplitudes owing to its innate characteristics. In this study, the noisy signal is transformed into the time-frequency domain and the filter is improved to handle complex spectra. The high-amplitude noise is reduced in the time-frequency domain via the proposed filter. The features and the potential of the proposed filter are also confirmed through experiments.

  7. Ultrasound speckle reduction based on fractional order differentiation.

    PubMed

    Shao, Dangguo; Zhou, Ting; Liu, Fan; Yi, Sanli; Xiang, Yan; Ma, Lei; Xiong, Xin; He, Jianfeng

    2017-07-01

    Ultrasound images show a granular pattern of noise known as speckle that diminishes their quality and results in difficulties in diagnosis. To preserve edges and features, this paper proposes a fractional differentiation-based image operator to reduce speckle in ultrasound. An image de-noising model based on fractional partial differential equations, with a balance relation between k (the gradient modulus threshold that controls the conduction) and v (the order of fractional differentiation), was constructed by an effective combination of fractional calculus theory and a partial differential equation, and its numerical algorithm was implemented using a fractional differential mask operator. The proposed algorithm achieves better speckle reduction and structure preservation than three existing methods [the P-M model, the speckle reducing anisotropic diffusion (SRAD) technique, and the detail preserving anisotropic diffusion (DPAD) technique], and it is significantly faster than bilateral filtering (BF) while producing virtually the same experimental results. Ultrasound phantom testing and in vivo imaging show that the proposed method can improve the quality of an ultrasound image in terms of tissue SNR, CNR, and FOM values.

  8. Features of flow around the flying wing model at various attack and slip angle

    NASA Astrophysics Data System (ADS)

    Pavlenko, A. M.; Zanin, B. Yu.; Katasonov, M. M.

    2017-10-01

    An experimental study of flow features around an aircraft model having a "flying wing" form and belonging to the category of small unmanned aerial vehicles was carried out. Hot-wire anemometry and flow visualization techniques were used in the investigation to obtain quantitative data and streamline pictures of the flow near the model surface. The evolution of vortex structures depending on the attack and slip angle was demonstrated. The possibility of flow control and reduction of flow separation zones on the wing surface by means of ledges in the form of cones was also investigated. It was shown that the laminar-turbulent transition scenario on the flying wing model is identical to the one on a straight wing and occurs through the development of a packet of unstable oscillations in the boundary layer separation.

  9. Green and facile synthesis of fibrous Ag/cotton composites and their catalytic properties for 4-nitrophenol reduction

    NASA Astrophysics Data System (ADS)

    Li, Ziyu; Jia, Zhigang; Ni, Tao; Li, Shengbiao

    2017-12-01

    Natural cotton, featuring abundant oxygen-containing functional groups, has been utilized as a reductant to synthesize Ag nanoparticles on its surface. Through this facile and environmentally friendly reduction process, a fibrous Ag/cotton composite (FAC) was conveniently synthesized. Various characterization techniques including XRD, XPS, TEM, SEM, EDS, and FT-IR were utilized to study the material's microstructure and surface properties. The resulting FAC exhibited favorable activity in the catalytic reduction of 4-nitrophenol, with a high reaction rate. Moreover, the fibrous Ag/cotton composites were capable of forming a desirable catalytic mat for catalysis with simultaneous product separation. Reactants passing through the mat could be catalytically transformed to product, which is of great significance for water treatment. Such a catalyst (FAC) is thus expected to have potential as a highly efficient, cost-effective, and eco-friendly catalyst for industrial applications. More importantly, this newly developed synthetic methodology could serve as a general tool to design and synthesize other metal/biomass composite catalysts for a wider range of catalytic applications.

  10. Reducing and Analyzing the PHAT Survey with the Cloud

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin F.; Olsen, Knut; Khan, Rubab; Pirone, Daniel; Rosema, Keith

    2018-05-01

    We discuss the technical challenges we faced and the techniques we used to overcome them when reducing the Panchromatic Hubble Andromeda Treasury (PHAT) photometric data set on the Amazon Elastic Compute Cloud (EC2). We first describe the architecture of our photometry pipeline, which we found particularly efficient for reducing the data in multiple ways for different purposes. We then describe the features of EC2 that make this architecture both efficient to use and challenging to implement. We describe the techniques we adopted to process our data, and suggest ways these techniques may be improved for those interested in attempting such reductions in the future. Finally, we summarize the output photometry data products, which are now hosted publicly in two places in two formats: as simple FITS tables in the high-level science products on MAST, and in a queryable database available through the NOAO Data Lab.

  11. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography

    PubMed Central

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J.; French, Paul M. W.; McGinty, James

    2015-01-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound. PMID:25909009

  12. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography.

    PubMed

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J; French, Paul M W; McGinty, James

    2015-04-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound.

  13. Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates

    NASA Astrophysics Data System (ADS)

    Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.

    Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to the requirements of high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution: a wireless SHM technique built around acoustic emission (AE), with field deployment on the structure of a wind turbine. This solution requires a sampling rate lower than the Nyquist rate. In addition, features extracted from the aliased AE signals, rather than reconstructions of the original signals on board the wireless nodes, are exploited to monitor AE events such as wind, rain, strong hail, and bird strikes under different environmental conditions, in conjunction with artificial AE sources. A time-domain feature extraction algorithm, together with the principal component analysis (PCA) method, is used to extract and classify the relevant information, which in turn is used to classify or recognise a test condition represented by the response signals. This proposed technique yields a significant data reduction during the monitoring process of wind turbine blades.

  14. Wavelet median denoising of ultrasound images

    NASA Astrophysics Data System (ADS)

    Macey, Katherine E.; Page, Wyatt H.

    2002-05-01

    Ultrasound images are contaminated with both additive and multiplicative noise, which is modeled by Gaussian and speckle noise respectively. Distinguishing small features such as fallopian tubes in the female genital tract in the noisy environment is problematic. A new method for noise reduction, Wavelet Median Denoising, is presented. Wavelet Median Denoising consists of performing a standard noise reduction technique, median filtering, in the wavelet domain. The new method is tested on 126 images, comprised of 9 original images each with 14 levels of Gaussian or speckle noise. Results for both separable and non-separable wavelets are evaluated, relative to soft-thresholding in the wavelet domain, using the signal-to-noise ratio and subjective assessment. The performance of Wavelet Median Denoising is comparable to that of soft-thresholding. Both methods are more successful in removing Gaussian noise than speckle noise. Wavelet Median Denoising outperforms soft-thresholding for a larger number of cases of speckle noise reduction than of Gaussian noise reduction. Noise reduction is more successful using non-separable wavelets than separable wavelets. When both methods are applied to ultrasound images obtained from a phantom of the female genital tract a small improvement is seen; however, a substantial improvement is required prior to clinical use.
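
    A minimal sketch of the method as described, median filtering of the detail sub-bands in a separable wavelet decomposition followed by reconstruction, using pywt and scipy:

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import median_filter

    def wavelet_median_denoise(img, wavelet="db2", level=2, size=3):
        """Wavelet Median Denoising: median-filter the detail sub-bands in
        the wavelet domain, then invert the transform (separable case)."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        denoised = [coeffs[0]]                   # keep the approximation band
        for details in coeffs[1:]:
            denoised.append(tuple(median_filter(d, size=size) for d in details))
        return pywt.waverec2(denoised, wavelet)

    rng = np.random.default_rng(7)
    clean = np.zeros((128, 128)); clean[40:90, 40:90] = 1.0
    noisy = clean + 0.3 * rng.normal(size=clean.shape)   # additive Gaussian noise
    print(np.abs(wavelet_median_denoise(noisy) - clean).mean())
    ```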

  15. System Complexity Reduction via Feature Selection

    ERIC Educational Resources Information Center

    Deng, Houtao

    2011-01-01

    This dissertation transforms a set of system complexity reduction problems to feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree…

  16. Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches

    NASA Astrophysics Data System (ADS)

    H, Vathsala; Koolagudi, Shashidhar G.

    2017-10-01

    This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed-itemset-generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. An application classifying predicted rainfall into flood, excess, normal, deficit, and drought categories based on 36 predictors consisting of land and ocean variables is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).

  17. HClass: Automatic classification tool for health pathologies using artificial intelligence techniques.

    PubMed

    Garcia-Chimeno, Yolanda; Garcia-Zapirain, Begonya

    2015-01-01

    The classification of subjects' pathologies enables rigour to be applied to the treatment of certain pathologies, as doctors on occasion juggle so many variables that they can end up confusing some illnesses with others. Thanks to Machine Learning techniques applied to a health-record database, it is possible to make such classifications using our algorithm, hClass. hClass performs non-linear classification of either a supervised, non-supervised or semi-supervised type. The machine is configured using other techniques such as validation of the set to be classified (cross-validation), feature reduction (PCA), and committees for assessing the various classifiers. The tool is easy to use: the sample matrix and features that one wishes to classify, the number of iterations, and the subjects who are going to be used to train the machine all need to be introduced as inputs. As a result, the success rate is shown either via a classifier or via a committee if one has been formed. A 90% success rate is obtained with the AdaBoost classifier and 89.7% in the case of a committee (comprising three classifiers) when PCA is applied. This tool can be expanded to allow the user to fully characterise the classifiers by adjusting them to each classification use.

  18. Computerized detection of unruptured aneurysms in MRA images: reduction of false positives using anatomical location features

    NASA Astrophysics Data System (ADS)

    Uchiyama, Yoshikazu; Gao, Xin; Hara, Takeshi; Fujita, Hiroshi; Ando, Hiromichi; Yamakawa, Hiroyasu; Asano, Takahiko; Kato, Hiroki; Iwama, Toru; Kanematsu, Masayuki; Hoshi, Hiroaki

    2008-03-01

    The detection of unruptured aneurysms is a major subject in magnetic resonance angiography (MRA). However, their accurate detection is often difficult because of the overlapping between the aneurysm and the adjacent vessels on maximum intensity projection images. The purpose of this study is to develop a computerized method for the detection of unruptured aneurysms in order to assist radiologists in image interpretation. The vessel regions were first segmented using gray-level thresholding and a region growing technique. The gradient concentration (GC) filter was then employed for the enhancement of the aneurysms. The initial candidates were identified in the GC image using a gray-level threshold. For the elimination of false positives (FPs), we determined shape features and an anatomical location feature. Finally, rule-based schemes and quadratic discriminant analysis were employed along with these features for distinguishing between the aneurysms and the FPs. The sensitivity for the detection of unruptured aneurysms was 90.0% with 1.52 FPs per patient. Our computerized scheme can be useful in assisting the radiologists in the detection of unruptured aneurysms in MRA images.

  19. Localized contourlet features in vehicle make and model recognition

    NASA Astrophysics Data System (ADS)

    Zafar, I.; Edirisinghe, E. A.; Acar, B. S.

    2009-02-01

    Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic Number Plate Recognition (ANPR). Several vehicle MMR systems have been proposed in the literature. In parallel, the usefulness of multi-resolution feature analysis techniques leading to efficient object classification algorithms has received close attention from the research community. To this effect, Contourlet transforms, which can provide an efficient directional multi-resolution image representation, have recently been introduced. An attempt has already been made in the literature to use Curvelet/Contourlet transforms in vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain that is capable of increasing classification rates by up to 4% compared to the previously proposed Contourlet-based vehicle MMR approach, in which the features are non-localized and thus result in sub-optimal classification. Further, we show that the proposed algorithm can achieve the increased classification accuracy of 96% at significantly lower computational complexity due to the use of Two-Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction, which preserves features with high between-class variance and low within-class variance.

  20. Wing download reduction using vortex trapping plates

    NASA Technical Reports Server (NTRS)

    Light, Jeffrey S.; Stremel, Paul M.; Bilanin, Alan J.

    1994-01-01

    A download reduction technique using spanwise plates on the upper and lower wing surfaces has been examined. Experimental and analytical techniques were used to determine the download reduction obtained using this technique. Simple two-dimensional wind tunnel testing confirmed the validity of the technique for reducing two-dimensional airfoil drag. Computations using a two-dimensional Navier-Stokes analysis provided insight into the mechanism causing the drag reduction. Finally, the download reduction technique was tested using a rotor and wing to determine the benefits for a semispan configuration representative of a tilt rotor aircraft.

  1. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstation to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP classification. With IRB approval, CTPA of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as reference standard for the UM and PIOPED set, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted from the CROP method are useful for FP reduction.
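
    The optimal-path step is a standard Dijkstra shortest-path search over the vessel centerline graph; below is a self-contained sketch on a toy graph (node names and edge costs are hypothetical):

    ```python
    import heapq

    def dijkstra_path(graph, start, goal):
        """Plain Dijkstra shortest-path, as used conceptually by CROP to trace
        the optimal path from the pulmonary trunk bifurcation to a PE candidate.
        `graph` maps node -> list of (neighbor, edge_cost) along centerlines."""
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue                          # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]

    # Toy centerline graph: trunk -> branches, edge costs = centerline lengths
    vessels = {"trunk": [("A", 2.0), ("B", 4.0)],
               "A": [("candidate", 3.0)],
               "B": [("candidate", 0.5)]}
    print(dijkstra_path(vessels, "trunk", "candidate"))  # ['trunk', 'B', 'candidate']
    ```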

  2. Health Technology Assessment of pathogen reduction technologies applied to plasma for clinical use

    PubMed Central

    Cicchetti, Americo; Berrino, Alexandra; Casini, Marina; Codella, Paola; Facco, Giuseppina; Fiore, Alessandra; Marano, Giuseppe; Marchetti, Marco; Midolo, Emanuela; Minacori, Roberta; Refolo, Pietro; Romano, Federica; Ruggeri, Matteo; Sacchini, Dario; Spagnolo, Antonio G.; Urbina, Irene; Vaglio, Stefania; Grazzini, Giuliano; Liumbruno, Giancarlo M.

    2016-01-01

    Although existing clinical evidence shows that the transfusion of blood components is becoming increasingly safe, the risk of transmission of known and unknown pathogens, new pathogens or re-emerging pathogens still persists. Pathogen reduction technologies may offer a new approach to increase blood safety. The study is the output of collaboration between the Italian National Blood Centre and the Post-Graduate School of Health Economics and Management, Catholic University of the Sacred Heart, Rome, Italy. A large, multidisciplinary team was created and divided into six groups, each of which addressed one or more HTA domains. Plasma treated with amotosalen + UV light, riboflavin + UV light, methylene blue or a solvent/detergent process was compared to fresh-frozen plasma with regards to current use, technical features, effectiveness, safety, economic and organisational impact, and ethical, social and legal implications. The available evidence is not sufficient to state which of the techniques compared is superior in terms of efficacy, safety and cost-effectiveness. Evidence on efficacy is only available for the solvent/detergent method, which proved to be non-inferior to untreated fresh-frozen plasma in the treatment of a wide range of congenital and acquired bleeding disorders. With regards to safety, the solvent/detergent technique apparently has the most favourable risk-benefit profile. Further research is needed to provide a comprehensive overview of the cost-effectiveness profile of the different pathogen-reduction techniques. The wide heterogeneity of results and the lack of comparative evidence are reasons why more comparative studies need to be performed. PMID:27403740

  3. Tiopronin Gold Nanoparticle Precursor Forms Aurophilic Ring Tetramer

    PubMed Central

    Simpson, Carrie A.; Farrow, Christopher L.; Tian, Peng; Billinge, Simon J.L.; Huffman, Brian J.; Harkness, Kellen M.; Cliffel, David E.

    2010-01-01

    In the two-step synthesis of thiolate-monolayer protected clusters (MPCs), the first step of the reaction is a mild reduction of gold(III) by thiols that generates gold(I) thiolate complexes as intermediates. Using tiopronin (Tio) as the thiol reductant, the characterization of the intermediate Au4Tio4 complex was accomplished with various analytical and structural techniques. Nuclear magnetic resonance (NMR), elemental analysis, thermogravimetric analysis (TGA), and matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS) were all consistent with a cyclic gold(I)-thiol tetramer structure, and the final structural analysis was obtained through the use of powder diffraction and pair distribution functions (PDF). Crystallographic data have proved challenging to obtain for almost all previous gold(I)-thiolate complexes. Herein, a novel characterization technique, combined with standard analytical assessment, proved invaluable for elucidating the structure of these complexes without crystallographic data. This approach, in conjunction with other analytical techniques, in particular mass spectrometry, can elucidate a structure when crystallographic data are unavailable. In addition, luminescent properties provided evidence of aurophilicity within the molecule. The concept of aurophilicity has been introduced to describe a select group of gold-thiolate structures which possess unique characteristics, mainly red photoluminescence and a distinct Au-Au intramolecular distance indicating a weak metal-metal bond, as also evidenced by the structural model of the tetramer. Significant features of both the tetrameric and aurophilic properties of the intermediate gold(I) tiopronin complex are retained after borohydride reduction to form the MPC, including gold(I) tiopronin partial rings as capping motifs, or "staples", and weak red photoluminescence that extends into the near-infrared region. PMID:21067183

  4. Milch versus Stimson technique for nonsedated reduction of anterior shoulder dislocation: a prospective randomized trial and analysis of factors affecting success.

    PubMed

    Amar, Eyal; Maman, Eran; Khashan, Morsi; Kauffman, Ehud; Rath, Ehud; Chechik, Ofir

    2012-11-01

    The shoulder is regarded as the most commonly dislocated major joint in the human body. Most dislocations can be reduced by simple methods in the emergency department, whereas others require more complicated approaches. We compared the efficacy, safety, pain, and duration of the reduction between the Milch technique and the Stimson technique in treating dislocations. We also identified factors that affected success rate. All enrolled patients were randomized to either the Milch technique or the Stimson technique for dislocated shoulder reduction. The study cohort consisted of 60 patients (mean age, 43.9 years; age range, 18-88 years) who were randomly assigned to treatment by either the Stimson technique (n = 25) or the Milch technique (n = 35). Oral analgesics were available for both groups. The 2 groups were similar in demographics, patient characteristics, and pain levels. The first reduction attempt in the Milch and Stimson groups was successful in 82.8% and 28% of cases, respectively (P < .001), and the mean reduction time was 4.68 and 8.84 minutes, respectively (P = .007). The success rate was found to be affected by the reduction technique, the interval between dislocation occurrence and first reduction attempt, and the pain level on admittance. The success rate and time to achieve reduction without sedation were superior for the Milch technique compared with the Stimson technique. Early implementation of reduction measures and low pain levels at presentation favor successful reduction, which, in combination with oral pain medication, constitutes an acceptable and reasonable management alternative to reduction with sedation. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  5. Trigger learning and ECG parameter customization for remote cardiac clinical care information system.

    PubMed

    Bashir, Mohamed Ezzeldin A; Lee, Dong Gyu; Li, Meijing; Bae, Jang-Whan; Shon, Ho Sun; Cho, Myung Chan; Ryu, Keun Ho

    2012-07-01

    Coronary heart disease has been identified as the largest single cause of death around the world. The aim of a cardiac clinical information system is to achieve the best possible diagnosis of cardiac arrhythmias by electronic data processing. A cardiac information system designed to offer remote monitoring of patients who need continuous follow-up is in demand. However, intra- and interpatient electrocardiogram (ECG) morphological descriptors vary over time, and computational limits pose significant challenges for practical implementations. The former requires that the classification model be adjusted continuously, and the latter requires a reduction in the number and types of ECG features, and thus the computational burden, necessary to classify different arrhythmias. We propose the use of adaptive learning to automatically train the classifier on up-to-date ECG data, and employ adaptive feature selection to define unique feature subsets pertinent to different types of arrhythmia. Experimental results show that this hybrid technique outperforms conventional approaches and is, therefore, a promising new intelligent diagnostic tool.

  6. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    NASA Astrophysics Data System (ADS)

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, capturing adequately the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.

  7. Investigation of Error Patterns in Geographical Databases

    NASA Technical Reports Server (NTRS)

    Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic vision system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geological Survey (USGS) digital elevation model (DEM) data have been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternative methods in place of statistical methods in similar problems. They often perform better in pattern recognition, prediction, and classification and categorization problems. Many studies show that when the data are complex and noisy, the accuracy of ANN models is generally higher than that of comparable traditional methods.

  8. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
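
    A hedged sketch of the extrema-feature pipeline described above, substituting a simple moving-average kernel for the filter the authors optimize via the eigenvalue problem:

    ```python
    import numpy as np
    from scipy.signal import argrelextrema

    def extrema_features(series, kernel=None, threshold=0.5):
        """Encode a time series by its robust extrema: smooth with a filter,
        locate local extrema, and keep only the prominent ones."""
        if kernel is None:
            kernel = np.ones(5) / 5.0             # stand-in smoothing filter
        smoothed = np.convolve(series, kernel, mode="same")
        idx = np.concatenate([argrelextrema(smoothed, np.greater)[0],
                              argrelextrema(smoothed, np.less)[0]])
        idx.sort()
        # Threshold on deviation from the mean to discard non-robust extrema
        keep = idx[np.abs(smoothed[idx] - smoothed.mean()) > threshold * smoothed.std()]
        return keep, smoothed[keep]               # positions and amplitudes

    rng = np.random.default_rng(8)
    t = np.linspace(0, 4 * np.pi, 400)
    series = np.sin(t) + 0.2 * rng.normal(size=t.size)
    positions, amplitudes = extrema_features(series)
    print(positions[:6], np.round(amplitudes[:6], 2))
    ```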

  9. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    PubMed Central

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put in to feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6∼8 channels of the model with principal component feature vector values of at least 90% cumulative variance is adequate for a classification task of 3∼5 pattern classes considering the trade-off between time consumption and classification rate. PMID:22736979
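
    A minimal sketch of the abstract's reduction step, assuming synthetic data in place of the e-nose measurements: principal components are retained up to 90% cumulative variance before a simple classifier (the k-NN choice is ours, not the paper's olfactory model).

```python
# Minimal sketch: retain principal components covering >=90% cumulative
# variance before classification. Data shapes and classifier are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 16))      # e.g. 60 e-nose samples, 16 sensor features
y = rng.integers(0, 3, 60)             # three classes (e.g. wine cultivars)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=0.90),   # keep 90% cumulative variance
                    KNeighborsClassifier(n_neighbors=3))
clf.fit(X, y)
print("components kept:", clf.named_steps["pca"].n_components_)
```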

  10. Hybrid approach combining chemometrics and likelihood ratio framework for reporting the evidential value of spectra.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema

    2016-08-10

    Many chemometric tools are invaluable and have proven effective in data mining and in the substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to the rapid development of advanced analytical techniques that deliver much information in a single measurement run. This especially concerns spectra, which are frequently the subject of comparative analysis in, e.g., forensic sciences. In the presented study, microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry, and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio (LR) approach. However, for the proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied to minimise the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of the data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and of the variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Effectiveness of streambank-stabilization techniques along the Kenai River, Alaska

    USGS Publications Warehouse

    Dorava, Joseph M.

    1999-01-01

    The Kenai River in southcentral Alaska is the State's most popular sport fishery and an economically important salmon river that generates as much as $70 million annually. Boatwake-induced streambank erosion and the associated damage to riparian and riverine habitat present a potential threat to this fishery. Bank-stabilization techniques commonly in use along the Kenai River were selected for evaluation of their effectiveness at attenuating boatwakes and retarding streambank erosion. Spruce trees cabled to the bank and biodegradable man-made logs (called 'bio-logs') pinned to the bank were tested because they are commonly used techniques along the river. These two techniques were compared for their ability to reduce wake heights that strike the bank and to reduce erosion of bank material, as well as for the amount and quality of habitat they provide for juvenile chinook salmon. Additionally, an engineered bank-stabilization project was evaluated because this method of bank protection is being encouraged by managers of the river. During a test that included 20 controlled boat passes, the spruce trees and the bio-log provided a similar reduction in boatwake height and bank erosion; however, the spruce trees provided a greater amount of protective habitat than the bio-log. The engineered bank-stabilization project eroded less during nine boat passes and provided more protective cover than the adjacent unprotected natural bank. Features of the bank-stabilization techniques, such as tree limbs and willow plantings that extended into the water from the bank, attenuated the boatwakes, which helped reduce erosion. These features also provided protective cover to juvenile salmon.

  12. Understanding the Degradation Mechanism of Lithium Nickel Oxide Cathodes for Li-Ion Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jing; Hu, Enyuan; Nordlund, Dennis

    The phase transition, charge compensation, and local chemical environment of Ni in LiNiO2 were investigated to understand the degradation mechanism. The electrode was subjected to a variety of bulk and surface-sensitive characterization techniques under different charge–discharge cycling conditions. We observed the phase transition from the original hexagonal H1 phase to another two hexagonal phases (H2 and H3) upon Li deintercalation. Moreover, the gradual loss of H3-phase features was revealed during the repeated charges. The reduction in Ni redox activity occurred at both the charge and the discharge states, and it appeared both in the bulk and at the surface over the extended cycles. In conclusion, the degradation of crystal structure significantly contributes to the reduction of Ni redox activity, which in turn causes the cycling performance decay of LiNiO2.

  13. Analysis of the Westland Data Set

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2001-01-01

    The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters call improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior oil training data and is thus able to quantify probability of error in all exact manner, such that features may be discarded or coarsened appropriately.

  14. Data mining framework for identification of myocardial infarction stages in ultrasound: A hybrid feature extraction paradigm (PART 2).

    PubMed

    Sudarshan, Vidya K; Acharya, U Rajendra; Ng, E Y K; Tan, Ru San; Chou, Siaw Meng; Ghista, Dhanjoo N

    2016-04-01

    Early expansion of the infarcted zone after Acute Myocardial Infarction (AMI) has serious short- and long-term consequences and contributes to increased mortality. Thus, identification of the moderate and severe phases of AMI, before they lead to other catastrophic post-MI medical conditions, is most important for aggressive treatment and management. Advanced image processing techniques together with a robust classifier using two-dimensional (2D) echocardiograms may aid automated classification of the extent of infarcted myocardium. Therefore, this paper proposes novel algorithms, namely Curvelet Transform (CT) and Local Configuration Pattern (LCP), for automated detection of normal, moderately infarcted, and severely infarcted myocardium using 2D echocardiograms. The methodology extracts the LCP features from CT coefficients of echocardiograms. The obtained features are subjected to the Marginal Fisher Analysis (MFA) dimensionality reduction technique followed by a fuzzy entropy based ranking method. Different classifiers are used to differentiate the ranked features into three classes (normal, moderately infarcted, and severely infarcted) based on the extent of damage to the myocardium. The developed algorithm achieved an accuracy of 98.99%, sensitivity of 98.48%, and specificity of 100% for the Support Vector Machine (SVM) classifier using only six features. Furthermore, we have developed an integrated index called the Myocardial Infarction Risk Index (MIRI) to detect normal, moderately, and severely infarcted myocardium using a single number. The proposed system may aid clinicians in faster identification and quantification of the extent of infarcted myocardium using 2D echocardiograms. This system may also aid in identifying persons at risk of developing heart failure based on the extent of infarcted myocardium. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Mining protein database using machine learning techniques.

    PubMed

    Camargo, Renata da Silva; Niranjan, Mahesan

    2008-08-25

    With a large amount of information relating to proteins accumulating in databases widely available online, it is of interest to apply machine learning techniques that, by extracting underlying statistical regularities in the data, make predictions about the functional and evolutionary characteristics of unseen proteins. Such predictions can help in achieving a reduction in the space over which experiment designers need to search in order to improve our understanding of the biochemical properties. Previously it has been suggested that an integration of features computable by comparing a pair of proteins can be achieved by an artificial neural network, hence predicting the degree to which they may be evolutionary related and homologous.
    We compiled two datasets of pairs of proteins, each pair being characterised by seven distinct features. We performed an exhaustive search through all possible combinations of features for the problem of separating remote homologous from analogous pairs, and we note that a significant performance gain was obtained by the inclusion of sequence and structure information. We find that the use of a linear classifier was enough to discriminate a protein pair at the family level. At the superfamily level, however, detecting remote homologous pairs was a considerably harder problem, and we find that the use of nonlinear classifiers achieves significantly higher accuracies.
    In this paper, we compare three different pattern classification methods on two problems formulated as detecting evolutionary and functional relationships between pairs of proteins, and from extensive cross-validation and feature-selection studies we quantify the average limits and uncertainties with which such predictions may be made. Feature selection points to a "knowledge gap" in currently available functional annotations. We demonstrate how the scheme may be employed in a framework to associate an individual protein with an existing family of evolutionarily related proteins.
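
    An exhaustive search over all combinations of a small feature set, as described above, can be sketched directly; the synthetic protein-pair data, the logistic-regression classifier, and the cross-validation setup below are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: exhaustively score every subset of a 7-feature set by
# cross-validation. Data and classifier are illustrative stand-ins.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 7))            # 200 protein pairs, 7 pair features
y = (X[:, 0] + X[:, 3] > 0).astype(int)      # toy homologous/analogous label

best_score, best_subset = -1.0, None
for r in range(1, 8):                        # all 127 non-empty subsets
    for subset in combinations(range(7), r):
        score = cross_val_score(LogisticRegression(),
                                X[:, list(subset)], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset
print("best subset:", best_subset, "accuracy: %.3f" % best_score)
```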

  16. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling.

    PubMed

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

    Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.

  17. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

    Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
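
    The core loop the abstract outlines (iterative subspace selection with LDA plus Gaussian-mixture clustering) can be sketched as follows; the toy waveforms, initialization, and iteration count are assumptions, and the paper's outlier handling and cluster-number test are omitted.

```python
# Hedged sketch of the iterative loop only: alternate between an LDA
# projection based on current labels and GMM re-clustering in that subspace.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
waveforms = np.vstack([rng.normal(m, 1.0, size=(150, 32))
                       for m in (-2.0, 0.0, 2.0)])   # 3 toy units, 32 samples each

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(waveforms)
for _ in range(5):                                   # iterative subspace selection
    Z = LinearDiscriminantAnalysis(n_components=2).fit_transform(waveforms, labels)
    labels = GaussianMixture(n_components=3, random_state=0).fit_predict(Z)
print(np.bincount(labels))                           # spikes per detected unit
```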

  18. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    NASA Astrophysics Data System (ADS)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Splitting on an attribute is a major process in Decision Tree C4.5 classification. However, this process does not have a significant impact on the establishment of the decision tree in terms of removing irrelevant features. A major problem in the decision tree classification process is over-fitting, which results from noisy data and irrelevant features; in turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification modeling; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction, to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for the classification. In experiments conducted on the UCI cervical cancer data set, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity, and precision. Experimental results show that our proposed framework enhances classification accuracy, achieving a 90.70% accuracy rate.
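
    A minimal sketch of the proposed pipeline under stated assumptions: scikit-learn's decision tree is CART with an entropy criterion, used here only as a stand-in for C4.5, and the data are random placeholders with the same shape as the cervical cancer set.

```python
# Minimal sketch: PCA-based feature reduction feeding a decision tree.
# Component count, split, and data are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.standard_normal((858, 36))          # mirrors 858 instances, 36 attributes
y = rng.integers(0, 2, 858)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(PCA(n_components=10),
                      DecisionTreeClassifier(criterion="entropy", random_state=0))
model.fit(X_tr, y_tr)
print("accuracy: %.3f" % model.score(X_te, y_te))
```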

  19. Using Innovative Technologies for Manufacturing and Evaluating Rocket Engine Hardware

    NASA Technical Reports Server (NTRS)

    Betts, Erin M.; Hardin, Andy

    2011-01-01

    Many of the manufacturing and evaluation techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing and evaluating hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) and white light scanning are being adopted and evaluated for their use on J-2X, with hopes of employing both technologies on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powdered metal manufacturing process in order to produce complex part geometries. The white light technique is a non-invasive method that can be used to inspect for geometric feature alignment. Both the DMLS manufacturing method and the white light scanning technique have proven to be viable options for manufacturing and evaluating rocket engine hardware, and further development and use of these techniques is recommended.

  20. FSR: feature set reduction for scalable and accurate multi-class cancer subtype classification based on copy number.

    PubMed

    Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam

    2012-01-15

    Feature selection is a key concept in machine learning for microarray datasets, where the features, represented by probesets, are typically several orders of magnitude more numerous than the available samples. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher-resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions with regard to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution, and its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance relative to that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
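
    FSR itself is not reproduced here, but a generic two-stage reduction in the same spirit (a cheap filter followed by univariate ranking) can be sketched; all sizes and thresholds below are illustrative assumptions, not the paper's algorithm.

```python
# Generic two-stage reduction sketch (not the paper's FSR algorithm): a
# cheap variance filter first shrinks the feature set, then univariate
# ranking keeps a tractable subset for the classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, VarianceThreshold, f_classif

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50_000))        # stand-in for an SNP-array matrix
y = rng.integers(0, 4, 100)                   # four cancer subtypes

X1 = VarianceThreshold(threshold=0.5).fit_transform(X)
X2 = SelectKBest(f_classif, k=500).fit_transform(X1, y)
print(X.shape, "->", X1.shape, "->", X2.shape)
```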

  1. Diffusive tunneling for alleviating Knudsen-layer reactivity reduction under hydrodynamic mix

    NASA Astrophysics Data System (ADS)

    Tang, Xianzhu; McDevitt, Chris; Guo, Zehua

    2017-10-01

    Hydrodynamic mix produces small features of intermixed deuterium-tritium fuel and inert pusher material. The geometrical characteristics of the mix features have a large impact on Knudsen-layer yield reduction. We considered two features: one is a planar structure, and the other is fuel segmented by inert pusher material, which can be represented by a spherical DT bubble enclosed by a pusher shell. The truly 3D fuel feature, the spherical bubble, has the largest degree of yield reduction, because fast ions are lost in all directions. The planar fuel structure, which can be regarded as a 1D feature, has a modest potential for yield degradation. While increasing yield reduction with increasing Knudsen number of the fuel region is straightforwardly anticipated, we also show, by a combination of direct simulation and a simple model, that once the pusher material is stretched sufficiently thin by hydrodynamic mix, the fast fuel ions diffusively tunnel through it with minimal energy loss, so the Knudsen-layer yield reduction is alleviated. This yield recovery can occur in a chunk-mixed plasma, well before the far more stringent, asymptotic limit of an atomically homogenized fuel and pusher assembly. Work supported by the LANL LDRD program.

  2. A dimension reduction strategy for improving the efficiency of computer-aided detection for CT colonography

    NASA Astrophysics Data System (ADS)

    Song, Bowen; Zhang, Guopeng; Wang, Huafeng; Zhu, Wei; Liang, Zhengrong

    2013-02-01

    Various types of features, e.g., geometric features, texture features, projection features, etc., have been introduced for polyp detection and differentiation tasks via computer-aided detection and diagnosis (CAD) for computed tomography colonography (CTC). Although these features together cover more of the information in the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computation task of CAD. In this paper, we propose a new dimension reduction method that combines hierarchical clustering and principal component analysis (PCA) for the false-positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group we achieved an area under the receiver operating characteristic curve of 0.905, as high as that of the original dataset, while the computation time was reduced by 70% and the feature set size was reduced by 77%. It can be concluded that the proposed method captures the most important information in the feature set and that classification accuracy is not affected by the dimension reduction. The result is promising, and further investigations, such as automatic threshold setting, are worthwhile and in progress.
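
    The proposed reduction is straightforward to sketch: features are hierarchically clustered by correlation, and PCA is applied within each cluster. The cluster count, components per group, and synthetic data below are illustrative assumptions.

```python
# Hedged sketch: cluster features by correlation, then PCA per cluster,
# keeping a few components per group as the reduced feature set.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40))               # 300 candidate polyps, 40 features

corr_dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))   # feature dissimilarity
Z = linkage(corr_dist[np.triu_indices(40, k=1)], method="average")
groups = fcluster(Z, t=5, criterion="maxclust")          # 5 feature groups

reduced = np.hstack([
    PCA(n_components=min(3, int((groups == g).sum())))   # up to 3 PCs per group
      .fit_transform(X[:, groups == g])
    for g in np.unique(groups)])
print("reduced shape:", reduced.shape)
```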

  3. Stellar refraction - A tool to monitor the height of the tropopause from space

    NASA Technical Reports Server (NTRS)

    Schuerman, D. W.; Giovane, F.; Greenberg, J. M.

    1975-01-01

    Calculations of stellar refraction for a setting or rising star as viewed from a spacecraft show that the tropopause is a discernible feature in a plot of refraction vs time. The height of the tropopause is easily obtained from such a plot. Since the refraction suffered by the starlight appears to be measurable with some precision from orbital altitudes, this technique is suggested as a method for remotely monitoring the height of the tropopause. Although limited to nighttime measurements, the method is independent of supporting data or model fitting and easily lends itself to on-line data reduction.

  4. Muographic mapping of the subsurface density structures in Miura, Boso and Izu peninsulas, Japan

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiroyuki K. M.

    2015-02-01

    While the benefits of determining the bulk density distribution of a landmass are evident, established experimental techniques reliant on gravity measurements cannot uniquely determine the underground density distribution. We address this problem by taking advantage of traffic tunnels densely distributed throughout the country. Cosmic ray muon flux is measured in the tunnels to determine the average density of each rock overburden. After analyzing the data collected from 146 observation points in Miura, South-Boso and South-Izu Peninsula, Japan as an example, we mapped out the shallow density distribution of an area of 1340 km². We find good agreement between the muographically determined density distribution and geologic features as described in existing geological studies. The average shallow density distribution below each peninsula was determined with great accuracy (less than ±0.8%). We also observed a significant reduction in density along fault lines and interpreted that as due to the presence of multiple cracks caused by mechanical stress during recurrent seismic events. We show that this new type of muography technique can be applied to estimate the terrain density and porosity distribution, thus determining more precise Bouguer reduction densities.

  5. Muographic mapping of the subsurface density structures in Miura, Boso and Izu peninsulas, Japan

    PubMed Central

    Tanaka, Hiroyuki K. M.

    2015-01-01

    While the benefits of determining the bulk density distribution of a landmass are evident, established experimental techniques reliant on gravity measurements cannot uniquely determine the underground density distribution. We address this problem by taking advantage of traffic tunnels densely distributed throughout the country. Cosmic ray muon flux is measured in the tunnels to determine the average density of each rock overburden. After analyzing the data collected from 146 observation points in Miura, South-Boso and South-Izu Peninsula, Japan as an example, we mapped out the shallow density distribution of an area of 1340 km². We find good agreement between the muographically determined density distribution and geologic features as described in existing geological studies. The average shallow density distribution below each peninsula was determined with great accuracy (less than ±0.8%). We also observed a significant reduction in density along fault lines and interpreted that as due to the presence of multiple cracks caused by mechanical stress during recurrent seismic events. We show that this new type of muography technique can be applied to estimate the terrain density and porosity distribution, thus determining more precise Bouguer reduction densities. PMID:25660352

  6. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    PubMed

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data-analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influences of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that determining the normalization method is difficult, because the commonly accepted rules are easy to fulfill and yet different normalization methods have unforeseen influences on both the kind and the number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
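
    One of the compared methods, probabilistic quotient normalization, is compact enough to sketch; the synthetic samples and dilution factors below are assumptions used only to show that the per-sample dilution effect is removed.

```python
# Sketch of probabilistic quotient normalization (PQN): scale each sample
# by the median ratio of its features to a reference spectrum.
import numpy as np

def pqn(X, eps=1e-12):
    """Rows = samples, columns = metabolite features (non-negative)."""
    reference = np.median(X, axis=0) + eps
    quotients = X / reference                  # feature-wise ratios per sample
    factors = np.median(quotients, axis=1)     # one dilution factor per sample
    return X / factors[:, None]

rng = np.random.default_rng(0)
base = rng.gamma(2.0, 1.0, size=(20, 100))     # 20 samples, 100 features
dilutions = rng.uniform(0.5, 2.0, size=(20, 1))
X_norm = pqn(base * dilutions)                 # dilution effect is removed
print(np.median(X_norm / base, axis=1)[:5])    # ~constant across samples
```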

  7. Muographic mapping of the subsurface density structures in Miura, Boso and Izu peninsulas, Japan.

    PubMed

    Tanaka, Hiroyuki K M

    2015-02-09

    While the benefits of determining the bulk density distribution of a landmass are evident, established experimental techniques reliant on gravity measurements cannot uniquely determine the underground density distribution. We address this problem by taking advantage of traffic tunnels densely distributed throughout the country. Cosmic ray muon flux is measured in the tunnels to determine the average density of each rock overburden. After analyzing the data collected from 146 observation points in Miura, South-Boso and South-Izu Peninsula, Japan as an example, we mapped out the shallow density distribution of an area of 1340 km². We find good agreement between the muographically determined density distribution and geologic features as described in existing geological studies. The average shallow density distribution below each peninsula was determined with great accuracy (less than ±0.8%). We also observed a significant reduction in density along fault lines and interpreted that as due to the presence of multiple cracks caused by mechanical stress during recurrent seismic events. We show that this new type of muography technique can be applied to estimate the terrain density and porosity distribution, thus determining more precise Bouguer reduction densities.

  8. Analysis and Countermeasures of Wind Power Accommodation by Aluminum Electrolysis Pot-Lines in China

    NASA Astrophysics Data System (ADS)

    Zhang, Hongliang; Ran, Ling; He, Guixiong; Wang, Zhenyu; Li, Jie

    2017-10-01

    The unit energy consumption and its price have become the main obstacles to the future development of the aluminum electrolysis industry in China. Meanwhile, much wind power is abandoned (curtailed) because of its instability. In this study, a novel idea for wind power accommodation is proposed to achieve a win-win situation: nearby aluminum electrolysis plants absorb the wind power. The features of the wind power distribution and the aluminum electrolysis industry are first summarized, and the concept of wind power accommodation by the aluminum industry is introduced. Then, based on the characteristics of aluminum reduction cells, the key problems, including the bus-bar status, thermal balance, and magnetohydrodynamic instabilities, are analyzed. In addition, a complete implementation plan for wind power accommodation by aluminum reduction is introduced, covering the theoretical accommodation capacity, the evaluation of the reduction cells, and the industrial experiment scheme. A numerical simulation of a typical scenario shows that the aluminum reduction cells have a large accommodation potential. Aluminum electrolysis can accommodate wind power and remain stable under the proper technique and accommodation scheme, which will provide promising benefits for both the aluminum plant and the wind energy plant.

  9. Rocket launcher: A novel reduction technique for posterior hip dislocations and review of current literature.

    PubMed

    Dan, Michael; Phillips, Alfred; Simonian, Marcus; Flannagan, Scott

    2015-06-01

    We provide a review of the literature on reduction techniques for posterior hip dislocations and present our experience with a novel technique for the reduction of acute posterior hip dislocations in the ED, the 'rocket launcher' technique. We present our results with six patients with prosthetic posterior hip dislocations treated in our rural ED, and we recorded patient demographics. The technique involves placing the patient's knee over the physician's shoulder and holding the lower leg like a 'rocket launcher', which allows the physician's shoulder to work as a fulcrum in an ergonomically friendly manner for the reducer. We used Fisher's exact test for cohort analysis between reduction techniques. The mean patient age was 74 years (range 66 to 85 years). We had an 83% success rate. The one patient in whom the 'rocket launcher' failed was a hemi-arthroplasty patient in whom all other closed techniques also failed and who needed open reduction. When compared with the Allis (62% success rate), Whistler (60% success rate), and Captain Morgan (92% success rate) techniques, there was no statistically significant difference in the success of the reduction techniques. There were no neurovascular or periprosthetic complications. We have described a reduction technique for posterior hip dislocations in which placing the patient's knee over the shoulder and holding the lower leg like a 'rocket launcher' allows the physician's shoulder to work as a fulcrum, making the technique mechanically and ergonomically superior to standard techniques. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  10. Intramedullary osteosynthesis versus plate osteosynthesis in subtrochanteric fractures.

    PubMed

    Burnei, C; Popescu, Gh; Barbu, D; Capraru, F

    2011-11-14

    Due to an ever-aging population and a growing prevalence of osteoporosis and motor vehicle accidents, the number of subtrochanteric fractures is increasing worldwide. The choice of the appropriate implant continues to be critical for fixation of unstable hip fractures. The subtrochanteric region has certain anatomical and biomechanical features that can make fractures in this region difficult to treat. The preferred type of device is a matter of debate. Increased understanding of the biomechanical characteristics of the hip and improvements in implant materials have reduced the incidence of complications. Surgeons choose between the two methods according to Seinsheimer's classification and their personal preference. As a general principle, open reduction and internal fixation were performed in stable fractures, and closed reduction and internal fixation were performed in unstable fractures. The advantages of intramedullary nailing include a small skin incision, shorter operating times, preservation of the fracture hematoma, and the possibility of early weight bearing. The disadvantages include difficult closed reduction due to strong muscular forces (although the nail can be used as a reduction instrument) and higher implant cost. In open reduction internal fixation techniques, the advantage is anatomical reduction, which, in our opinion, is not necessary. The disadvantages are longer operating time, demanding surgery, extensive devascularization, higher infection rates, late weight bearing, medial instability, refracture after plate removal, and an unaesthetic approach.

  11. Intramedullary osteosynthesis versus plate osteosynthesis in subtrochanteric fractures

    PubMed Central

    Burnei, C; Popescu, Gh; Barbu, D; Capraru, F

    2011-01-01

    Due to an ever-aging population and a growing prevalence of osteoporosis and motor vehicle accidents, the number of subtrochanteric fractures is increasing worldwide. The choice of the appropriate implant continues to be critical for fixation of unstable hip fractures. The subtrochanteric region has certain anatomical and biomechanical features that can make fractures in this region difficult to treat. The preferred type of device is a matter of debate. Increased understanding of the biomechanical characteristics of the hip and improvements in implant materials have reduced the incidence of complications. Surgeons choose between the two methods according to Seinsheimer's classification and their personal preference. As a general principle, open reduction and internal fixation were performed in stable fractures, and closed reduction and internal fixation were performed in unstable fractures. The advantages of intramedullary nailing include a small skin incision, shorter operating times, preservation of the fracture hematoma, and the possibility of early weight bearing. The disadvantages include difficult closed reduction due to strong muscular forces (although the nail can be used as a reduction instrument) and higher implant cost. In open reduction internal fixation techniques, the advantage is anatomical reduction, which, in our opinion, is not necessary. The disadvantages are longer operating time, demanding surgery, extensive devascularization, higher infection rates, late weight bearing, medial instability, refracture after plate removal, and an unaesthetic approach. PMID:22514563

  12. Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours

    NASA Astrophysics Data System (ADS)

    Persico, Giacomo

    2017-03-01

    This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by the use of a non-intrusive, gradient-free technique specifically developed for the shape optimization of turbomachinery profiles. The method is constructed as a combination of a geometrical parametrization technique based on B-splines, a high-fidelity and experimentally validated Computational Fluid Dynamics solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour that characterizes the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is rigorously applied throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. The use of a systematic and automatic design method on such a non-conventional configuration highlights the character of centrifugal cascades: the blades require a specific and non-trivial definition of the shape, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal curvature of the blade that both provides a relevant increase in cascade performance and a reduction of the downstream gradients.

  13. A systematic comparison of the closed shoulder reduction techniques.

    PubMed

    Alkaduhimi, H; van der Linde, J A; Willigenburg, N W; van Deurzen, D F P; van den Bekerom, M P J

    2017-05-01

    To identify the optimal technique for closed reduction for shoulder instability, based on success rates, reduction time, complication risks, and pain level, a PubMed and EMBASE query was performed, screening all relevant literature on closed reduction techniques that reported success rates and was written in English, Dutch, German, or Arabic. Studies with a fracture dislocation or lacking information on success rates for closed reduction techniques were excluded. We used the modified Coleman Methodology Score (CMS) to assess the quality of the included studies and excluded studies with poor methodological quality (CMS < 50). Finally, a meta-analysis was performed on the data from all studies combined. 2099 studies were screened by title and abstract, of which 217 were screened full-text; finally, 13 studies were included. These comprised 9 randomized controlled trials, 2 retrospective comparative studies, and 2 prospective non-randomized comparative studies. A combined analysis revealed that scapular manipulation is the most successful (97%), fastest (1.75 min), and least painful reduction technique (VAS 1.47); the "Fast, Reliable, and Safe" (FARES) method also scores high in terms of successful reduction (92%), reduction time (2.24 min), and intra-reduction pain (VAS 1.59); the traction-countertraction technique is highly successful (95%), but slower (6.05 min) and more painful (VAS 4.75). For closed reduction of anterior shoulder dislocations, the combined data from the selected studies indicate that scapular manipulation is the most successful and fastest technique, with the shortest mean hospital stay and least pain during reduction. The FARES method seems the best alternative.

  14. A DFT-Based Method of Feature Extraction for Palmprint Recognition

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru

    Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
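
    The abstract's idea translates naturally into a few lines: sample the 2-D Fourier magnitude spectrum and compress with PCA (the Karhunen-Loeve transform for centred data). The fixed low-frequency grid below stands in for the paper's GA-optimized point set, and the images are random placeholders.

```python
# Hedged sketch: 2-D DFT magnitude features plus a KLT-style reduction.
import numpy as np
from sklearn.decomposition import PCA

def dft_features(img, grid=8):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    c = np.array(spectrum.shape) // 2
    block = spectrum[c[0] - grid:c[0] + grid, c[1] - grid:c[1] + grid]
    return np.log1p(block).ravel()             # 256-D low-frequency feature vector

rng = np.random.default_rng(0)
palms = rng.random((40, 128, 128))             # stand-in for 40 palmprint images
F = np.vstack([dft_features(p) for p in palms])
F_klt = PCA(n_components=20).fit_transform(F)  # KLT-style dimensionality reduction
print(F.shape, "->", F_klt.shape)
```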

  15. Solid immersion terahertz imaging with sub-wavelength resolution

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Schadko, Aleksander O.; Lebedev, Sergey P.; Tolstoguzov, Viktor L.; Kurlov, Vladimir N.; Reshetov, Igor V.; Spektor, Igor E.; Skorobogatiy, Maksim; Yurchenko, Stanislav O.; Zaytsev, Kirill I.

    2017-05-01

    We have developed a method of solid immersion THz imaging—a non-contact technique employing a THz beam focused into the evanescent-field volume and allowing a strong reduction in the dimensions of the THz caustic. We have combined numerical simulations and experimental studies to demonstrate a sub-wavelength 0.35λ0 resolution of the solid immersion THz imaging system, compared to the 0.85λ0 resolution of a standard imaging system employing only an aspherical singlet. We have discussed the prospects of using the developed technique in various branches of THz science and technology, namely, for THz measurements of solid-state materials featuring sub-wavelength variations of physical properties, for highly accurate mapping of healthy and pathological tissues in THz medical diagnosis, for detection of sub-wavelength defects in THz non-destructive sensing, and for enhancement of THz nonlinear effects.

  16. Combined data mining/NIR spectroscopy for purity assessment of lime juice

    NASA Astrophysics Data System (ADS)

    Shafiee, Sahameh; Minaei, Saeid

    2018-06-01

    This paper reports a data mining study on the NIR spectra of lime juice samples to determine their purity (natural or synthetic). NIR spectra of 72 pure and synthetic lime juice samples were recorded in reflectance mode. Sample outliers were removed using PCA. Different data mining techniques for feature selection (a Genetic Algorithm (GA)) and classification (including the radial basis function (RBF) network, Support Vector Machine (SVM), and Random Forest (RF) tree) were employed. Based on the results, SVM proved to be the most accurate classifier, achieving the highest accuracy (97%) using the raw spectrum information. The classifier accuracy dropped to 93% when the feature vector selected by the GA search was used as the classifier input. It can be concluded that some relevant features that produce good performance with the SVM classifier are removed by feature selection. Also, spectra reduced using PCA did not show acceptable performance (total accuracy of 66% with the RBF network), which indicates that dimension reduction methods such as PCA do not always lead to more accurate results. These findings demonstrate the potential of combining data mining with near-infrared spectroscopy for monitoring lime juice quality in terms of its natural or synthetic origin.

  17. Application of wavelet transformation and adaptive neighborhood based modified backpropagation (ANMBP) for classification of brain cancer

    NASA Astrophysics Data System (ADS)

    Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry

    2017-08-01

    This paper presents the classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process comprises three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction, and ANMBP is used for the classification process. The result of feature extraction is a set of feature vectors. Feature reduction was tested using 100 energy values and 10 energy values per feature vector. The brain cancer classes are normal, Alzheimer, glioma, and carcinoma. Based on the simulation results, 10 energy values per feature vector are sufficient to classify brain cancer correctly; the correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for the classification of brain cancer.
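
    The feature-extraction and reduction stages can be sketched with a standard wavelet library; the wavelet, decomposition level, and image are assumptions, and the ANMBP classifier itself is not reproduced. Note that one approximation band plus three detail bands per level, at three levels, yields exactly 10 energy values.

```python
# Hedged sketch: per-subband wavelet energies as a compact feature vector.
import numpy as np
import pywt

def wavelet_energy_features(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [np.sum(coeffs[0] ** 2)]                 # approximation energy
    for cH, cV, cD in coeffs[1:]:                    # detail energies per level
        feats += [np.sum(c ** 2) for c in (cH, cV, cD)]
    return np.asarray(feats)

rng = np.random.default_rng(0)
scan = rng.random((256, 256))          # stand-in for an MR brain image
print(wavelet_energy_features(scan))   # 10 energies (1 + 3 levels x 3 bands)
```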

  18. Dicationic ionic liquid mediated fabrication of Au@Pt nanoparticles supported on reduced graphene oxide with highly catalytic activity for oxygen reduction and hydrogen evolution

    NASA Astrophysics Data System (ADS)

    Shi, Ya-Cheng; Chen, Sai-Sai; Feng, Jiu-Ju; Lin, Xiao-Xiao; Wang, Weiping; Wang, Ai-Jun

    2018-05-01

    Ionic liquids as templates or directing agents have attracted great attention for shape-modulated synthesis of advanced nanomaterials. In this work, reduced graphene oxide supported uniform core-shell Au@Pt nanoparticles (Au@Pt NPs/rGO) were fabricated by a simple one-pot aqueous approach, using an N-methylimidazolium-based dicationic ionic liquid (1,1-bis(3-methylimidazolium-1-yl)butylene bromide, [C4(Mim)2]2Br) as the shape-directing agent. The morphology evolution, structural information, and formation mechanism of the Au@Pt NPs anchored on rGO were investigated by a series of characterization techniques. The obtained nanocomposites displayed superior electrocatalytic features toward the hydrogen evolution reaction (HER) and the oxygen reduction reaction (ORR) compared with a commercial Pt/C catalyst. This approach provides a novel route for the facile synthesis of nanocatalysts for fuel cells.

  19. Synthesis, Characterization, Topographical Modification, and Surface Properties of Copoly(Imide Siloxane)s

    NASA Technical Reports Server (NTRS)

    Wohl, Christopher J.; Atkins, Brad M.; Belcher, Marcus A.; Connell, John W.

    2012-01-01

    Novel copoly(imide siloxane)s were synthesized from commercially available aminopropyl terminated siloxane oligomers, aromatic dianhydrides, and diamines. This synthetic approach produced copolymers with well-defined siloxane blocks linked with imide units in a random fashion. The copoly(amide acid)s were characterized by solution viscosity and subsequently used to cast thin films followed by thermal imidization in an inert atmosphere. Thin films were characterized using contact angle goniometry, attenuated total reflection Fourier transform infrared spectroscopy, confocal and optical microscopy, and tensile testing. Adhesion of micron-sized particles was determined quantitatively using a sonication device. The polydimethylsiloxane (PDMS) moieties lowered the copolymer surface energy due to migration of siloxane moieties to the film's surface, resulting in a notable reduction in particle adhesion. A further reduction in particle adhesion was achieved by introducing topographical features on a scale of several to tens of microns by a laser ablation technique.

  20. Fabrication of micron scale metallic structures on photo paper substrates by low temperature photolithography for device applications

    NASA Astrophysics Data System (ADS)

    Cooke, M. D.; Wood, D.

    2015-11-01

    Using standard commercial paper as a substrate reduces cost by approximately a factor of 100 compared with other, more expensive substrate materials (Shenton et al 2015 EMRS Spring Meeting; Zheng et al 2013 Nat. Sci. Rep. 3 1786). Discussed here is a novel process which allows photolithography and etching of simple metal films deposited on paper substrates but requires no additional facilities. This allows a significant reduction in feature size, down to the micron scale, compared with devices made using more conventional printing solutions, which are of the order of tens of microns. The technique has great potential for making cheap disposable devices with additional functionality, which could include flexibility and foldability, simple disposability, porosity, and low weight. The potential for commercial applications and scale-up is also discussed.

  1. Exponential convergence through linear finite element discretization of stratified subdomains

    NASA Astrophysics Data System (ADS)

    Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali

    2016-10-01

    Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.

  2. Understanding the Degradation Mechanism of Lithium Nickel Oxide Cathodes for Li-Ion Batteries

    DOE PAGES

    Xu, Jing; Hu, Enyuan; Nordlund, Dennis; ...

    2016-11-01

    The phase transition, charge compensation, and local chemical environment of Ni in LiNiO2 were investigated to understand the degradation mechanism. The electrode was subjected to a variety of bulk and surface-sensitive characterization techniques under different charge–discharge cycling conditions. We observed the phase transition from the original hexagonal H1 phase to another two hexagonal phases (H2 and H3) upon Li deintercalation. Moreover, the gradual loss of H3-phase features was revealed during the repeated charges. The reduction in Ni redox activity occurred at both the charge and the discharge states, and it appeared both in the bulk and at the surface over the extended cycles. In conclusion, the degradation of crystal structure significantly contributes to the reduction of Ni redox activity, which in turn causes the cycling performance decay of LiNiO2.

  3. Data Reduction Approaches for Dissecting Transcriptional Effects on Metabolism

    PubMed Central

    Schwahn, Kevin; Nikoloski, Zoran

    2018-01-01

    The availability of high-throughput data from transcriptomics and metabolomics technologies provides the opportunity to characterize the transcriptional effects on metabolism. Here we propose and evaluate two computational approaches rooted in data reduction techniques to identify and categorize transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approaches determine the partial correlation between two metabolite data profiles while controlling for principal components extracted from the transcriptomics data profiles. Therefore, they allow us to investigate both data types with all features simultaneously, without preselecting genes. The proposed approaches allow us to categorize the relation between pairs of metabolites as being under transcriptional or post-transcriptional regulation. The resulting classification is compared to the existing literature and to accumulated evidence about the regulatory mechanisms of reactions and pathways in the cases of Escherichia coli, Saccharomyces cerevisiae, and Arabidopsis thaliana. PMID:29731765
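
    The core computation (partial correlation between metabolite profiles after controlling for transcriptome principal components) can be sketched via regression residuals; the data, component count, and variable names below are synthetic illustrations, not the paper's implementation.

```python
# Sketch: residualize both metabolite profiles against transcriptome PCs,
# then correlate the residuals (a partial correlation given the PCs).
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA

def partial_corr_given_pcs(m1, m2, expression, n_pcs=3):
    pcs = PCA(n_components=n_pcs).fit_transform(expression)
    Z = np.column_stack([np.ones(len(m1)), pcs])
    r1 = m1 - Z @ np.linalg.lstsq(Z, m1, rcond=None)[0]   # residualize m1
    r2 = m2 - Z @ np.linalg.lstsq(Z, m2, rcond=None)[0]   # residualize m2
    return pearsonr(r1, r2)

rng = np.random.default_rng(0)
expr = rng.standard_normal((50, 200))          # 50 samples x 200 genes
met_a = expr[:, 0] + 0.3 * rng.standard_normal(50)
met_b = expr[:, 0] + 0.3 * rng.standard_normal(50)
print(partial_corr_given_pcs(met_a, met_b, expr))  # shared driver is controlled for
```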

  4. Primary obstructive megaureter.

    PubMed

    Sripathi, V; King, P A; Thomson, M R; Bogle, M S

    1991-07-01

    Twenty-three children with primary obstructive megaureters presented between 1978 and 1988 to the Princess Margaret Hospital for Children in Perth, and twenty-eight ureters were treated. Urinary infections were the presenting feature in 14 children. The obstructive segment was excised transvesically. Histopathologic examination of the distal, intramural ureter showed fibromuscular disarray, with a relative increase in fibrous tissue and reduction of musculature, in all specimens. Twenty-two ureters were tapered by excision, and all 28 were reimplanted using an antireflux technique. Seventeen children were followed for an average of 3 years. Seven children showed renal growth, reduction in ureteric size by greater than 2 cm, improvement in glomerular filtration rate by more than 10%, no obstruction or reflux, and no infections in the postoperative period. Four children showed all the above but suffered one or more infections after the operation. Of the remaining 6 children, 3 had postoperative obstruction and 3 had vesicoureteric reflux.

  5. A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image

    PubMed Central

    Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor

    2015-01-01

    Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470

  6. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    PubMed

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman imaging processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate in near real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in associated spectra. These results motivate further use of label free microscopy techniques in real-time imaging of live immune cells.
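
    The abstract names two ingredients of Z-LSR, z-score normalization and least-squares regression, but does not spell out the algorithm. The following is a rough, hypothetical sketch of how those two steps could be combined to score each pixel's spectral deviation from a reference; the paper's actual formulation may differ.

```python
# Illustrative sketch only: per-spectrum z-score normalization followed by a
# least-squares fit of each pixel spectrum against a reference (here, the
# image-mean spectrum). Data and parameters are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
h, w, bands = 64, 64, 200
cube = rng.normal(loc=1.0, size=(h, w, bands))     # toy Raman image cube

spectra = cube.reshape(-1, bands)
# z-score each spectrum so contrast reflects spectral shape, not intensity
z = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

ref = z.mean(axis=0)                               # reference spectrum
# least-squares slope of each normalized spectrum against the reference
slope = (z @ ref) / (ref @ ref)
residual_energy = np.sum((z - np.outer(slope, ref)) ** 2, axis=1)

# contrast image: pixels whose spectra deviate from the reference light up
contrast = residual_energy.reshape(h, w)
print(contrast.shape, float(contrast.mean()))
```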

  7. Nutrition labeling reduces valuations of food through multiple health and taste channels.

    PubMed

    Fisher, Geoffrey

    2018-01-01

    One popularized technique to promote healthy dietary choices involves posting calorie or other nutritional information at the time individuals make a consumption decision. While the evidence on the effectiveness of such interventions is mixed, relatively little work has focused on the underlying mechanisms of how such labels alter behavior. In the research reported here, we asked 87 hungry laboratory subjects to bid on foods with or without nutrition labels present. We found that the presence of a nutrition label reduced bids by an average of 25 cents. Furthermore, we found this reduction was driven by differences in perceptions and in the importance individuals placed on health features of the foods, but also by differences in the importance individuals placed on more visceral taste features. These results help explain the various ways in which nutritional information postings or other policy tools can nudge individuals to consume healthier options. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Complex refractive index measurements for BaF2 and CaF2 via single-angle infrared reflectance spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly-Gorham, Molly Rose K.; DeVetter, Brent M.; Brauer, Carolyn S.

    We have re-investigated the optical constants n and k for the homologous series of inorganic salts barium fluoride (BaF2) and calcium fluoride (CaF2) using a single-angle near-normal incidence reflectance device in combination with a calibrated Fourier transform infrared (FTIR) spectrometer. Our results are in good qualitative agreement with most previous works. However, certain features of the previously published data near the reststrahlen band exhibit distinct differences in spectral characteristics. Notably, our measurements of BaF2 do not include a spectral feature in the ~250 cm-1 reststrahlen band that was previously published. Additionally, CaF2 exhibits a distinct wavelength shift relative to the model derived from previously published data. We confirmed our results against recently published works that use significantly more modern instrumentation and data reduction techniques.

  9. Study of traits and recalcitrance reduction of field-grown COMT down-regulated switchgrass

    DOE PAGES

    Li, Mi; Pu, Yunqiao; Yoo, Chang Geun; ...

    2017-01-03

    The native recalcitrance of plants hinders the biomass conversion process using current biorefinery techniques. Down-regulation of the caffeic acid O-methyltransferase (COMT) gene in the lignin biosynthesis pathway of switchgrass reduced the thermochemical and biochemical conversion recalcitrance of the biomass. Due to potential environmental influences on lignin biosynthesis and deposition, studying the consequences of physicochemical changes in field-grown plants without pretreatment is essential to evaluate the performance of lignin-altered plants. In this study, we determined the chemical composition, cellulose crystallinity and degree of polymerization, molecular weight of hemicellulose, and cellulose accessibility of cell walls in order to better understand the fundamental features of why biomass is recalcitrant to conversion without pretreatment. Most importantly, we investigated whether these traits and features remain stable under field environmental effects over multiple years.

  10. Bacterial transformations of inorganic nitrogen in the oxygen-deficient waters of the Eastern Tropical South Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Lipschultz, F.; Wofsy, S. C.; Ward, B. B.; Codispoti, L. A.; Friedrich, G.; Elkins, J. W.

    1990-10-01

    Rates of transformations of inorganic nitrogen were measured in the low oxygen, subsurface waters (50-450 m) of the Eastern Tropical South Pacific during February 1985, using 15N tracer techniques. Oxygen concentrations over the entire region were in a range (O2 < 2.5 μM) that allowed both oxidation and reduction of nitrogen to occur. A wide range of rates was observed for the lowest oxygen levels, indicating that observed oxygen concentration was not a primary factor regulating nitrogen metabolism. High values for subsurface metabolic rates correspond with high levels of surface primary production, both apparently associated with mesoscale features observed in satellite imagery and with mesoscale features of the current field. Measured rates of nitrate reduction and estimated rates of denitrification were sufficient to respire nearly all of the surface primary production that might be transported into the oxygen deficient zone. These results imply that the supply of labile organic material, especially from the surface, was more important than oxygen concentration in modulating the rates of nitrogen transformations within the low oxygen water mass of the Eastern Tropical South Pacific. The pattern of nitrite oxidation and nitrite reduction activities in the oxygen minimum zone supports the hypothesis (Anderson et al., 1982, Deep-Sea Research, 29, 1113-1140) that nitrite, produced from nitrate reduction, can be recycled by oxidation at the interface between low and high oxygen waters. Rates for denitrification, estimated from nitrate reduction rates, were in harmony with previous estimates based on electron transport system (ETS) measurements and analysis of the nitrate deficit and water residence times. Assimilation rates of NH4+ were substantial, providing evidence for heterotrophic bacterial growth in low oxygen waters. Ambient concentrations of ammonium were maintained at low values primarily by assimilation; ammonium oxidation was an important mechanism at the surface boundary of the low oxygen zone.

  11. Spreadsheet WATERSHED modeling for nonpoint-source pollution management in a Wisconsin basin

    USGS Publications Warehouse

    Walker, J.F.; Pickard, S.A.; Sonzogni, W.C.

    1989-01-01

    Although several sophisticated nonpoint pollution models exist, few are available that are easy to use, cover a variety of conditions, and integrate a wide range of information to allow managers and planners to assess different control strategies. Here, a straightforward pollutant input accounting approach is presented in the form of an existing model (WATERSHED) that has been adapted to run on modern electronic spreadsheets. As an application, WATERSHED is used to assess options to improve the quality of highly eutrophic Delavan Lake in Wisconsin. WATERSHED is flexible in that several techniques, such as the Universal Soil Loss Equation or unit-area loadings, can be used to estimate nonpoint-source inputs. Once the model parameters are determined (and calibrated, if possible), the spreadsheet features can be used to conduct a sensitivity analysis of management options. In the case of Delavan Lake, it was concluded that, although some nonpoint controls were cost-effective, the overall reduction in phosphorus would be insufficient to measurably improve water quality.

  12. Estimated correlation matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2004-11-01

    Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimension reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices, which will inevitably contain a certain amount of noise due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. Since in our artificial world the only source of error is the finite length of the time series, and the “true” model, and hence the “true” correlation matrix, is precisely known, we can, in sharp contrast with empirical studies, precisely compare the performance of the various noise reduction techniques. One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
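
    The random-matrix-theory filtering singled out above is commonly implemented by clipping the noise band of the correlation spectrum at the Marchenko-Pastur edge. A minimal sketch on synthetic returns, with illustrative dimensions, follows.

```python
# Hedged sketch of random-matrix-theory filtering: eigenvalues of the sample
# correlation matrix below the Marchenko-Pastur upper edge are treated as
# noise and replaced by their average, preserving the trace.
import numpy as np

rng = np.random.default_rng(2)
T, N = 500, 100                          # time-series length, number of assets
returns = rng.normal(size=(T, N))
C = np.corrcoef(returns, rowvar=False)

q = T / N
lambda_max = (1 + 1 / np.sqrt(q)) ** 2   # Marchenko-Pastur upper edge

vals, vecs = np.linalg.eigh(C)
noise = vals < lambda_max
vals_filtered = vals.copy()
vals_filtered[noise] = vals[noise].mean()    # flatten the noise band

C_filtered = vecs @ np.diag(vals_filtered) @ vecs.T
np.fill_diagonal(C_filtered, 1.0)            # restore unit diagonal
print(f"{noise.sum()} of {N} eigenvalues treated as noise")
```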

  13. Condition Monitoring for Helicopter Data. Appendix A

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2000-01-01

    In this paper the classical "Westland" set of empirical accelerometer helicopter data is analyzed with the aim of condition monitoring for diagnostic purposes. The goal is to determine features for failure events from these data, via a proprietary signal processing toolbox, and to weigh these according to a variety of classification algorithms. As regards signal processing, it appears that the autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements; it has also been found that augmenting these with harmonic and other parameters can improve classification significantly. As regards classification, several techniques have been explored, among them restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers, and decision trees. A problem with these approaches, in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify the probability of error exactly, so that features may be discarded or coarsened appropriately.
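
    As a flavor of the AR-coefficient features mentioned above, the sketch below fits an autoregressive model to a toy accelerometer trace by least squares; the model order and the synthetic signal are illustrative, not taken from the Westland data.

```python
# Minimal sketch: least-squares estimation of AR coefficients from a
# vibration-like signal, for use as classifier features.
import numpy as np

def ar_features(x, order=8):
    """Fit x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t] by least squares."""
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(3)
signal = np.cumsum(rng.normal(size=2048))   # toy accelerometer trace
print(ar_features(signal, order=8))         # 8-dimensional feature vector
```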

  14. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper, a new framework called Compressive Kernelized Reinforcement Learning (CKRL) is proposed for computing near-optimal policies in sequential decision making under uncertainty, by combining non-adaptive, data-independent Random Projections with nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
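
    The random-projection step described above reduces dimension with a data-independent random matrix. A minimal sketch with illustrative dimensions follows; scikit-learn's GaussianRandomProjection offers an equivalent off-the-shelf implementation.

```python
# Sketch of a Gaussian random projection: high-dimensional feature vectors
# are projected onto a random low-dimensional subspace. Dimensions are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 1000, 2048, 64                 # samples, original dim, reduced dim

features = rng.normal(size=(n, d))
# scaling by 1/sqrt(k) preserves pairwise distances in expectation
R = rng.normal(size=(d, k)) / np.sqrt(k)
low_dim = features @ R

print(low_dim.shape)                     # (1000, 64)
```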

  15. Vascular surgery trainees still need to learn how to sew: importance of learning surgical techniques in the era of endovascular surgery.

    PubMed

    Aziz, Faisal

    2015-01-01

    Vascular surgery represents one of the most rapidly evolving specialties in the field of surgery. It was merely 100 years ago that Dr. Alexis Carrel described vascular anastomosis. Over the course of the next several decades, vascular surgeons distinguished themselves from general surgeons by honing the techniques of vascular surgery operations. In the era of minimally invasive interventions, the number of endovascular interventions performed by vascular surgeons has increased exponentially. Vascular surgery trainees currently spend considerable time mastering the techniques of endovascular operations. Unfortunately, the reduction in the number of open surgical operations has led to concerns regarding the adequacy of training in open surgical techniques. In the future, the majority of vascular interventions will be done with minimally invasive techniques. The combination of poor training in open operations and the increasing complexity of open surgical operations may lead to poor surgical outcomes. It is the need of the hour for vascular surgery trainees to realize the importance of learning and mastering open surgical techniques. One of the most distinguishing features of contemporary vascular surgeons is their ability to perform both endovascular and open vascular surgery operations, and we should strive to maintain our excellence in both of these arenas.

  16. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.

  17. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
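
    A sketch in the spirit of the K-SVD setup above (K=4 atoms, at most L=2 non-zero coefficients) is shown below using scikit-learn's DictionaryLearning, which uses a related but not identical dictionary-update rule; the data are synthetic stand-ins for the DCE-MRI features.

```python
# Hedged sketch: sparse coding of feature vectors with a small learned
# dictionary; not the paper's exact K-SVD implementation.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(5)
features = rng.normal(size=(60, 34))     # e.g. 60 lesions x 34 features

dico = DictionaryLearning(
    n_components=4,                      # K = 4 dictionary atoms
    transform_algorithm="omp",           # orthogonal matching pursuit
    transform_n_nonzero_coefs=2,         # L = 2 non-zeros per sample
    random_state=0,
)
codes = dico.fit_transform(features)     # sparse representation, shape (60, 4)
print(codes.shape, int((codes != 0).sum(axis=1).max()))  # at most 2 non-zeros
```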

  18. Effective Feature Selection for Classification of Promoter Sequences.

    PubMed

    K, Kouser; P G, Lavanya; Rangarajan, Lalitha; K, Acharya Kshitish

    2016-01-01

    Exploring novel computational methods in making sense of biological data has not only been a necessity, but also productive. A part of this trend is the search for more efficient in silico methods/tools for the analysis of promoters, which are parts of DNA sequences that are involved in the regulation of expression of genes into other functional molecules. Promoter regions vary greatly in their function based on the sequence of nucleotides and the arrangement of protein-binding short regions called motifs. In fact, the regulatory nature of promoters seems to be largely driven by the selective presence and/or arrangement of these motifs. Here, we explore computational classification of promoter sequences based on the pattern of motif distributions, as such classification can pave a new way for functional analysis of promoters and for discovering the functionally crucial motifs. We make use of Position Specific Motif Matrix (PSMM) features for exploring the possibility of accurately classifying promoter sequences using some of the popular classification techniques. The classification results on the complete feature set are low, perhaps due to the huge number of features. We propose two ways of reducing features. Our test results show improvement in the classification output after the reduction of features. The results also show that decision trees outperform SVM (Support Vector Machine), KNN (K Nearest Neighbor) and the ensemble classifier LibD3C, particularly with reduced features. The proposed feature selection methods outperform some popular feature transformation methods such as PCA and SVD, and are as accurate as the MRMR feature selection method but much faster. Such methods could be useful for categorizing new promoters and exploring the regulatory mechanisms of gene expression in complex eukaryotic species.
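
    As a generic illustration of filter-style feature reduction before classification (not the paper's own reduction methods), the sketch below ranks synthetic stand-ins for PSMM features by mutual information and classifies with a decision tree.

```python
# Generic illustration: mutual-information feature selection followed by a
# decision tree, on synthetic stand-ins for PSMM feature vectors.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 1000))         # promoter sequences x motif features
y = rng.integers(0, 2, size=200)         # two promoter classes

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=50),   # keep 50 most informative features
    DecisionTreeClassifier(random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```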

  19. Superolateral dislocation of an intact mandibular condyle into the temporal fossa: case report and literature review.

    PubMed

    Sharma, Divashree; Khasgiwala, Ankit; Maheshwari, Bharat; Singh, Charanpreet; Shakya, Neelam

    2017-02-01

    Temporomandibular joint dislocation refers to the dislodgement of mandibular condyle from the glenoid fossa. Anterior and anteromedial dislocations of the mandibular condyle are frequently reported in the literature, but superolateral dislocation is a rare presentation. This report outlines a case of superolateral dislocation of an intact mandibular condyle that occurred in conjunction with an ipsilateral mandibular parasymphysis fracture. A review of the clinical features of superolateral dislocation of the mandibular condyle and the possible techniques of its reduction ranging from the most conservative means to extensive surgical interventions is presented. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

    We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, to systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed from these coarse variables. We apply this method to the analysis of the traffic model called the optimal velocity model and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
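
    A minimal diffusion-map construction consistent with this description is sketched below: build a Gaussian kernel over the snapshots, normalize it into a Markov matrix, and take the leading non-trivial eigenvectors as coarse variables. The bandwidth and data are illustrative assumptions.

```python
# Minimal diffusion-map sketch, assuming a Gaussian kernel with a single
# bandwidth parameter eps.
import numpy as np

def diffusion_map(X, eps=1.0, n_coords=2):
    # pairwise squared distances and Gaussian kernel
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector (eigenvalue 1)
    idx = order[1 : n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]  # eigenvalue-scaled coordinates

rng = np.random.default_rng(7)
snapshots = rng.normal(size=(100, 30))         # e.g. system states over time
coarse = diffusion_map(snapshots, eps=30.0)
print(coarse.shape)                            # (100, 2)
```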

  1. The simulation of a propulsive jet and force measurement using a magnetically suspended wind tunnel model

    NASA Technical Reports Server (NTRS)

    Garbutt, K. S.; Goodyer, M. J.

    1994-01-01

    Models featuring the simulation of exhaust jets were developed for magnetic levitation in a wind tunnel. The exhaust gas was stored internally producing a discharge of sufficient duration to allow nominal steady state to be reached. The gas was stored in the form of compressed gas or a solid rocket propellant. Testing was performed with the levitated models although deficiencies prevented the detection of jet-induced aerodynamic effects. Difficulties with data reduction led to the development of a new force calibration technique, used in conjunction with an exhaust simulator and also in separate high incidence aerodynamic tests.

  2. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

    SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction difficult. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces the heuristic parametric constants of the GGF-BNLM method with values derived dynamically from image statistics for weight computation. These changes make the GGF-BNLM method adaptive and, as a result, yield significant performance improvement. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.

  3. Coarse-grained mechanics of viral shells

    NASA Astrophysics Data System (ADS)

    Klug, William S.; Gibbons, Melissa M.

    2008-03-01

    We present an approach for creating three-dimensional finite element models of viral capsids from atomic-level structural data (X-ray or cryo-EM). The models capture heterogeneous geometric features and are used in conjunction with three-dimensional nonlinear continuum elasticity to simulate nanoindentation experiments as performed using atomic force microscopy. The method is extremely flexible, able to capture varying levels of detail in the three-dimensional structure. Nanoindentation simulations are presented for several viruses: Hepatitis B, CCMV, HK97, and φ29. In addition to purely continuum elastic models, a multiscale technique is developed that combines finite-element kinematics with MD energetics such that large-scale deformations are facilitated by a reduction in degrees of freedom. Simulations of these capsid deformation experiments provide a testing ground for the techniques, as well as insight into the strength-determining mechanisms of capsid deformation. These methods can be extended as a framework for modeling other proteins and macromolecular structures in cell biology.

  4. Nonideal ultrathin mantle cloak for electrically large conducting cylinders.

    PubMed

    Liu, Shuo; Zhang, Hao Chi; Xu, He-Xiu; Cui, Tie Jun

    2014-09-01

    Based on the concept of the scattering cancellation technique, we propose a nonideal ultrathin mantle cloak that can efficiently suppress the total scattering cross section of an electrically large conducting cylinder (over one free-space wavelength). The cloaking mechanism is investigated in depth based on Mie scattering theory and is interpreted simultaneously from the perspectives of far-field bistatic scattering and near-field distributions. We remark that, unlike the perfect transformation-optics-based cloak, this nonideal cloaking technique is mainly designed to simultaneously minimize several scattering multipoles of a relatively large geometry over a considerably broad bandwidth. Numerical simulations and experimental results show that the antiscattering ability of the metasurface gives rise to excellent total scattering reduction for the electrically large cylinder and remarkable electric-field restoration around the cloak. The outstanding cloaking performance, together with the good features of ultralow profile, flexibility, and easy fabrication, predicts promising applications at microwave frequencies.

  5. A series connection architecture for large-area organic photovoltaic modules with a 7.5% module efficiency.

    PubMed

    Hong, Soonil; Kang, Hongkyu; Kim, Geunjin; Lee, Seongyu; Kim, Seok; Lee, Jong-Hoon; Lee, Jinho; Yi, Minjin; Kim, Junghwan; Back, Hyungcheol; Kim, Jae-Ryoung; Lee, Kwanghee

    2016-01-05

    The fabrication of organic photovoltaic modules via printing techniques has been the greatest challenge for their commercial manufacture. Current module architecture, which is based on a monolithic geometry consisting of serially interconnected stripe-patterned subcells with finite widths, requires highly sophisticated patterning processes that significantly increase the complexity of printing production lines and cause serious reductions in module efficiency due to so-called aperture loss in series connection regions. Herein we demonstrate an innovative module structure that can simultaneously reduce both the patterning processes and the aperture loss. By using a charge recombination feature that occurs at contacts between electron- and hole-transport layers, we devise a series connection method that facilitates module fabrication without patterning the charge transport layers. With the successive deposition of component layers using slot-die and doctor-blade printing techniques, we achieve a high module efficiency of 7.5% with an area of 4.15 cm2.

  6. Progress in diode-pumped alexandrite lasers as a new resource for future space lidar missions

    NASA Astrophysics Data System (ADS)

    Damzen, M. J.; Thomas, G. M.; Teppitaksak, A.; Minassian, A.

    2017-11-01

    Satellite-based remote sensing using laser-based lidar techniques provides a powerful tool for global 3-D mapping of atmospheric species (e.g. CO2, ozone, clouds, aerosols), physical attributes of the atmosphere (e.g. temperature, wind speed), and spectral indicators of Earth features (e.g. vegetation, water). Such information provides a valuable source for weather prediction, understanding of climate change, atmospheric science, and monitoring the health of the Earth eco-system. Similarly, laser-based altimetry can provide high-precision ground topography mapping and more complex 3-D mapping (e.g. canopy height profiling). The lidar technique requires cutting-edge laser technologies and engineered designs capable of enduring the space environment over the mission lifetime. The laser must operate with suitably high electrical-to-optical efficiency, and a risk reduction strategy must be adopted to mitigate laser failure or excessive operational degradation of laser performance.

  7. An improved technique for the 2H/1H analysis of urines from diabetic volunteers

    USGS Publications Warehouse

    Coplen, T.B.; Harper, I.T.

    1994-01-01

    The H2-H2O ambient-temperature equilibration technique for the determination of 2H/1H ratios in urinary waters from diabetic subjects provides improved accuracy over the conventional Zn reduction technique. The standard deviation, approximately 1-2‰, is at least a factor of three better than that of the Zn reduction technique on urinary waters from diabetic volunteers. Experiments with pure water and solutions containing glucose, urea and albumen indicate that there is no measurable bias in the hydrogen equilibration technique.

  8. Classification of molecular structure images by using ANN, RF, LBP, HOG, and size reduction methods for early stomach cancer detection

    NASA Astrophysics Data System (ADS)

    Aytaç Korkmaz, Sevcan; Binol, Hamidullah

    2018-03-01

    Stomach cancer continues to claim lives, and early diagnosis is crucial in reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection have been developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images were calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps methods were then used to reduce the dimensionality of these features. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these lower-dimensional feature sets, allowing the effect of each reduced dimensionality to be measured. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
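
    A hedged sketch of the best-performing LBP_MDS_ANN combination is given below: LBP texture histograms, metric MDS down to two dimensions, and a small neural network. The image data and every parameter are placeholders, not the paper's settings.

```python
# Illustrative LBP -> MDS -> ANN pipeline on synthetic stand-ins for
# pathology image patches.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.manifold import MDS
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
images = rng.integers(0, 256, size=(80, 128, 128)).astype(np.uint8)
labels = rng.integers(0, 2, size=80)              # cancer vs. normal

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_histogram(im) for im in images])
X_low = MDS(n_components=2, random_state=0).fit_transform(X)  # reduce to 2-D

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_low, labels)
print(clf.score(X_low, labels))
```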

  9. A novel green approach for reduction of free standing graphene oxide: electrical and electronic structural investigations.

    PubMed

    Saravanan, K; Panigrahi, B K; Suresh, K; Sundaravel, B; Magudapathy, P; Gupta, Mukul

    2018-08-24

    An ion beam irradiation technique is proposed for the efficient, fast and eco-friendly reduction of graphene oxide (GO), as an alternative to conventional methods. A 5 MeV Au+ ion beam was used to reduce free-standing GO flakes. Both the electronic and nuclear energy loss mechanisms of the irradiation process play a major role in the removal of oxygen moieties and the recovery of the graphene network. Atomic resolution scanning tunnelling microscopy analysis of the irradiated GO flake shows the characteristic honeycomb structure of graphene. X-ray absorption near edge structure analysis at the C K-edge reveals that the features of the irradiated GO flake resemble few-layer graphene. Resonant Rutherford backscattering spectrometry analysis evidenced an enhanced C/O ratio of ∼23 in the irradiated GO. In situ sheet resistance measurements exhibit a sharp decrease of resistance (to a few hundred Ω) at a fluence of 6.5 × 10^14 ions cm^-2. Photoluminescence spectroscopic analysis of irradiated GO shows a sharp blue emission, while pristine GO exhibits a broad emission in the visible-near IR region. Region-selective reduction and the ability to tune electrical and optical properties by controlling the C/O ratio make ion irradiation a versatile tool for the green reduction of GO for diverse applications.

  10. Accurate RNA 5-methylcytosine site prediction based on heuristic physical-chemical properties reduction and classifier ensemble.

    PubMed

    Zhang, Ming; Xu, Yan; Li, Lei; Liu, Zi; Yang, Xibei; Yu, Dong-Jun

    2018-06-01

    RNA 5-methylcytosine (m5C) is an important post-transcriptional modification that plays an indispensable role in biological processes. The accurate identification of m5C sites from primary RNA sequences is especially useful for deeply understanding the mechanisms and functions of m5C. Due to the difficulty and expense of identifying m5C sites with wet-lab techniques, developing fast and accurate machine-learning-based prediction methods is urgently needed. In this study, we proposed a new m5C site predictor, called M5C-HPCR, by introducing a novel heuristic nucleotide physicochemical property reduction (HPCR) algorithm and a classifier ensemble. HPCR extracts multiple reducts of physical-chemical properties for encoding discriminative features, while the classifier ensemble integrates multiple base predictors, each of which is trained on a separate reduct of the physical-chemical properties obtained from HPCR. Rigorous jackknife tests on two benchmark datasets demonstrate that M5C-HPCR outperforms state-of-the-art m5C site predictors, with the highest values of MCC (0.859) and AUC (0.962). We also implemented the webserver of M5C-HPCR, which is freely available at http://cslab.just.edu.cn:8080/M5C-HPCR/. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Robust Derivation of Risk Reduction Strategies

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Port, Daniel; Feather, Martin

    2007-01-01

    Effective risk reduction strategies can be derived mechanically given sufficient characterization of the risks present in the system and the effectiveness of available risk reduction techniques. In this paper, we address an important question: can we reliably expect mechanically derived risk reduction strategies to be better than fixed or hand-selected risk reduction strategies, given that the quantitative assessment of risks and risk reduction techniques upon which mechanical derivation is based is difficult and likely to be inaccurate? We consider this question relative to two methods for deriving effective risk reduction strategies: the strategic method defined by Kazman, Port et al [Port et al, 2005], and the Defect Detection and Prevention (DDP) tool [Feather & Cornford, 2003]. We performed a number of sensitivity experiments to evaluate how inaccurate knowledge of risk and risk reduction techniques affect the performance of the strategies computed by the Strategic Method compared to a variety of alternative strategies. The experimental results indicate that strategies computed by the Strategic Method were significantly more effective than the alternative risk reduction strategies, even when knowledge of risk and risk reduction techniques was very inaccurate. The robustness of the Strategic Method suggests that its use should be considered in a wide range of projects.

  12. Spectroscopic, cyclic voltammetric and biological studies of transition metal complexes with mixed nitrogen-sulphur (NS) donor macrocyclic ligand derived from thiosemicarbazide

    NASA Astrophysics Data System (ADS)

    Chandra, Sulekh; Gupta, Lokesh Kumar; Sangeetika

    2005-11-01

    The complexation of a new mixed thia-aza-oxa macrocycle, viz. 2,12-dithio-5,9,14,18-tetraoxo-7,16-dithia-1,3,4,10,11,13-hexaazacyclooctadecane containing a thiosemicarbazone unit, with a series of transition metals Co(II), Ni(II) and Cu(II) has been investigated by different spectroscopic techniques. The structural features of the ligand have been studied by EI-mass, 1H NMR and IR spectral techniques. Elemental analyses, magnetic susceptibility, molar conductance, IR, electronic, and EPR spectral studies characterized the complexes. Electronic absorption and IR spectra of the complexes indicate octahedral geometry for the chloro, nitrato, thiocyanato and acetato complexes. The dimeric and neutral nature of the sulphato complexes is confirmed by magnetic susceptibility and low conductance values, and their electronic spectra suggest square-planar geometry. The redox behaviour, studied by cyclic voltammetry, shows metal-centered reduction processes for all complexes; the copper complexes show both oxidation and reduction processes. The redox potentials depend on the conformation of the central atom in the macrocyclic complexes. The newly synthesized macrocyclic ligand and its transition metal complexes show marked growth-inhibitory activity against the pathogenic bacteria and plant-pathogenic fungi under study.

  13. Multisensor multiresolution data fusion for improvement in classification

    NASA Astrophysics Data System (ADS)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancements in technology have facilitated easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield application-dependent significant information that might otherwise remain trapped within. The present work aims at improving classification by fusing features of coarse-resolution hyperspectral (1 m) LWIR and fine-resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling by interpolation to register the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. For the fine-resolution RGB data, a vegetation index is computed for classifying the vegetation class and a morphological building index is calculated for buildings. To extract textural features, occurrence and co-occurrence statistics are considered, with features extracted from all three bands of the RGB data. After feature extraction, Support Vector Machines (SVMs) are used for training and classification. To increase classification accuracy, post-processing steps such as removal of spurious noise (e.g., salt-and-pepper noise) are performed, followed by majority-vote filtering within objects for better object classification.

  14. Computational procedures for evaluating the sensitivity derivatives of vibration frequencies and Eigenmodes of framed structures

    NASA Technical Reports Server (NTRS)

    Fetterman, Timothy L.; Noor, Ahmed K.

    1987-01-01

    Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
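
    The standard result underlying such frequency-sensitivity computations is that, for the generalized eigenproblem K·phi = lambda·M·phi with mass-normalized modes, d(lambda)/dp = phi^T (dK/dp - lambda·dM/dp) phi. The toy sketch below evaluates this formula; the matrices are illustrative, not from the paper.

```python
# Sketch of the classical eigenvalue-sensitivity formula for a framed
# structure's generalized eigenproblem. Toy 2-DOF matrices only.
import numpy as np
from scipy.linalg import eigh

K = np.array([[4.0, -2.0], [-2.0, 4.0]])   # stiffness matrix
M = np.eye(2)                              # mass matrix
dK = np.array([[1.0, 0.0], [0.0, 0.0]])    # dK/dp for some design parameter p
dM = np.zeros((2, 2))                      # dM/dp

# eigh(K, M) returns M-orthonormal (mass-normalized) eigenvectors
lams, phis = eigh(K, M)
for i, (lam, phi) in enumerate(zip(lams, phis.T)):
    dlam = phi @ (dK - lam * dM) @ phi     # d(lambda)/dp
    print(f"mode {i}: lambda = {lam:.3f}, d(lambda)/dp = {dlam:.3f}")
```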

  15. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so implementing feature extraction and dimensionality reduction to improve recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed based on a statistical locally linear embedding (S-LLE) algorithm, which extends LLE by exploiting the fault class label information. The fault diagnosis approach first extracts the intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis are carried out easily and rapidly by a classifier. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach obviously improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
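
    The pipeline described above, manifold embedding followed by a simple classifier, can be sketched with plain (unsupervised) LLE as below. Note that the paper's S-LLE additionally exploits class labels during embedding, which this stand-in does not.

```python
# Hedged sketch: unsupervised LLE embedding of vibration features followed by
# a nearest-neighbour classifier, on synthetic data.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 24))            # time/frequency/EMD features
y = rng.integers(0, 4, size=300)          # four bearing conditions

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3, random_state=0)
X_low = lle.fit_transform(X)              # intrinsic manifold coordinates

clf = KNeighborsClassifier(n_neighbors=5).fit(X_low, y)
print(clf.score(X_low, y))
```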

  16. Epileptic seizure detection in EEG signal with GModPCA and support vector machine.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2017-01-01

    Epilepsy is one of the most common neurological disorders, caused by recurrent seizures. Electroencephalograms (EEGs) record neural activity and can detect epilepsy. Visual inspection of an EEG signal for epileptic seizure detection is a time-consuming process and may lead to human error; therefore, a number of automated seizure detection frameworks have recently been proposed to replace these traditional methods. Feature extraction and classification are two important steps in these procedures. Feature extraction focuses on finding the informative features that could be used for classification and correct decision-making. Therefore, proposing effective feature extraction techniques for seizure detection is of great significance. Principal Component Analysis (PCA) is a dimensionality reduction technique used in different fields of pattern recognition, including EEG signal classification. Global modular PCA (GModPCA) is a variation of PCA. In this paper, an effective framework with GModPCA and Support Vector Machine (SVM) is presented for epileptic seizure detection in EEG signals. Feature extraction is performed with GModPCA, whereas an SVM trained with a radial basis function kernel performs the classification between seizure and nonseizure EEG signals. Seven different experimental cases were conducted on the benchmark epilepsy EEG dataset, with system performance evaluated using 10-fold cross-validation. In addition, we prove analytically that GModPCA has lower time and space complexities than PCA. The experimental results show that EEG signals have strong inter-sub-pattern correlations. GModPCA and SVM were able to achieve 100% accuracy for the classification between normal and epileptic signals, and the classification results of the proposed approach were better than those of some existing methods in the literature. This study suggests that GModPCA and SVM could be used for automated epileptic seizure detection in EEG signals.
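
    GModPCA is a modular variant of PCA; our reading of the general idea (not the paper's exact formulation) is sketched below: split each EEG epoch into sub-patterns, fit PCA across the sub-patterns, and concatenate the per-module projections as SVM features.

```python
# Illustrative modular-PCA sketch; an interpretation of the idea, not the
# paper's exact GModPCA algorithm. Data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(10)
epochs = rng.normal(size=(200, 4096))     # EEG epochs
labels = rng.integers(0, 2, size=200)     # seizure vs. non-seizure

n_modules, n_pc = 8, 4
modules = epochs.reshape(200, n_modules, -1)            # split each epoch
flat = modules.reshape(-1, modules.shape[-1])           # all sub-patterns
pca = PCA(n_components=n_pc).fit(flat)
feats = pca.transform(flat).reshape(200, n_modules * n_pc)  # concatenate

clf = SVC(kernel="rbf").fit(feats, labels)
print(clf.score(feats, labels))
```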

  17. A review on machine learning principles for multi-view biological data integration.

    PubMed

    Li, Yifeng; Wu, Fang-Xiang; Ngom, Alioune

    2018-03-01

    Driven by high-throughput sequencing techniques, modern genomic and clinical studies are in strong need of integrative machine learning models to make better use of vast volumes of heterogeneous information in the deep understanding of biological systems and the development of predictive models. How data from multiple sources (called multi-view data) are incorporated in a learning system is a key step for successful analysis. In this article, we provide a comprehensive review of omics and clinical data integration techniques, from a machine learning perspective, for various analyses such as prediction, clustering, dimension reduction and association. We show that Bayesian models are able to use prior information and model measurements with various distributions; tree-based methods can either build a tree with all features or collectively make a final decision based on trees learned from each view; kernel methods fuse the similarity matrices learned from individual views into a final similarity matrix or learning model; network-based fusion methods are capable of inferring direct and indirect associations in a heterogeneous network; matrix factorization models have the potential to learn interactions among features from different views; and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanisms of biological systems.

  18. Metabolic and clinical assessment of efficacy of cryoablation therapy on skeletal masses by 18F-FDG positron emission tomography/computed tomography (PET/CT) and visual analogue scale (VAS): initial experience.

    PubMed

    Masala, Salvatore; Schillaci, Orazio; Bartolucci, Alberto D; Calabria, Ferdinando; Mammucari, Matteo; Simonetti, Giovanni

    2011-02-01

    Various therapy modalities have been proposed as standard treatments in the management of bone metastases. Radiation therapy remains the standard of care for patients with localized bone pain, but up to 30% of them do not experience notable pain relief. Percutaneous cryoablation is a minimally invasive technique that induces necrosis by alternately freezing and thawing a target tissue. This technique is successfully used to treat a variety of malignant and benign diseases at different sites. (18)F-FDG positron emission tomography/computed tomography ((18)F-FDG PET/CT) is a single imaging technique that provides, in a "single step", both the morphological and metabolic features of neoplastic lesions of the bone. The aim of this study was to evaluate the efficacy of the cryosurgical technique on secondary musculoskeletal masses according to semi-quantitative PET analysis and clinical evaluation with the visual analogue scale (VAS). We enrolled 20 patients with painful bone lesions (pain score exceeding 4 on the VAS) that were non-responsive to treatment; one lesion per patient was treated. All patients underwent a PET-CT evaluation before and 8 weeks after cryotherapy; the maximum standardized uptake value (SUV(max)) was measured before and after treatment for metabolic assessment of the response to therapy. After treatment, 18 patients (90%) showed a considerable reduction in SUV(max) value (>50%), suggestive of response to treatment; only 2 patients did not show a meaningful reduction in metabolic activity. Our preliminary study demonstrates that the quantitative analysis provided by PET correlates with the response to cryoablation therapy as assessed by CT data and clinical VAS evaluation.

  19. A novel method for predicting kidney stone type using ensemble learning.

    PubMed

    Kazemi, Yassaman; Mirroshandel, Seyed Abolghasem

    2018-01-01

    The high morbidity rate associated with kidney stone disease, which is a silent killer, is one of the main concerns in healthcare systems all over the world. Advanced data mining techniques such as classification can help in the early prediction of this disease and reduce its incidence and associated costs. The objective of the present study is to derive a model for the early detection of the type of kidney stone and of the most influential parameters, with the aim of providing a decision-support system. Information was collected from 936 patients with nephrolithiasis at the kidney center of the Razi Hospital in Rasht from 2012 through 2016. The prepared dataset included 42 features. Data pre-processing was the first step toward extracting the relevant features. The collected data were analyzed with Weka software, and various data mining models were used to prepare a predictive model. Various data mining algorithms such as the Bayesian model, different types of Decision Trees, Artificial Neural Networks, and Rule-based classifiers were used in these models. We also proposed four models based on ensemble learning to improve the accuracy of each learning algorithm. In addition, a novel technique for combining individual classifiers in ensemble learning was proposed, in which each individual classifier is assigned a weight by our proposed genetic-algorithm-based method. The generated knowledge was evaluated using a 10-fold cross-validation technique based on standard measures. The assessment of each feature for building a predictive model was another significant challenge, and the predictive strength of each feature for creating a reproducible outcome was also investigated. Across the applied models, parameters such as sex, uric acid condition, calcium level, hypertension, diabetes, nausea and vomiting, flank pain, and urinary tract infection (UTI) were the most vital parameters for predicting the chance of nephrolithiasis. The final ensemble-based model (with an accuracy of 97.1%) was robust and could safely be applied in future studies to predict the chance of developing nephrolithiasis. This model provides a novel way to study stone disease by deciphering the complex interactions among different biological variables, thus helping in early identification and a reduction in diagnosis time. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Comparison of the direct and indirect reduction techniques during the surgical management of posterior malleolar fractures.

    PubMed

    Shi, Hong-Fei; Xiong, Jin; Chen, Yi-Xin; Wang, Jun-Fei; Qiu, Xu-Sheng; Huang, Jie; Gui, Xue-Yang; Wen, Si-Yuan; Wang, Yin-He

    2017-03-14

    The optimal method for the reduction and fixation of posterior malleolar fractures (PMFs) remains inconclusive; currently, both the indirect and direct reduction techniques are widely used. We aimed to compare the reduction quality and clinical outcome of posterior malleolar fractures managed with the direct reduction technique through a posterolateral approach or the indirect reduction technique using ligamentotaxis. Patients with a PMF involving over 25% of the articular surface were recruited and assigned to the direct reduction (DR) group or the indirect reduction (IR) group. Following reduction and fixation of the fracture, the quality of fracture reduction was evaluated on post-operative CT images. Clinical and radiological follow-ups were performed at 6 weeks, 3 months, 6 months, 12 months, and then at 6-month intervals postoperatively. Functional outcome (AOFAS score), ankle range of motion, and Visual Analog Scale (VAS) were evaluated at the last follow-up. Statistical differences between the DR and IR groups were compared for patient demographics, quality of fracture reduction, AOFAS score, and VAS. In total, 116 patients were included: 64 in the DR group and 52 in the IR group. The quality of fracture reduction was significantly higher in the DR group (P = 0.038). Among patients who completed a minimum of 12 months' follow-up, a median AOFAS score of 87 was recorded in the DR group, significantly higher than the median score of 80 recorded in the IR group. The ankle range of motion was slightly better in the DR group, with mean dorsiflexion restriction of 5.2° and 6.1° in the DR and IR groups, respectively (P = 0.331). Similar VAS scores were observed in the two groups (P = 0.419). The direct reduction technique through a posterolateral approach provides better quality of fracture reduction and functional outcome in the management of PMFs involving over 25% of the articular surface, compared with the indirect reduction technique using ligamentotaxis. NCT02801474 (retrospectively registered, June 2016, ClinicalTrials.gov).

  1. Denoising the Speaking Brain: Toward a Robust Technique for Correcting Artifact-Contaminated fMRI Data under Severe Motion

    PubMed Central

    Xu, Yisheng; Tong, Yunxia; Liu, Siyuan; Chow, Ho Ming; AbdulSabur, Nuria Y.; Mattay, Govind S.; Braun, Allen R.

    2014-01-01

    A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach have not been established as follows: 1) a mechanistically-based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role for understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potentials of a spatially-based machine learning classifier and the general criteria for feature selection have both been examined, in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of distance-dependent effect of head motion on resting-state functional connectivity. PMID:25225001

  2. Denoising the speaking brain: toward a robust technique for correcting artifact-contaminated fMRI data under severe motion.

    PubMed

    Xu, Yisheng; Tong, Yunxia; Liu, Siyuan; Chow, Ho Ming; AbdulSabur, Nuria Y; Mattay, Govind S; Braun, Allen R

    2014-12-01

    A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach have not been established: 1) a mechanistically based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; and 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role in understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potential of a spatially based machine-learning classifier and the general criteria for feature selection are both examined in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of the distance-dependent effect of head motion on resting-state functional connectivity. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Discrimination of stroke-related mild cognitive impairment and vascular dementia using EEG signal analysis.

    PubMed

    Al-Qazzaz, Noor Kamal; Ali, Sawal Hamid Bin Mohd; Ahmad, Siti Anom; Islam, Mohd Shabiul; Escudero, Javier

    2018-01-01

    Stroke survivors are more prone to developing cognitive impairment and dementia. Dementia detection is a challenge for supporting personalized healthcare. This study analyzes the electroencephalogram (EEG) background activity of 5 vascular dementia (VaD) patients, 15 stroke-related patients with mild cognitive impairment (MCI), and 15 healthy control subjects during a working memory (WM) task. The objective of this study is twofold. First, it aims to enhance the discrimination of VaD, stroke-related MCI patients, and control subjects using fuzzy neighborhood preserving analysis with QR-decomposition (FNPAQR); second, it aims to extract and investigate the spectral features that characterize post-stroke dementia patients compared to control subjects. Nineteen channels were recorded and analyzed using the independent component analysis and wavelet analysis (ICA-WT) denoising technique. Using ANOVA, linear spectral power measures, including relative powers (RP) and power ratios, were calculated to test whether the EEG dominant frequencies were slowed down in VaD and stroke-related MCI patients. Non-linear features, including permutation entropy (PerEn) and fractal dimension (FD), were used to test the degree of irregularity and complexity, which was significantly lower in patients with VaD and stroke-related MCI than in control subjects (ANOVA; p < 0.05). This study is the first to apply the FNPAQR dimensionality reduction technique to the EEG background activity of dementia patients. The impairment of post-stroke patients was detected using support vector machine (SVM) and k-nearest neighbors (kNN) classifiers. A comparative study was performed to check the effectiveness of the FNPAQR dimensionality reduction technique with the SVM and kNN classifiers: with FNPAQR, SVM and kNN obtained 91.48% and 89.63% accuracy, respectively, in classifying VaD, stroke-related MCI, and control subjects, compared with 70% and 67.78% without it. Therefore, EEG could be a reliable index for inspecting concise markers that are sensitive to VaD and stroke-related MCI patients compared to healthy control subjects.
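
    As a rough illustration of two of the feature families mentioned above, the sketch below computes relative band powers with Welch's method and a normalised permutation entropy; the band edges and embedding parameters are common choices, not the study's exact settings.

```python
# Hedged sketch of relative spectral band power and permutation entropy for
# a single EEG channel; synthetic noise stands in for real recordings.
import numpy as np
from itertools import permutations
from math import factorial
from scipy.signal import welch

def relative_band_powers(x, fs):
    bands = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    total = pxx[(f >= 1) & (f <= 30)].sum()          # power in the 1-30 Hz range
    return {name: pxx[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

def permutation_entropy(x, order=3, delay=1):
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - (order - 1) * delay):
        counts[tuple(np.argsort(x[i:i + order * delay:delay]))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum() / np.log2(factorial(order))  # normalised to [0, 1]

fs = 256
x = np.random.randn(fs * 10)          # stand-in for 10 s of one EEG channel
print(relative_band_powers(x, fs))
print(permutation_entropy(x))
```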

  4. PAPR reduction in FBMC using an ACE-based linear programming optimization

    NASA Astrophysics Data System (ADS)

    van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan

    2014-12-01

    This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The approach extends current PAPR reduction active constellation extension (ACE) methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation as the main contribution. The four techniques introduced can be split into two categories: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the proposed SGP ACE-based techniques are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.
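
    For readers unfamiliar with the metric, the following sketch computes the PAPR of a plain OFDM multicarrier symbol and shows the effect of naive amplitude clipping; it is only a baseline illustration, not the paper's ACE/SGP methods for FBMC.

```python
# Illustrative PAPR computation on an oversampled OFDM symbol with simple
# amplitude clipping as a (distorting) baseline PAPR reducer.
import numpy as np

rng = np.random.default_rng(1)
N, oversample = 64, 4                       # subcarriers, oversampling factor

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

# Random QPSK symbols on N subcarriers; zero-padding in the middle of the
# spectrum implements time-domain oversampling via the IFFT.
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
spectrum = np.concatenate([qpsk[:N // 2], np.zeros((oversample - 1) * N), qpsk[N // 2:]])
x = np.fft.ifft(spectrum) * np.sqrt(oversample * N)

# Naive clipping (unlike ACE, this distorts the constellation).
clip_level = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))
x_clipped = np.where(np.abs(x) > clip_level, clip_level * x / np.abs(x), x)

print(f"PAPR before: {papr_db(x):.2f} dB, after clipping: {papr_db(x_clipped):.2f} dB")
```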

  5. Flexible multibody simulation of automotive systems with non-modal model reduction techniques

    NASA Astrophysics Data System (ADS)

    Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter

    2012-12-01

    The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics have come into focus as a way to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, whereas the elastic shape functions are calculated with modern model reduction techniques like moment matching by projection on Krylov subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with vibration characteristics in a wide frequency range is evaluated. The accuracy and the calculation performance of these modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
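
    A minimal sketch of moment matching by projection on a Krylov subspace, for a first-order LTI system at expansion point s = 0; real flexible-body models are second-order and far larger, and the test matrix and sizes here are made up.

```python
# Krylov moment matching sketch: build an orthonormal basis of
# span{A^{-1}b, A^{-2}b, ...} and project the system onto it.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def krylov_reduce(A, b, c, m):
    lu = lu_factor(A)                     # factor once, reuse for each solve
    V = np.zeros((A.shape[0], m))
    v = lu_solve(lu, b)                   # A^{-1} b
    for j in range(m):
        for k in range(j):                # Gram-Schmidt orthogonalisation
            v -= (V[:, k] @ v) * V[:, k]
        V[:, j] = v / np.linalg.norm(v)
        v = lu_solve(lu, V[:, j])         # next Krylov direction A^{-1} v_j
    return V.T @ A @ V, V.T @ b, V.T @ c, V

n, m = 200, 10
rng = np.random.default_rng(2)
A = -2 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # stable-ish test matrix
b, c = rng.standard_normal(n), rng.standard_normal(n)
Ar, br, cr, V = krylov_reduce(A, b, c, m)

# The reduced model matches the leading transfer-function moment at s = 0:
print(np.allclose(c @ np.linalg.solve(A, b), cr @ np.linalg.solve(Ar, br), rtol=1e-6))
```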

  6. Deep learning aided decision support for pulmonary nodules diagnosing: a review.

    PubMed

    Yang, Yixin; Feng, Xiaoyi; Chi, Wenhao; Li, Zhengyang; Duan, Wenzhe; Liu, Haiping; Liang, Wenhua; Wang, Wei; Chen, Ping; He, Jianxing; Liu, Bo

    2018-04-01

    Deep learning techniques have recently emerged as promising decision support approaches for automatically analyzing medical images for different clinical diagnostic purposes. Computer-assisted diagnosis of pulmonary nodules has received considerable theoretical, computational, and empirical research attention, and numerous methods have been developed over the past five decades for the detection and classification of pulmonary nodules on different image formats, including chest radiographs, computed tomography (CT), and positron emission tomography. The recent remarkable progress in deep learning for pulmonary nodules, achieved in both academia and industry, has demonstrated that deep learning techniques are promising alternative decision support schemes for effectively tackling the central issues in pulmonary nodule diagnosis, including feature extraction, nodule detection, false-positive reduction, and benign-malignant classification for the huge volume of chest scan data. The main goal of this investigation is to provide a comprehensive state-of-the-art review of deep learning aided decision support for pulmonary nodule diagnosis. As far as the authors know, this is the first review devoted exclusively to deep learning techniques for pulmonary nodule diagnosis.

  7. Image analysis and machine learning for detecting malaria.

    PubMed

    Poostchi, Mahdieh; Silamut, Kamolrat; Maude, Richard J; Jaeger, Stefan; Thoma, George

    2018-04-01

    Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. In particular, one of the barriers to successful mortality reduction has been inadequate malaria diagnosis. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. Published by Elsevier Inc.

  8. Combination of process and vibration data for improved condition monitoring of industrial systems working under variable operating conditions

    NASA Astrophysics Data System (ADS)

    Ruiz-Cárcel, C.; Jaramillo, V. H.; Mba, D.; Ottewill, J. R.; Cao, Y.

    2016-01-01

    The detection and diagnosis of faults in industrial processes is a very active field of research due to the reduction in maintenance costs achieved by the implementation of process monitoring algorithms such as Principal Component Analysis, Partial Least Squares or more recently Canonical Variate Analysis (CVA). Typically the condition of rotating machinery is monitored separately using vibration analysis or other specific techniques. Conventional vibration-based condition monitoring techniques are based on the tracking of key features observed in the measured signal. Typically steady-state loading conditions are required to ensure consistency between measurements. In this paper, a technique based on merging process and vibration data is proposed with the objective of improving the detection of mechanical faults in industrial systems working under variable operating conditions. The capabilities of CVA for detection and diagnosis of faults were tested using experimental data acquired from a compressor test rig where different process faults were introduced. Results suggest that the combination of process and vibration data can effectively improve the detectability of mechanical faults in systems working under variable operating conditions.

  9. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  10. Exploring the CAESAR database using dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Raymer, Michael L.

    2012-06-01

    The Civilian American and European Surface Anthropometry Resource (CAESAR) database containing over 40 anthropometric measurements on over 4000 humans has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
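
    A small scikit-learn sketch of the kind of comparison described above, crossing two reduction methods with two of the classifiers on synthetic stand-in data; diffusion maps are not available in scikit-learn, and the CAESAR data themselves are not reproduced here.

```python
# Dimensionality reduction (linear PCA vs. non-linear Isomap) crossed with
# two classifiers, evaluated by 5-fold cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)        # stand-in anthropometric data

for reducer in [PCA(n_components=5), Isomap(n_components=5)]:
    for clf in [GaussianNB(), SVC(kernel="rbf")]:
        acc = cross_val_score(make_pipeline(reducer, clf), X, y, cv=5).mean()
        print(type(reducer).__name__, type(clf).__name__, f"{acc:.3f}")
```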

  11. Self-Assembled Fe-N-Doped Carbon Nanotube Aerogels with Single-Atom Catalyst Feature as High-Efficiency Oxygen Reduction Electrocatalysts

    DOE PAGES

    Zhu, Chengzhou; Fu, Shaofang; Song, Junhua; ...

    2017-02-06

    In this study, self-assembled M–N-doped carbon nanotube aerogels with a single-atom catalyst feature are reported for the first time, prepared through a one-step hydrothermal route and a subsequent facile annealing treatment. By taking advantage of the porous nanostructure, 1D nanotubes, and single-atom catalyst feature, the resultant Fe–N-doped carbon nanotube aerogels exhibit excellent oxygen reduction reaction electrocatalytic performance, even better than commercial Pt/C in alkaline solution.

  12. Technique and cue selection for graphical presentation of generic hyperdimensional data

    NASA Astrophysics Data System (ADS)

    Howard, Lee M.; Burton, Robert P.

    2013-12-01

    Several presentation techniques have been created for the visualization of data with more than three variables. Packages have been written, each of which implements a subset of these techniques. However, these packages generally fail to provide all the features needed by the user during the visualization process, and they generally support only a few presentation techniques. A new package called Petrichor accommodates all necessary and useful features together in one system. Any presentation technique may be added easily through an extensible plugin system. Features are supported by a user interface that allows easy interaction with data. Annotations allow users to mark up visualizations and share information with others. By providing a hyperdimensional graphics package that easily accommodates presentation techniques and includes a complete set of features, including those that are rarely or never supported elsewhere, the user is provided with a tool that facilitates improved interaction with multivariate data to extract and disseminate information.

  13. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data is offset by the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, making it difficult to accurately estimate the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and in research projects. However, consistent compliance with sub-decimetre accuracy as well as a correction of errors in height remain unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition, the imprecise exterior orientation parameters of the MM platform will be utilised, as they enable the application of accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct the affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with selective keypoint detection and template matching. Originating from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry and a different original perspective. To respond to these challenges for feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario. This method leads to a significant reduction of outliers due to the limited availability of putative matches and the utilisation of templates instead of feature descriptors. In the experiments discussed in this paper, typical urban scenes have been used to evaluate the proposed method. Even though no additional outlier removal techniques have been used, our method yields almost 90% correct correspondences. However, repetitive image patterns may still induce ambiguities which cannot be fully averted by this technique. Possible advancements will also be briefly presented.
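
    The core matching idea, template matching restricted to a search window predicted from approximate orientation, can be sketched with OpenCV as below; the synthetic arrays and window geometry are stand-ins for the ortho-projected MM patch and the aerial image.

```python
# Template matching inside a predicted search window: correlation is
# evaluated locally instead of over the whole aerial image.
import numpy as np
import cv2

rng = np.random.default_rng(3)
aerial = rng.integers(0, 255, (800, 800), dtype=np.uint8)   # stand-in aerial image
template = aerial[400:440, 300:340].copy()                  # 40x40 "MM patch"

# An approximate position from the platform's (imprecise) exterior
# orientation defines the search window.
cx, cy, half = 310, 410, 60
window = aerial[cy - half:cy + half, cx - half:cx + half]

res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
match_x, match_y = cx - half + max_loc[0], cy - half + max_loc[1]
print(f"match at ({match_x}, {match_y}) with score {max_val:.3f}")  # (300, 400)
```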

  14. Treatment of real wastewater produced from Mobil car wash station using electrocoagulation technique.

    PubMed

    El-Ashtoukhy, E-S Z; Amin, N K; Fouad, Y O

    2015-10-01

    This paper deals with the electrocoagulation of real wastewater produced from a car wash station using a new cell design featuring a horizontal spiral anode placed above a horizontal disc cathode. The study dealt with the chemical oxygen demand (COD) reduction and turbidity removal using electrodes in a batch mode. Various operating parameters such as current density, initial pH, NaCl concentration, temperature, and electrode material were examined to optimize the performance of the process. Also, characterization of sludge formed during electrocoagulation was carried out. The results indicated that the COD reduction and turbidity removal increase with increasing the current density and NaCl concentration; pH from 7 to 8 was found to be optimum for treating the wastewater. Temperature was found to have an insignificant effect on the process. Aluminum was superior to iron as a sacrificial electrode material in treating car wash wastewater. Energy consumption based on COD reduction ranged from 2.32 to 15.1 kWh/kg COD removed depending on the operating conditions. Finally, the sludge produced during electrocoagulation using aluminum electrodes was characterized by scanning electron microscopy (SEM) and energy dispersive spectrometry (EDS) analysis.

  15. Content Abstract Classification Using Naive Bayes

    NASA Astrophysics Data System (ADS)

    Latif, Syukriyanto; Suwardoyo, Untung; Aldrin Wihelmus Sanadi, Edwin

    2018-03-01

    This study aims to classify abstracts from English-language journals based on their most frequently used words. The research uses text mining, which extracts text data to retrieve information from a set of documents. A total of 120 abstracts were downloaded from www.computer.org. The data grouping consists of three categories: DM (Data Mining), ITS (Intelligent Transport System) and MM (Multimedia). The system was built using the naive Bayes algorithm to classify the abstracts, with a feature selection process using term weighting to assign a weight to each word. A dimension reduction technique removes words that rarely appear across documents, tested with reduction parameters of 10%-90% of the 5,344 words. The performance of the classification system is tested using a confusion matrix over varying splits of training and test data. The results showed that the best classification was obtained with a 75%/25% training/test split of the total data. Accuracy rates for the DM, ITS, and MM categories were 100%, 100%, and 86%, respectively, with a dimension reduction parameter of 30% and a learning rate between 0.1 and 0.5.
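
    A rough scikit-learn analogue of the described pipeline, assuming a made-up toy corpus in place of the study's 120 abstracts: term weighting, a min_df cutoff as a simple form of rare-word dimension reduction, and a naive Bayes classifier on a 75/25 split.

```python
# TF-IDF term weighting + rare-word pruning + multinomial naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

docs = ["mining frequent patterns in large transactional databases",
        "vehicle routing and traffic flow prediction for smart roads",
        "video streaming compression and multimedia content delivery",
        "association rule mining for market basket analysis",
        "intelligent transport systems with real-time traffic sensors",
        "audio and image codecs for multimedia applications"] * 20
labels = ["DM", "ITS", "MM", "DM", "ITS", "MM"] * 20

vec = TfidfVectorizer(min_df=2)          # drop words appearing in < 2 documents
X = vec.fit_transform(docs)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          stratify=labels, random_state=0)
clf = MultinomialNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```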

  16. Ultrasonic Measurement of Erosion/corrosion Rates in Industrial Piping Systems

    NASA Astrophysics Data System (ADS)

    Sinclair, A. N.; Safavi, V.; Honarvar, F.

    2011-06-01

    Industrial piping systems that carry aggressive corrosion or erosion agents may suffer from a gradual wall thickness reduction that eventually threatens pipe integrity. Thinning rates could be estimated from the very small change in wall thickness values measured by conventional ultrasound over a time span of at least a few months. However, measurements performed over shorter time spans would yield no useful information—minor signal distortions originating from grain noise and ultrasonic equipment imperfections prevent a meaningful estimate of the minuscule reduction in echo travel time. Using a Model-Based Estimation (MBE) technique, a signal processing scheme has been developed that enables the echo signals from the pipe wall to be separated from the noise. This was implemented in a laboratory experimental program, featuring accelerated erosion/corrosion on the inner wall of a test pipe. The result was a reduction in the uncertainty in the wall thinning rate by a factor of four. This improvement enables a more rapid response by system operators to a change in plant conditions that could pose a pipe integrity problem. It also enables a rapid evaluation of the effectiveness of new corrosion inhibiting agents under plant operating conditions.

  17. Design of Ultrathin Pt-Based Multimetallic Nanostructures for Efficient Oxygen Reduction Electrocatalysis.

    PubMed

    Lai, Jianping; Guo, Shaojun

    2017-12-01

    Nanocatalysts with high platinum (Pt) utilization efficiency are attracting extensive attention for the oxygen reduction reaction (ORR) conducted at the cathode of fuel cells. Ultrathin Pt-based multimetallic nanostructures show obvious advantages in accelerating the sluggish cathodic ORR due to their ultrahigh Pt utilization efficiency. A focus is provided on recent important developments in using wet-chemistry techniques to make and tune multimetallic nanostructures with high Pt utilization efficiency for boosting ORR activity and durability. First, new synthetic methods for multimetallic core/shell nanoparticles with ultrathin shells for achieving highly efficient ORR catalysts are reviewed. To obtain better ORR activity and stability, multimetallic nanowires and nanosheets with well-defined structures and surfaces are further highlighted. Furthermore, ultrathin Pt-based multimetallic nanoframes that feature 3D molecularly accessible surfaces for achieving more efficient ORR catalysis are discussed. Finally, remaining challenges and an outlook for this promising research field are provided. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Biomimicry of multifunctional nanostructures in the neck feathers of mallard (Anas platyrhynchos L.) drakes

    NASA Astrophysics Data System (ADS)

    Khudiyev, Tural; Dogan, Tamer; Bayindir, Mehmet

    2014-04-01

    Biological systems serve as fundamental sources of inspiration for the development of artificially colored devices, and their investigation provides a great number of photonic design opportunities. While several successful biomimetic designs have been detailed in the literature, conventional fabrication techniques nonetheless remain inferior to their natural counterparts in complexity, ease of production and material economy. Here, we investigate the iridescent neck feathers of Anas platyrhynchos drakes, show that they feature an unusual arrangement of two-dimensional (2D) photonic crystals and further exhibit a superhydrophobic surface, and mimic this multifunctional structure using a nanostructure composite fabricated by a recently developed top-down iterative size reduction method, which avoids the above-mentioned fabrication challenges, provides macroscale control and enhances hydrophobicity through the surface structure. Our 2D solid core photonic crystal fibres strongly resemble drake neck plumage in structure and fully polymeric material composition, and can be produced in a wide array of colors by minor alterations during the size reduction process.

  19. Biomimicry of multifunctional nanostructures in the neck feathers of mallard (Anas platyrhynchos L.) drakes

    PubMed Central

    Khudiyev, Tural; Dogan, Tamer; Bayindir, Mehmet

    2014-01-01

    Biological systems serve as fundamental sources of inspiration for the development of artificially colored devices, and their investigation provides a great number of photonic design opportunities. While several successful biomimetic designs have been detailed in the literature, conventional fabrication techniques nonetheless remain inferior to their natural counterparts in complexity, ease of production and material economy. Here, we investigate the iridescent neck feathers of Anas platyrhynchos drakes, show that they feature an unusual arrangement of two-dimensional (2D) photonic crystals and further exhibit a superhydrophobic surface, and mimic this multifunctional structure using a nanostructure composite fabricated by a recently developed top-down iterative size reduction method, which avoids the above-mentioned fabrication challenges, provides macroscale control and enhances hydrophobicity through the surface structure. Our 2D solid core photonic crystal fibres strongly resemble drake neck plumage in structure and fully polymeric material composition, and can be produced in a wide array of colors by minor alterations during the size reduction process. PMID:24751587

  20. Biomimicry of multifunctional nanostructures in the neck feathers of mallard (Anas platyrhynchos L.) drakes.

    PubMed

    Khudiyev, Tural; Dogan, Tamer; Bayindir, Mehmet

    2014-04-22

    Biological systems serve as fundamental sources of inspiration for the development of artificially colored devices, and their investigation provides a great number of photonic design opportunities. While several successful biomimetic designs have been detailed in the literature, conventional fabrication techniques nonetheless remain inferior to their natural counterparts in complexity, ease of production and material economy. Here, we investigate the iridescent neck feathers of Anas platyrhynchos drakes, show that they feature an unusual arrangement of two-dimensional (2D) photonic crystals and further exhibit a superhydrophobic surface, and mimic this multifunctional structure using a nanostructure composite fabricated by a recently developed top-down iterative size reduction method, which avoids the above-mentioned fabrication challenges, provides macroscale control and enhances hydrophobicity through the surface structure. Our 2D solid core photonic crystal fibres strongly resemble drake neck plumage in structure and fully polymeric material composition, and can be produced in a wide array of colors by minor alterations during the size reduction process.

  1. Preliminary noise tradeoff study of a Mach 2.7 cruise aircraft

    NASA Technical Reports Server (NTRS)

    Mascitti, V. R.; Maglieri, D. J. (Editor); Raney, J. P. (Editor)

    1979-01-01

    NASA computer codes in the areas of preliminary sizing and enroute performance, takeoff and landing performance, aircraft noise prediction, and economics were used in a preliminary noise tradeoff study for a Mach 2.7 design supersonic cruise concept. Aerodynamic configuration data were based on wind-tunnel model tests and related analyses. Aircraft structural characteristics and weight were based on advanced structural design methodologies, assuming conventional titanium technology. The most advanced noise prediction techniques available were used, and aircraft operating costs were estimated using accepted industry methods. The four engine cycles included in the study were based on assumed 1985 technology levels. Propulsion data were provided by aircraft manufacturers. Additional empirical data are needed to define both the noise reduction features and other operating characteristics of all engine cycles under study. Data on VCE design parameters, coannular nozzle inverted-flow noise reduction, and advanced mechanical suppressors are urgently needed to reduce the present uncertainties in studies of this type.

  2. On the connection between multigrid and cyclic reduction

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.

    1984-01-01

    A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error, which are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
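
    Since one-dimensional cyclic reduction is the reference point here, below is a compact sketch of it for a tridiagonal system of size 2^k - 1, verified against the discrete 1D Poisson problem; the recursive formulation is an illustrative choice.

```python
# Cyclic reduction: eliminate the odd-indexed unknowns of a tridiagonal
# system recursively, then recover the even-indexed ones by back-substitution.
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a[i] x[i-1] + b[i] x[i] + c[i] x[i+1] = d[i], with a[0] = c[-1] = 0.
    The system size must be 2^k - 1."""
    n = len(b)
    if n == 1:
        return d / b
    odd = np.arange(1, n, 2)
    alpha, beta = -a[odd] / b[odd - 1], -c[odd] / b[odd + 1]
    x = np.zeros(n)
    x[odd] = cyclic_reduction(alpha * a[odd - 1],
                              b[odd] + alpha * c[odd - 1] + beta * a[odd + 1],
                              beta * c[odd + 1],
                              d[odd] + alpha * d[odd - 1] + beta * d[odd + 1])
    xpad = np.zeros(n + 2)                    # padded copy for boundary terms
    xpad[1:-1] = x
    even = np.arange(0, n, 2)
    x[even] = (d[even] - a[even] * xpad[even] - c[even] * xpad[even + 2]) / b[even]
    return x

# 1D Poisson test: -u'' = 1 on (0, 1), u(0) = u(1) = 0, n = 2^5 - 1 points.
n = 31
h = 1.0 / (n + 1)
a, b, c = -np.ones(n), 2 * np.ones(n), -np.ones(n)
a[0] = c[-1] = 0.0
d = np.full(n, h * h)
x = cyclic_reduction(a, b, c, d)
u = np.linspace(h, 1 - h, n)
print(np.allclose(x, 0.5 * u * (1 - u)))      # exact solution of -u'' = 1
```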

  3. CHOBS: Color Histogram of Block Statistics for Automatic Bleeding Detection in Wireless Capsule Endoscopy Video.

    PubMed

    Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A

    2018-01-01

    Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage, however, is the long review time, which is very laborious as continuous manual intervention is necessary. In order to reduce the burden on the clinician, an automatic bleeding detection method for WCE video is proposed in this paper based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to the capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining local block features of three different color planes of RGB color space, an index value is defined. A color histogram, extracted from those index values, provides distinguishable color texture features. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the extracted local features, which incurs no additional computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images collected from a publicly available database, very satisfactory bleeding frame and zone detection performance is achieved in comparison to that obtained by some of the existing methods. In the case of bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in the case of bleeding zone detection, 95.75% precision is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and it can effectively detect bleeding frames and zones in continuous WCE video data.
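
    A loose sketch of the block-statistics idea (not the exact CHOBS pipeline): per-block statistics on the RGB planes form the local features, and PCA supplies the drastic feature reduction mentioned above; the frame and the block size are arbitrary stand-ins.

```python
# Per-block mean/std features on each RGB plane, compressed with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, (240, 240, 3)).astype(float)  # stand-in WCE frame
B = 8                                                       # block size

blocks = frame.reshape(240 // B, B, 240 // B, B, 3).swapaxes(1, 2)
feats = np.concatenate([blocks.mean(axis=(2, 3)),           # per-block, per-plane
                        blocks.std(axis=(2, 3))], axis=-1)  # mean and std
feats = feats.reshape(-1, 6)                                # one row per block

pca = PCA(n_components=2)                                   # drastic reduction
reduced = pca.fit_transform(feats)
print(feats.shape, "->", reduced.shape)                     # (900, 6) -> (900, 2)
```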

  4. Classification of brain tumours using short echo time 1H MR spectra

    NASA Astrophysics Data System (ADS)

    Devos, A.; Lukas, L.; Suykens, J. A. K.; Vanhamme, L.; Tate, A. R.; Howe, F. A.; Majós, C.; Moreno-Torres, A.; van der Graaf, M.; Arús, C.; Van Huffel, S.

    2004-09-01

    The purpose was to objectively compare the application of several techniques and the use of several input features for brain tumour classification using Magnetic Resonance Spectroscopy (MRS). Short echo time 1H MRS signals from patients with glioblastomas (n = 87), meningiomas (n = 57), metastases (n = 39), and astrocytomas grade II (n = 22) were provided by six centres in the European Union funded INTERPRET project. Linear discriminant analysis, least squares support vector machines (LS-SVM) with a linear kernel and LS-SVM with radial basis function kernel were applied and evaluated over 100 stratified random splittings of the dataset into training and test sets. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of binary classifiers, while the percentage of correct classifications was used to evaluate the multiclass classifiers. The influence of several factors on the classification performance has been tested: L2- vs. water normalization, magnitude vs. real spectra and baseline correction. The effect of input feature reduction was also investigated by using only the selected frequency regions containing the most discriminatory information, and peak integrated values. Using L2-normalized complete spectra the automated binary classifiers reached a mean test AUC of more than 0.95, except for glioblastomas vs. metastases. Similar results were obtained for all classification techniques and input features except for water normalized spectra, where classification performance was lower. This indicates that data acquisition and processing can be simplified for classification purposes, excluding the need for separate water signal acquisition, baseline correction or phasing.
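
    The evaluation protocol, 100 stratified random splits scored by test AUC, can be sketched as follows with scikit-learn, using synthetic stand-in spectra since the INTERPRET data are not public.

```python
# Binary classifier scored by AUC over 100 stratified random train/test splits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=144, n_features=200, n_informative=10,
                           random_state=0)     # e.g. glioblastoma vs. meningioma

aucs = []
for train, test in StratifiedShuffleSplit(n_splits=100, test_size=0.3,
                                          random_state=0).split(X, y):
    lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], lda.decision_function(X[test])))
print(f"mean test AUC over 100 splits: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```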

  5. Reduction of streamflow monitoring networks by a reference point approach

    NASA Astrophysics Data System (ADS)

    Cetinkaya, Cem P.; Harmancioglu, Nilgun B.

    2014-05-01

    Adoption of an integrated approach to water management strongly forces policy and decision-makers to focus on hydrometric monitoring systems as well. Existing hydrometric networks need to be assessed and revised against the requirements on water quantity data to support integrated management. One of the questions that a network assessment study should resolve is whether a current monitoring system can be consolidated in view of the increased expenditures in time, money and effort imposed on the monitoring activity. Within the last decade, governmental monitoring agencies in Turkey have foreseen an audit on all their basin networks in view of prevailing economic pressures. In particular, they question how they can decide whether monitoring should be continued or terminated at a particular site in a network. The present study was initiated to address this question by examining the applicability of a method called “reference point approach” (RPA) for network assessment and reduction purposes. The main objective of the study is to develop an easily applicable and flexible network reduction methodology, focusing mainly on the assessment of the “performance” of existing streamflow monitoring networks in view of variable operational purposes. The methodology is applied to 13 hydrometric stations in the Gediz Basin, along the Aegean coast of Turkey. The results have shown that the simplicity of the method, in contrast to more complicated computational techniques, is an asset that facilitates the involvement of decision makers in the application of the methodology, allowing a more interactive assessment procedure between the monitoring agency and the network designer. The method permits ranking of hydrometric stations with regard to multiple objectives of monitoring and the desired attributes of the basin network. Another distinctive feature of the approach is that it also assists decision making in cases with limited data and metadata. These features of the RPA highlight its advantages over existing network assessment and reduction methods.

  6. Effective diagnosis of Alzheimer’s disease by means of large margin-based methodology

    PubMed Central

    2012-01-01

    Background Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide the clinicians in the Alzheimer’s Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) Systems. Methods A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with a LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Results Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) linear transformation of the PLS or PCA reduced data, ii) feature reduction technique, and iii) classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when a NMSE-PLS-LMNN feature extraction method was used in combination with a SVM classifier, thus outperforming recently reported baseline methods. Conclusions All the proposed methods turned out to be a valid solution for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also makes (in combination with NMSE and PLS) this rate variation more stable. In addition, their generalization ability is another advance, since the experiments were performed on two image modalities (SPECT and PET). PMID:22849649

  7. Effective diagnosis of Alzheimer's disease by means of large margin-based methodology.

    PubMed

    Chaves, Rosa; Ramírez, Javier; Górriz, Juan M; Illán, Ignacio A; Gómez-Río, Manuel; Carnero, Cristobal

    2012-07-31

    Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide the clinicians in the Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) Systems. A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with a LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) linear transformation of the PLS or PCA reduced data, ii) feature reduction technique, and iii) classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when a NMSE-PLS-LMNN feature extraction method was used in combination with a SVM classifier, thus outperforming recently reported baseline methods. All the proposed methods turned out to be a valid solution for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also makes (in combination with NMSE and PLS) this rate variation more stable. In addition, their generalization ability is another advance, since the experiments were performed on two image modalities (SPECT and PET).
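
    A hedged sketch of one branch of the pipeline above, PLS as supervised feature reduction followed by an SVM under k-fold cross-validation; the NMSE/ROI extraction and the LMNN step are omitted, and the data are synthetic.

```python
# PLS dimension reduction + SVM, scored by stratified 10-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=120, n_features=500, n_informative=15,
                           random_state=0)       # stand-in for NMSE features

accs = []
for train, test in StratifiedKFold(n_splits=10).split(X, y):
    pls = PLSRegression(n_components=10).fit(X[train], y[train])
    svm = SVC(kernel="rbf").fit(pls.transform(X[train]), y[train])
    accs.append(svm.score(pls.transform(X[test]), y[test]))
print(f"10-fold accuracy: {np.mean(accs):.3f}")
```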

  8. Robust Feature Selection Technique using Rank Aggregation.

    PubMed

    Sarkar, Chandrima; Cooley, Sarah; Srivastava, Jaideep

    2014-01-01

    Although feature selection is a well-developed research area, there is an ongoing need to develop methods to make classifiers more efficient. One important challenge is the lack of a universal feature selection technique that produces similar outcomes with all types of classifiers. This is because all feature selection techniques have individual statistical biases, while classifiers exploit different statistical properties of data for evaluation. In numerous situations this can put researchers into a dilemma as to which feature selection method and classifier to choose from a vast range of options. In this paper, we propose a technique that aggregates the consensus properties of various feature selection methods to develop a more robust solution. The ensemble nature of our technique makes it more robust across various classifiers; in other words, it is stable in achieving similar and ideally higher classification accuracy across a wide variety of classifiers. We quantify this concept of robustness with a measure known as the Robustness Index (RI). We perform an extensive empirical evaluation of our technique on eight data sets with different dimensions, including Arrhythmia, Lung Cancer, Madelon, mfeat-fourier, internet-ads, Leukemia-3c and Embryonal Tumor, and a real-world data set, namely Acute Myeloid Leukemia (AML). We demonstrate not only that our algorithm is more robust, but also that, compared to other techniques, it improves classification accuracy by approximately 3-4% (on data sets with fewer than 500 features) and by more than 5% (on data sets with more than 500 features), across a wide range of classifiers.
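
    An illustrative rank aggregation in the spirit of the paper: three common feature scorers each rank the features, and a Borda-style sum of ranks picks a consensus subset; the particular scorers below are assumptions, not necessarily the authors' choices.

```python
# Borda-count aggregation of feature rankings from three standard scorers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)
Xp = MinMaxScaler().fit_transform(X)            # chi2 requires non-negative input

scores = [chi2(Xp, y)[0], f_classif(X, y)[0],
          mutual_info_classif(X, y, random_state=0)]
# Rank features under each scorer (rank 0 = best), then sum ranks (Borda count).
ranks = np.array([np.argsort(np.argsort(-s)) for s in scores])
borda = ranks.sum(axis=0)
top10 = np.argsort(borda)[:10]                  # consensus subset
print("consensus top-10 features:", sorted(top10.tolist()))
```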

  9. A novel approach for detection and classification of mammographic microcalcifications using wavelet analysis and extreme learning machine.

    PubMed

    Malar, E; Kandaswamy, A; Chakravarthy, D; Giri Dharan, A

    2012-09-01

    The objective of this paper is to demonstrate the effectiveness of wavelet-based tissue texture analysis for microcalcification detection in digitized mammograms using the Extreme Learning Machine (ELM). Microcalcifications are tiny deposits of calcium in the breast tissue which are potential indicators for early detection of breast cancer. The dense nature of the breast tissue and the poor contrast of the mammogram image limit the effectiveness of identifying microcalcifications. Hence, a new approach to discriminate microcalcifications from normal tissue is developed using wavelet features and is compared with different feature vectors extracted using Gray Level Spatial Dependence Matrix (GLSDM) and Gabor filter based techniques. A total of 120 Regions of Interest (ROIs) extracted from 55 mammogram images of the mini-MIAS database, including normal and microcalcification images, are used in the current research. The network is trained with the above-mentioned features, and the results show that ELM produces relatively better classification accuracy (94%) with a significant reduction in training time compared with other classifiers such as the Bayes net classifier, the naive Bayes classifier, and the Support Vector Machine. ELM also avoids problems like local minima, improper learning rate, and overfitting. Copyright © 2012 Elsevier Ltd. All rights reserved.
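
    For concreteness, here is a minimal ELM sketch: a random, untrained hidden layer followed by a one-shot least-squares read-out, which is what yields the short training times reported above; the feature data are synthetic stand-ins for the wavelet features.

```python
# Extreme Learning Machine: random hidden layer + pseudoinverse read-out.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random hidden-layer activations
        Y = np.eye(y.max() + 1)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y       # single least-squares solve
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 16))              # stand-in for wavelet features
y = (X[:, :4].sum(axis=1) > 0).astype(int)      # synthetic two-class labels
print("training accuracy:", (ELM(n_hidden=40).fit(X, y).predict(X) == y).mean())
```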

  10. [Management of disk displacement with condylar fracture].

    PubMed

    Yu, Shi-bin; Li, Zu-bing; Yang, Xue-wen; Zhao, Ji-hong; Dong, Yao-jun

    2003-07-01

    To investigate the clinical features of disk displacement in the course of condylar fracture and to explore techniques for disk reposition and suturing. 32 patients (10 females and 22 males) who had disk displacements with condylar fractures were followed up. Reduction and reposition of the dislocated disks were performed simultaneously with fixation of the fractures. 7 patients underwent intermaxillary fixation with elastic bands for 1 to 2 weeks. The occlusion was satisfactory in all cases but one, owing to loss of ramus height. No TMJ symptoms were found at examination 3 months after operation. Anterior disk displacements occurred most often with high condylar process fractures. Surgical reposition and suturing of the disk play an important role in later TMJ function.

  11. Digital Isostatic Gravity Map of the Nevada Test Site and Vicinity, Nye, Lincoln, and Clark Counties, Nevada, and Inyo County, California

    USGS Publications Warehouse

    Ponce, David A.; Mankinen, E.A.; Davidson, J.G.; Morin, R.L.; Blakely, R.J.

    2000-01-01

    An isostatic gravity map of the Nevada Test Site area was prepared from publicly available gravity data (Ponce, 1997) and from gravity data recently collected by the U.S. Geological Survey (Mankinen and others, 1999; Morin and Blakely, 1999). Gravity data were processed using standard gravity data reduction techniques. Southwest Nevada is characterized by gravity anomalies that reflect the distribution of pre-Cenozoic carbonate rocks, thick sequences of volcanic rocks, and thick alluvial basins. In addition, regional gravity data reveal the presence of linear features that reflect large-scale faults whereas detailed gravity data can indicate the presence of smaller-scale faults.
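
    For context, the usual textbook content of "standard gravity data reduction" (not necessarily the exact USGS processing chain) can be summarised by the simple Bouguer anomaly, from which the isostatic anomaly additionally removes the modelled attraction of the compensating root.

```latex
% Simple Bouguer anomaly (in mGal), with station elevation h in metres,
% reduction density \rho in g/cm^3 (commonly 2.67), and normal gravity
% g_0(\phi) at latitude \phi; the free-air term adds 0.3086 mGal/m and
% the Bouguer slab subtracts 0.04193 \rho mGal/m.
\Delta g_{\mathrm{SB}} \;=\; g_{\mathrm{obs}} \;-\; g_{0}(\phi)
    \;+\; 0.3086\,h \;-\; 0.04193\,\rho\,h
```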

  12. Shell-isolated nanoparticle-enhanced Raman spectroscopy study of the adsorption behaviour of DNA bases on Au(111) electrode surfaces.

    PubMed

    Wen, Bao-Ying; Jin, Xi; Li, Yue; Wang, Ya-Hao; Li, Chao-Yu; Liang, Miao-Miao; Panneerselvam, Rajapandiyan; Xu, Qing-Chi; Wu, De-Yin; Yang, Zhi-Lin; Li, Jian-Feng; Tian, Zhong-Qun

    2016-06-21

    For the first time, we used the electrochemical shell-isolated nanoparticle-enhanced Raman spectroscopy (EC-SHINERS) technique to characterize in situ the adsorption behaviour of four DNA bases (adenine, guanine, thymine, and cytosine) on atomically flat Au(111) electrode surfaces. The spectroscopic results for the various molecules reveal similar features, such as the adsorption-induced reconstruction of the Au(111) surface and the drastic reduction in Raman intensity of the ring breathing modes after the lifting of the reconstruction. As a preliminary study of the photo-induced charge transfer (PICT) mechanism, the in situ spectroscopic results obtained on single crystal surfaces are well illustrated with electrochemical data.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levakhina, Y. M.; Mueller, J.; Buzug, T. M.

    Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets have been acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains the artifact-causing features. The artifact distribution in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate the reduction of out-of-focus artifacts, lower STD (meaning reduction of artifacts), and narrower ASF compared to nonweighted SART reconstruction. This is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone structures. Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical tomosynthesis artifacts produced by high-attenuation features. The proposed algorithm assigns weighting coefficients automatically, and no segmentation or tissue-classification steps are required. The algorithm can be included in various iterative reconstruction algorithms with an additive updating strategy. It can also be extended to the computed tomography case with a complete set of angular data.
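
    A generic additive-update SART sketch with an optional per-(ray, voxel) weighting of the backprojection term, to make the weighting idea concrete; the paper's dissimilarity-based choice of weights is not reproduced here, and uniform weights recover plain SART.

```python
# Weighted SART: the backprojection of each ray's normalised residual into
# each voxel is scaled by a weight; W = 1 gives the classic SART update.
import numpy as np

def weighted_sart(A, b, W, n_iter=200, lam=0.5):
    """A: (n_rays, n_voxels) system matrix, b: measurements, W: weights like A."""
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1) + 1e-12
    col_sums = (W * A).sum(axis=0) + 1e-12
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums          # normalised ray residuals
        x += lam * ((W * A).T @ residual) / col_sums
    return x

rng = np.random.default_rng(5)
n_rays, n_voxels = 300, 100
A = rng.random((n_rays, n_voxels))
x_true = rng.random(n_voxels)
b = A @ x_true                                     # consistent toy measurements
W = np.ones_like(A)                                # uniform weights: plain SART
x = weighted_sart(A, b, W)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```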

  14. Method for indexing and retrieving manufacturing-specific digital imagery based on image content

    DOEpatents

    Ferrell, Regina K.; Karnowski, Thomas P.; Tobin, Jr., Kenneth W.

    2004-06-15

    A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector is extracted from a manufacturing-specific digital image stored in an image database. Each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance, a digital image modality and overall characteristic, a substrate/background characteristic, or an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector is indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree can be retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of the query image. More particularly, retrieval can include two data reductions, the first performed based upon a query vector extracted from a query image. Subsequently, a user can select relevant images resulting from the first data reduction. From the selection, a prototype vector can be calculated, from which a second-level data reduction can be performed. The second-level data reduction results in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.

  15. Robust predictions for an oscillatory bispectrum in Planck 2015 data from transient reductions in the speed of sound of the inflaton

    NASA Astrophysics Data System (ADS)

    Torrado, Jesús; Hu, Bin; Achúcarro, Ana

    2017-10-01

    We update the search for features in the cosmic microwave background (CMB) power spectrum due to transient reductions in the speed of sound, using Planck 2015 CMB temperature and polarization data. We enlarge the parameter space to much higher oscillatory frequencies of the feature, and define a robust prior independent of the ansatz for the reduction, guaranteed to reproduce the assumptions of the theoretical model. This prior exhausts the regime in which features coming from a Gaussian reduction are easily distinguishable from the baseline cosmology. We find a fit to the ℓ ≈ 20-40 minus/plus structure in the Planck TT power spectrum, as well as features spanning higher ℓ's (ℓ ≈ 100-1500). None of these fits is statistically significant, either in terms of their improvement of the likelihood or in terms of the Bayes ratio. For the higher-ℓ ones, their oscillatory frequency (and, to a lesser extent, their amplitude) is tightly constrained, so they can be considered robust, falsifiable predictions for their correlated features in the CMB bispectrum. We compute said correlated features, and assess their signal-to-noise ratio and their correlation with the secondary bispectrum arising from the correlation between the gravitational lensing of the CMB and the integrated Sachs-Wolfe effect. We compare our findings to the shape-agnostic oscillatory template tested in Planck 2015, and we comment on some tantalizing coincidences with some of the traits described in Planck's 2015 bispectrum data.

  16. Use of machine-learning classifiers to predict requests for preoperative acute pain service consultation.

    PubMed

    Tighe, Patrick J; Lucas, Stephen D; Edwards, David A; Boezaart, André P; Aytug, Haldun; Bihorac, Azra

    2012-10-01

      The purpose of this project was to determine whether machine-learning classifiers could predict which patients would require a preoperative acute pain service (APS) consultation.   Retrospective cohort.   University teaching hospital.   The records of 9,860 surgical patients posted between January 1 and June 30, 2010 were reviewed.   Request for APS consultation. A set of machine-learning classifiers was compared according to their ability to classify surgical cases as requiring a request for a preoperative APS consultation. Classifiers were then optimized utilizing ensemble techniques. Computational efficiency was measured with the central processing unit times required for model training. Classifiers were tested using the full feature set, as well as a reduced feature set optimized using a merit-based dimensional reduction strategy.   Machine-learning classifiers correctly predicted preoperative requests for APS consultations in 92.3% (95% confidence interval [CI], 91.8-92.8) of all surgical cases. Bayesian methods yielded the highest area under the receiver operating characteristic curve (0.87, 95% CI 0.84-0.89) and lowest training times (0.0018 seconds, 95% CI, 0.0017-0.0019 for the NaiveBayesUpdateable algorithm). An ensemble of high-performing machine-learning classifiers did not yield a higher area under the curve than its component classifiers. Dimensional reduction decreased the computational requirements for multiple classifiers, but did not adversely affect classification performance.   Using historical data, machine-learning classifiers can predict which surgical cases should prompt a preoperative request for an APS consultation. Dimensional reduction improved computational efficiency and preserved predictive performance. Wiley Periodicals, Inc.
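
    As a loose illustration of this kind of comparison (not the study's actual data or pipeline), the sketch below times the training of a naive Bayes and a logistic regression classifier and reports each model's area under the ROC curve on synthetic stand-in data:

        import time
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB

        # Synthetic stand-in for the surgical-case feature matrix.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 20))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        for clf in (GaussianNB(), LogisticRegression(max_iter=1000)):
            t0 = time.perf_counter()
            clf.fit(X_tr, y_tr)                      # measure training time
            elapsed = time.perf_counter() - t0
            auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
            print(f"{type(clf).__name__}: AUC={auc:.3f}, train={elapsed:.4f}s")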

  17. Posterior dental size reduction in hominids: the Atapuerca evidence.

    PubMed

    Bermúdez de Castro, J M; Nicolas, M E

    1995-04-01

    In order to reassess previous hypotheses concerning dental size reduction of the posterior teeth during Pleistocene human evolution, current fossil dental evidence is examined. This evidence includes the large sample of hominid teeth found in recent excavations (1984-1993) in the Sima de los Huesos Middle Pleistocene cave site of the Sierra de Atapuerca (Burgos, Spain). The lower fourth premolars and molars of the Atapuerca hominids, probably older than 300 Kyr, have dimensions similar to those of modern humans. Further, these hominids share the derived state of other features of the posterior teeth with modern humans, such as a similar relative molar size and frequent absence of the hypoconulid, thus suggesting a possible case of parallelism. We believe that dietary changes allowed size reduction of the posterior teeth during the Middle Pleistocene, and the present evidence suggests that the selective pressures that operated on the size variability of these teeth were less restrictive than what is assumed by previous models of dental reduction. Thus, the causal relationship between tooth size decrease and changes in food-preparation techniques during the Pleistocene should be reconsidered. Moreover, the present evidence indicates that the differential reduction of the molars cannot be explained in terms of restriction of available growth space. The molar crown area measurements of a modern human sample were also investigated. The results of this study, as well as previous similar analyses, suggest that a decrease of the rate of cell proliferation, which affected the later-forming crown regions to a greater extent, may be the biological process responsible for the general and differential dental size reduction that occurred during human evolution.

  18. Comparative evaluation of features and techniques for identifying activity type and estimating energy cost from accelerometer data

    PubMed Central

    Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.

    2016-01-01

    Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use these data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects performing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. The choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choice of features and machine learning technique was found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
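
    A minimal sketch of statistics-based features of the kind compared here, computed per non-overlapping window of a 3-axis accelerometer stream (the sampling rate, window length, and feature list are illustrative assumptions, not the paper's settings):

        import numpy as np

        def window_features(stream, fs=30, win_s=10):
            # stream: (n_samples, 3) accelerometer array; returns one
            # feature row per non-overlapping window.
            n = fs * win_s
            rows = []
            for start in range(0, len(stream) - n + 1, n):
                w = stream[start:start + n]
                row = []
                for axis in range(w.shape[1]):
                    x = w[:, axis]
                    row += [x.mean(), x.std(),
                            np.percentile(x, 10), np.percentile(x, 90),
                            np.abs(np.diff(x)).mean()]  # mean absolute delta
                rows.append(row)
            return np.asarray(rows)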

  19. Metal/Carbon Hybrid Nanostructures Produced from Plasma-Enhanced Chemical Vapor Deposition over Nafion-Supported Electrochemically Deposited Cobalt Nanoparticles

    PubMed Central

    Achour, Amine; Saeed, Khalid; Djouadi, Mohamed Abdou

    2018-01-01

    In this work, we report the development of hybrid nanostructures of metal nanoparticles (NP) and carbon nanostructures with strong potential for catalysis, sensing, and energy applications. First, the etched silicon wafer substrates were passivated for subsequent electrochemical (EC) processing through grafting of nitrophenyl groups using para-nitrobenzene diazonium (PNBT). X-ray photoelectron spectroscopy (XPS) and atomic force microscopy (AFM) studies confirmed the presence of a few grafted layers. Cobalt-based nanoparticles were produced over dip- or spin-coated Nafion films under different EC reduction conditions, namely CoSO4 salt concentration (0.1 M, 1 mM), reduction time (5, 20 s), and indirect or direct EC reduction route. Extensive AFM examination revealed NP formation with different attributes (size, distribution) depending on the electrochemistry conditions. While relatively large NP with >100 nm size and bimodal distribution were obtained after 20 s EC reduction in H3BO3 following Co2+ ion uptake, ultrafine NP (<10 nm) could be produced from EC reduction in CoSO4 and H3BO3 mixed solution, with some tendency to form oxides. Different carbon nanostructures including few-walled or multiwalled carbon nanotubes (CNT) and carbon nanosheets were grown in a C2H2/NH3 plasma using the plasma-enhanced chemical vapor deposition technique. The devised processing routes enable size-controlled synthesis of cobalt nanoparticles and metal/carbon hybrid nanostructures with unique microstructural features. PMID:29702583

  20. Home visit program improves technique survival in peritoneal dialysis.

    PubMed

    Martino, Francesca; Adıbelli, Z; Mason, G; Nayak, A; Ariyanon, W; Rettore, E; Crepaldi, Carlo; Rodighiero, Mariapia; Ronco, Claudio

    2014-01-01

    Peritoneal dialysis (PD) is a home therapy, and technique survival is related to the adherence to the PD prescription at home. The presence of a home visit program could improve PD outcomes. We evaluated its effects on clinical outcome during 1 year of follow-up. This was a case-control study. The case group included all 96 patients who performed PD in our center on January 1, 2013, and who attended a home visit program; the control group included all 92 patients who performed PD on January 1, 2008. The home visit program consisted of several additional visits to reinforce patients' confidence in PD management in their own environment. Outcomes were defined as technique failure, peritonitis episode, and hospitalization. Clinical and dialysis features were evaluated for each patient. The case group was significantly older (p = 0.048), with a lower grade of autonomy (p = 0.033), but a better hemoglobin level (p = 0.02) than the control group. During the observational period, there were 11 episodes of technique failure. We found a significant reduction in the rate of technique failure in the case group (p = 0.004). Furthermore, survival analysis showed a significant extension of PD treatment in the patients supported by the home visit program (52 vs. 48.8 weeks, p = 0.018). We did not find any difference between the two groups in terms of peritonitis and hospitalization rate; however, trends toward a reduction of Gram-positive peritonitis rates as well as of the prevalence and duration of hospitalization related to PD problems were identified in the case group. The retrospective nature of the analysis was a limitation of this study. The home visit program improves technique survival in PD patients and could reduce the rate of Gram-positive peritonitis and hospitalization. Video Journal Club "Cappuccino with Claudio Ronco" at http://www.karger.com/?doi=365168.

  1. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.

  2. Discriminating Induced-Microearthquakes Using New Seismic Features

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Horton, S.

    2016-12-01

    We studied characteristics of induced microearthquakes on the basis of the waveforms recorded on a limited number of surface receivers using machine-learning techniques. Forty features in the time, frequency, and time-frequency domains were measured on each waveform, and several techniques such as correlation-based feature selection, Artificial Neural Networks (ANNs), Logistic Regression (LR) and X-means clustering were used as research tools to explore the relationship between these seismic features and source parameters. The results show that spectral features have the highest correlation to source depth. Two new measurements developed as seismic features for this study, spectral centroids and 2D cross-correlations in the time-frequency domain, performed better than the common seismic measurements. These features can be used by machine learning techniques for efficient automatic classification of low-energy signals recorded at one or more seismic stations. We applied the technique to 440 microearthquakes. Reference: Mousavi, S.M., S.P. Horton, C.A. Langston, B. Samei (2016), Seismic features and automatic discrimination of deep and shallow induced-microearthquakes using neural network and logistic regression, Geophys. J. Int., doi: 10.1093/gji/ggw258.
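
    Of the two new measurements, the spectral centroid is simple to state: the power-weighted mean frequency of the waveform. A minimal sketch (assuming a 1-D waveform sampled at fs Hz):

        import numpy as np

        def spectral_centroid(waveform, fs):
            # Power spectrum via the real FFT, then the power-weighted
            # mean of the frequency axis.
            power = np.abs(np.fft.rfft(waveform)) ** 2
            freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
            return np.sum(freqs * power) / np.sum(power)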

  3. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    NASA Technical Reports Server (NTRS)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and light weight. In addition, this design frees the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when faced with constraints in computational resources.

  4. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    PubMed

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems that are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. First, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on the wavelet transform and the gray level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both classifier hyper-parameters and feature subsets. Furthermore, the obtained results indicate the promising performance of the proposed textural features and, more specifically, those based on the co-occurrence matrix of the wavelet image representation. Copyright © 2015 Elsevier Ltd. All rights reserved.
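
    A sketch of PSO-driven model selection in this spirit, tuning the SVM hyper-parameters C and gamma in log space with cross-validated accuracy as the fitness (a minimal hand-rolled PSO on synthetic data, not the paper's implementation):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        def fitness(p):
            # p = (log10 C, log10 gamma); maximize cross-validated accuracy.
            return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]),
                                   X, y, cv=3).mean()

        rng = np.random.default_rng(0)
        n_particles, n_iter = 10, 15
        pos = rng.uniform(-3, 3, size=(n_particles, 2))  # search in log space
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmax()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -3, 3)
            vals = np.array([fitness(p) for p in pos])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmax()].copy()

        print("best (log10 C, log10 gamma):", gbest, "CV acc:", pbest_val.max())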

  5. 5-D interpolation with wave-front attributes

    NASA Astrophysics Data System (ADS)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes include structural information on subsurface features like the dip and strike of a reflector. The wave-front attributes work on the 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work to address these two problems, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that there are significant advantages for steeply dipping events using the 5-D WABI method when compared to the rank-reduction-based 5-D interpolation technique. Diffraction tails substantially benefit from this improved performance of the partial CRS stacking approach, while the CPU time is comparable to that consumed by the rank-reduction-based method.

  6. A combination of selected mapping and clipping to increase energy efficiency of OFDM systems

    PubMed Central

    Lee, Byung Moo; Rim, You Seung

    2017-01-01

    We propose an energy-efficient combination design for OFDM systems based on the selected mapping (SLM) and clipping peak-to-average power ratio (PAPR) reduction techniques, and show the related energy efficiency (EE) performance analysis. The combination of two different PAPR reduction techniques can provide a significant benefit in increasing EE, because it can take advantage of both techniques. For the combination, we choose the clipping and SLM techniques, since the former is quite simple and effective, and the latter does not cause any signal distortion. We provide the structure and the systematic operating method, and present various analyses to derive the EE gain based on the combined technique. Our analyses show that the combined technique increases the EE by 69% compared to no PAPR reduction, and by 19.34% compared to using the SLM technique alone. PMID:29023591
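
    The combination is easy to prototype: build an OFDM symbol, keep the lowest-PAPR candidate among U random phase rotations (SLM), then clip the envelope. A minimal NumPy sketch with illustrative parameter values:

        import numpy as np

        rng = np.random.default_rng(1)
        N, U = 64, 8  # subcarriers, SLM candidates

        def papr_db(x):
            # Peak-to-average power ratio of a complex baseband signal.
            p = np.abs(x) ** 2
            return 10 * np.log10(p.max() / p.mean())

        # Random QPSK OFDM symbol in the time domain.
        sym = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)
        x = np.fft.ifft(sym)

        # SLM: random phase sequences, keep the lowest-PAPR candidate.
        cands = [np.fft.ifft(sym * np.exp(2j * np.pi * rng.random(N)))
                 for _ in range(U)]
        x_slm = min(cands, key=papr_db)

        # Clipping: limit the envelope to a threshold (introduces distortion).
        A = 1.2 * np.sqrt(np.mean(np.abs(x_slm) ** 2))
        scale = np.minimum(1.0, A / np.maximum(np.abs(x_slm), 1e-12))
        x_clip = x_slm * scale

        print(f"PAPR: original {papr_db(x):.1f} dB, SLM {papr_db(x_slm):.1f} dB, "
              f"SLM+clipping {papr_db(x_clip):.1f} dB")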

  7. Liposuction-assisted four pedicle-based breast reduction (LAFPBR): A new safer technique of breast reduction for elderly patients.

    PubMed

    La Padula, Simone; Hersant, Barbara; Noel, Warren; Meningaud, Jean Paul

    2018-05-01

    As older people increasingly care for their body image and remain active longer, the demand for reduction mammaplasty is increasing in this population. Only a few studies of reduction mammaplasty have specifically focussed on the outcomes in elderly women. We developed a new breast reduction technique: the Liposuction-Assisted Four Pedicle-Based Breast Reduction (LAFPBR) that is especially indicated for elderly patients. The aim of this paper was to describe the LAFPBR technique and to determine whether it could be considered a safer option for elderly patients compared to the superomedial pedicle (SMP) technique. A retrospective study included sixty-two women aged 60 years and over who underwent bilateral breast reduction mammaplasty. Thirty-one patients underwent LAFPBR and 31 patients were operated using the SMP technique. Complications and patient satisfaction in both groups were analysed. Patient satisfaction was measured using a validated questionnaire: the client satisfaction questionnaire 8 (CSQ-8). The LAFPBR technique required less operating time, and avoided significant blood loss. Six minor complications were observed in SMP patients. No LAFPBR women developed a procedure-related complication. Patient satisfaction was high with a mean score of 29.65 in LAFPBR patients and 28.68 in SMP patients. The LAFPBR is an easy procedure that appears safer than SMP and results in a high satisfaction rate in elderly women. Copyright © 2018 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    Automated validation of flight-critical embedded systems is being done at ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operationally applying automated testing techniques has accentuated flight control system features that either help or hinder the application of these techniques. The paper also discusses flight control system features which foster the use of automated testing techniques.

  9. Omitted variable bias in crash reduction factors.

    DOT National Transportation Integrated Search

    2015-09-01

    Transportation planners and traffic engineers are increasingly turning to crash reduction factors to evaluate changes in road : geometric and design features in order to reduce crashes. Crash reduction factors are typically estimated based on segment...

  10. Feature extraction via KPCA for classification of gait patterns.

    PubMed

    Wu, Jianning; Wang, Jue; Liu, Li

    2007-06-01

    Automated recognition of gait pattern change is important in medical diagnostics as well as in the early identification of at-risk gait in the elderly. We evaluated the use of Kernel-based Principal Component Analysis (KPCA) to extract more gait features (i.e., to obtain more significant amounts of information about human movement) and thus to improve the classification of gait patterns. 3D gait data of 24 young and 24 elderly participants were acquired using an OPTOTRAK 3020 motion analysis system during normal walking, and a total of 36 gait spatio-temporal and kinematic variables were extracted from the recorded data. KPCA was used first for nonlinear feature extraction, to then evaluate its effect on a subsequent classification in combination with learning algorithms such as support vector machines (SVMs). Cross-validation test results indicated that the proposed technique could allow spreading the information about the gait's kinematic structure into more nonlinear principal components, thus providing additional discriminatory information for the improvement of gait classification performance. The feature extraction ability of KPCA was affected only slightly by different kernel functions such as the polynomial and radial basis functions. The combination of KPCA and SVM could identify young-elderly gait patterns with 91% accuracy, resulting in a markedly improved performance compared to the combination of PCA and SVM. These results suggest that nonlinear feature extraction by KPCA improves the classification of young-elderly gait patterns, and holds considerable potential for future applications in direct dimensionality reduction and interpretation of multiple gait signals.
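
    A minimal sketch of the KPCA-plus-SVM pipeline with scikit-learn (a toy nonlinear dataset stands in for the 36 gait variables; kernel settings are illustrative):

        from sklearn.datasets import make_moons
        from sklearn.decomposition import KernelPCA
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Nonlinear feature extraction (RBF kernel PCA), then a linear SVM.
        model = make_pipeline(
            KernelPCA(n_components=2, kernel="rbf", gamma=2.0),
            SVC(kernel="linear"))
        model.fit(X_tr, y_tr)
        print("test accuracy:", model.score(X_te, y_te))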

  11. Balloon osteoplasty--a new technique for reduction and stabilisation of impression fractures in the tibial plateau: a cadaver study and first clinical application.

    PubMed

    Ahrens, Philipp; Sandmann, Gunther; Bauer, Jan; König, Benjamin; Martetschläger, Frank; Müller, Dirk; Siebenlist, Sebastian; Kirchhoff, Chlodwig; Neumaier, Markus; Biberthaler, Peter; Stöckle, Ulrich; Freude, Thomas

    2012-09-01

    Fractures of the tibial plateau are among the most severe injuries of the knee joint and lead to advanced gonarthrosis if the reduction does not restore perfect joint congruency. Many different reduction techniques focusing on open surgical procedures have been described in the past. In this context we would like to introduce a novel technique which was first tested in a cadaver setup and has undergone its first successful clinical application. Since kyphoplasty demonstrated effective ways of anatomical correction in spine fractures, we adapted the inflatable instruments and used the balloon technique to reduce depressed fragments of the tibial plateau. The technique enabled us to restore a congruent cartilage surface and bone reduction. In this technique we see a useful new method for reducing depressed fractures of the tibial plateau, with the advantage of the low collateral damage known from minimally invasive procedures.

  12. Identifying the features of an exercise addiction: A Delphi study

    PubMed Central

    Macfarlane, Lucy; Owens, Glynn; Cruz, Borja del Pozo

    2016-01-01

    Objectives There remains limited consensus regarding the definition and conceptual basis of exercise addiction. An understanding of the factors motivating maintenance of addictive exercise behavior is important for appropriately targeting intervention. The aims of this study were twofold: first, to establish consensus on features of an exercise addiction using Delphi methodology and second, to identify whether these features are congruous with a conceptual model of exercise addiction adapted from the Work Craving Model. Methods A three-round Delphi process explored the views of participants regarding the features of an exercise addiction. The participants were selected from sport and exercise relevant domains, including physicians, physiotherapists, coaches, trainers, and athletes. Suggestions meeting consensus were considered with regard to the proposed conceptual model. Results and discussion Sixty-three items reached consensus. There was concordance of opinion that exercising excessively is an addiction, and therefore it was appropriate to consider the suggestions in light of the addiction-based conceptual model. Statements reaching consensus were consistent with all three components of the model: learned (negative perfectionism), behavioral (obsessive–compulsive drive), and hedonic (self-worth compensation and reduction of negative affect and withdrawal). Conclusions Delphi methodology allowed consensus to be reached regarding the features of an exercise addiction, and these features were consistent with our hypothesized conceptual model of exercise addiction. This study is the first to have applied Delphi methodology to the exercise addiction field, and therefore introduces a novel approach to exercise addiction research that can be used as a template to stimulate future examination using this technique. PMID:27554504

  13. Diagnostic features of Alzheimer's disease extracted from PET sinograms

    NASA Astrophysics Data System (ADS)

    Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.

    2002-01-01

    Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task, due to the poor signal to noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results, when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with sensitivity of 94% and specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al (1992), using regional metabolic activity.
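
    The Radon sinogram is the special case of the trace transform whose line functional is the integral, so a triple-feature-style scalar can be sketched with scikit-image: collapse the sinogram's distance axis with a "diametric" functional and its angle axis with a "circus" functional (the functional choices here are illustrative, not those of the paper):

        import numpy as np
        from skimage.transform import radon

        def triple_feature(image, diametric=np.std, circus=np.median):
            # image: 2-D array, roughly square, with near-zero values
            # outside the inscribed circle (radon's default assumption).
            theta = np.linspace(0.0, 180.0, 90, endpoint=False)
            sino = radon(image, theta=theta)     # rows: distance, cols: angle
            per_angle = diametric(sino, axis=0)  # one value per angle
            return circus(per_angle)             # collapse the angle axis

    Because an image rotation approximately permutes the angle axis, permutation-invariant circus functionals (median, max, mean) yield rotation-tolerant features, while order-sensitive ones give rotation-sensitive features of the kind this study also exploits.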

  14. An enhanced feature set for pattern recognition based contrast enhancement of contact-less captured latent fingerprints in digitized crime scene forensics

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana; Vielhauer, Claus

    2014-02-01

    In crime scene forensics latent fingerprints are found on various substrates. Nowadays primarily physical or chemical preprocessing techniques are applied to enhance the visibility of the fingerprint trace. In order to avoid altering the trace, it has been shown that contact-less sensors offer a non-destructive acquisition approach. Here, the exploitation of fingerprint or substrate properties and the utilization of signal processing techniques are an essential requirement to enhance the fingerprint visibility. However, the optimal sensor setup in particular is often substrate-dependent. An enhanced generic pattern recognition based contrast enhancement approach for scans of a chromatic white light sensor is introduced in Hildebrandt et al.,1 using statistical, structural and Benford's law2 features for blocks of 50 microns. This approach achieves very good results for latent fingerprints on cooperative, non-textured, smooth substrates. However, on textured and structured substrates the error rates are very high, making the approach unsuitable for forensic use cases. We propose extending the feature set with semantic features derived from known Gabor filter based exemplar fingerprint enhancement techniques, evaluated over an Epsilon-neighborhood of each block, in order to achieve improved accuracy (called fingerprint ridge orientation semantics). Furthermore, we use rotation-invariant Hu moments as an extension of the structural features and two additional preprocessing methods (separate X- and Y-direction Sobel operators). This results in a 408-dimensional feature space. In our experiments we investigate and report the recognition accuracy for eight substrates, each with ten latent fingerprints: white furniture surface, veneered plywood, brushed stainless steel, aluminum foil, "Golden-Oak" veneer, non-metallic matte car body finish, metallic car body finish and blued metal. In comparison to Hildebrandt et al.,1 our evaluation shows a significant reduction of the error rates by 15.8 percentage points on brushed stainless steel using the same classifier. This also allows for a successful biometric matching of 3 of the 8 latent fingerprint samples with the corresponding exemplar fingerprints on this particular substrate. For the analysis of contrast enhancement and classification results, we suggest using known Visual Quality Indexes (VQIs)3 as contrast enhancement quality indicators, and we discuss our first preliminary results using the exemplarily chosen VQI Edge Similarity Score (ESS),4 which show a tendency that larger image differences between a substrate containing a fingerprint and a blank substrate correlate with a higher recognition accuracy between a latent fingerprint and an exemplar fingerprint. These first preliminary results support further research into VQIs as contrast enhancement quality indicators for a given feature space.

  15. Analysis of Proportional Integral and Optimized Proportional Integral Controllers for Resistance Spot Welding System (RSWS) - A Performance Perspective

    NASA Astrophysics Data System (ADS)

    Rama Subbanna, S.; Suryakalavathi, M., Dr.

    2017-08-01

    This paper presents a performance analysis of different control techniques for spike reduction applied to a medium-frequency transformer-based DC (MFDC) spot welding system. Spike reduction is an important factor to be considered where spot welding systems are concerned. During normal RSWS operation the welding transformer's magnetic core can become saturated due to the unbalanced resistances of the two transformer secondary windings and the different characteristics of the output rectifier diodes, which causes current spikes and over-current protection switch-off of the entire system. The current control technique is a piecewise linear control technique, inspired by DC-DC converter control algorithms, that yields a novel spike reduction method for MFDC spot welding applications. The two controllers used for the spike reduction portion of the overall application are the traditional PI controller and an optimized PI controller. Care is taken that the current control technique maintains reduced spikes in the primary current of the transformer while it reduces the Total Harmonic Distortion (THD). The performance parameters involved in the spike reduction technique are the THD and the percentage of current spike reduction for both techniques. Matlab/Simulink-based simulation is carried out for the MFDC RSWS with KW, results are tabulated for the PI and optimized PI controllers, and a trade-off analysis is carried out.

  16. [Balloon osteoplasty as reduction technique in the treatment of tibial head fractures].

    PubMed

    Freude, T; Kraus, T M; Sandmann, G H

    2015-10-01

    Tibial plateau fractures requiring surgery are severe injuries of the lower extremities. Depending on the fracture pattern, the age of the patient, the range of activity and the bone quality, there is broad variation in adequate treatment. This article reports on an innovative treatment concept to address split-depression fractures (Schatzker type II) and depression fractures (Schatzker type III) of the tibial head using the balloon osteoplasty technique for fracture reduction, and illustrates the surgical procedure. Using the balloon technique, a precise and safe fracture reduction can be achieved. This internal osteoplasty combines a minimally invasive percutaneous approach with a gentle raising of the depressed area and the associated protection of the regenerative layer (stratum regenerativum) below the articular cartilage surface. Fracture reduction by use of a tamper results in high peak forces over small areas, whereas with the balloon the forces are distributed over a larger area, causing less secondary stress to the cartilage tissue. This less invasive approach might help to achieve a better long-term outcome with decreased secondary osteoarthritis due to the precise and chondroprotective reduction technique.

  17. Enhancing membrane protein subcellular localization prediction by parallel fusion of multi-view features.

    PubMed

    Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu

    2012-12-01

    Membrane proteins are encoded by ~30% of the genome and play important roles in living organisms. Previous studies have revealed that membrane proteins' structures and functions show obvious cell organelle-specific properties. Hence, it is highly desirable to predict a membrane protein's subcellular location from the primary sequence, considering the extreme difficulty of membrane protein wet-lab studies. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed based on statistical machine learning algorithms with serial combination of multi-view features, i.e., different feature vectors are simply concatenated to form a super feature vector. However, such simple combination of features simultaneously increases information redundancy, which can in turn deteriorate the final prediction accuracy. This explains why prediction success rates in the serial super space were often found to be even lower than those in a single-view space. The purpose of this paper is to investigate a proper method for fusing multiple multi-view protein sequential features for subcellular location prediction. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane protein multi-view attributes that represents protein samples in complex spaces. We also propose generalized principal component analysis (GPCA) for feature reduction in the complex geometry. All the experimental results through different machine learning algorithms on benchmark membrane protein subcellular localization datasets demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems. The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
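
    A minimal sketch of the parallel fusion idea: rather than serially concatenating two feature views, pad them to a common length and combine them into one complex vector, so a distance in the fused space mixes both views in every dimension (a simplification of the paper's complex-space framework):

        import numpy as np

        def parallel_fuse(f1, f2):
            # Zero-pad the shorter view, then fuse as f1 + i*f2; the fused
            # dimension is max(len(f1), len(f2)) instead of the serial
            # len(f1) + len(f2).
            n = max(len(f1), len(f2))
            a = np.zeros(n); a[:len(f1)] = f1
            b = np.zeros(n); b[:len(f2)] = f2
            return a + 1j * b

        z1 = parallel_fuse(np.array([0.2, 0.7]), np.array([0.1, 0.5, 0.9]))
        z2 = parallel_fuse(np.array([0.3, 0.6]), np.array([0.2, 0.4, 0.8]))
        print(np.linalg.norm(z1 - z2))  # distance in the fused complex space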

  18. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    PubMed

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
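
    A minimal sketch of gene masking via a binary-encoded genetic algorithm, with cross-validated classifier accuracy as the fitness of each mask (the classifier and GA settings are illustrative, not the paper's):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        X, y = make_classification(n_samples=200, n_features=40,
                                   n_informative=5, random_state=0)
        rng = np.random.default_rng(0)

        def fitness(mask):
            # Cross-validated accuracy using only the unmasked features.
            if mask.sum() == 0:
                return 0.0
            return cross_val_score(GaussianNB(),
                                   X[:, mask.astype(bool)], y, cv=3).mean()

        pop = rng.integers(0, 2, size=(20, X.shape[1]))  # binary masks
        for _ in range(15):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
            kids = []
            for _ in range(10):
                p1, p2 = parents[rng.integers(0, 10, size=2)]
                cut = rng.integers(1, X.shape[1])
                child = np.concatenate([p1[:cut], p2[cut:]])  # 1-point crossover
                flip = rng.random(X.shape[1]) < 0.02          # bit-flip mutation
                kids.append(np.where(flip, 1 - child, child))
            pop = np.vstack([parents, kids])

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("kept", int(best.sum()), "of", X.shape[1], "features")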

  19. Improved multi-stage neonatal seizure detection using a heuristic classifier and a data-driven post-processor.

    PubMed

    Ansari, A H; Cherian, P J; Dereymaeker, A; Matic, V; Jansen, K; De Wispelaere, L; Dielman, C; Vervisch, J; Swarte, R M; Govaert, P; Naulaers, G; De Vos, M; Van Huffel, S

    2016-09-01

      After identifying the most seizure-relevant characteristics by a previously developed heuristic classifier, a data-driven post-processor using a novel set of features is applied to improve the performance. The main characteristics of the outputs of the heuristic algorithm are extracted by five sets of features including synchronization, evolution, retention, segment, and signal features. Then, a support vector machine and a decision-making layer remove the falsely detected segments. Four datasets including 71 neonates (1023 h, 3493 seizures), recorded in two different university hospitals, are used to train and test the algorithm without removing the dubious seizures. The heuristic method resulted in a false alarm rate of 3.81 per hour and a good detection rate of 88% on the entire test databases. The post-processor effectively reduces the false alarm rate by 34% while the good detection rate decreases by 2%. This post-processing technique improves the performance of the heuristic algorithm. The structure of this post-processor is generic, improves our understanding of the core visually determined EEG features of neonatal seizures and is applicable to other neonatal seizure detectors. The post-processor significantly decreases the false alarm rate at the expense of a small reduction of the good detection rate. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  20. Technique for systematic bone reduction for fixed implant-supported prosthesis in the edentulous maxilla.

    PubMed

    Bidra, Avinash S

    2015-06-01

    Bone reduction for maxillary fixed implant-supported prosthodontic treatment is often necessary to either gain prosthetic space or to conceal the prosthesis-tissue junction in patients with excessive gingival display (gummy smile). Inadequate bone reduction is often a cause of prosthetic failure due to material fractures, poor esthetics, or inability to perform oral hygiene procedures due to unfavorable ridge lap prosthetic contours. Various instruments and techniques are available for bone reduction. It would be helpful to have an accurate and efficient method for bone reduction at the time of surgery and subsequently create a smooth bony platform. This article presents a straightforward technique for systematic bone reduction by transferring the patient's maximum smile line, recorded clinically, to a clear radiographic smile guide for treatment planning using cone beam computed tomography (CBCT). The patient's smile line and the amount of required bone reduction are transferred clinically by marking bone with a sterile stationery graphite wood pencil at the time of surgery. This technique can help clinicians to accurately achieve the desired bone reduction during surgery, and provide confidence that the diagnostic and treatment planning goals have been achieved. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  1. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection are important tasks for robot navigation. Many feature-matching techniques have been proposed earlier; this paper proposes an improved feature matching between successive video frames using a neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned a distance based on the Kinect technology that can be used by the robot to determine the path of navigation, along with obstacle detection applications.
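
    The paper speeds up matching with a neural network; the sketch below instead uses plain brute-force Hamming matching of ORB descriptors (OpenCV) simply to show how matched keypoints can be assigned Kinect depths (grayscale frames and a depth map aligned to the first frame are assumed):

        import cv2

        def match_with_depth(frame1, frame2, depth1, n_features=500):
            # frame1/frame2: grayscale uint8 images; depth1: depth map
            # aligned with frame1 (e.g. from the Kinect sensor).
            orb = cv2.ORB_create(nfeatures=n_features)
            kp1, des1 = orb.detectAndCompute(frame1, None)
            kp2, des2 = orb.detectAndCompute(frame2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
            out = []
            for m in matches:
                x, y = map(int, kp1[m.queryIdx].pt)
                # (point in frame1, matched point in frame2, depth at point)
                out.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt, depth1[y, x]))
            return out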

  2. Pulse oximeter plethysmographic waveform changes in awake, spontaneously breathing, hypovolemic volunteers.

    PubMed

    McGrath, Susan P; Ryan, Kathy L; Wendelken, Suzanne M; Rickards, Caroline A; Convertino, Victor A

    2011-02-01

    The primary objective of this study was to determine whether alterations in the pulse oximeter waveform characteristics would track progressive reductions in central blood volume. We also assessed whether changes in the pulse oximeter waveform provide an indication of blood loss in the hemorrhaging patient before changes in standard vital signs. Pulse oximeter data from finger, forehead, and ear pulse oximeter sensors were collected from 18 healthy subjects undergoing progressive reduction in central blood volume induced by lower body negative pressure (LBNP). Stroke volume measurements were simultaneously recorded using impedance cardiography. The study was conducted in a research laboratory setting where no interventions were performed. Pulse amplitude, width, and area under the curve (AUC) features were calculated from each pulse wave recording. Amalgamated correlation coefficients were calculated to determine the relationship between the changes in pulse oximeter waveform features and changes in stroke volume with LBNP. For pulse oximeter sensors on the ear and forehead, reductions in pulse amplitude, width, and area were strongly correlated with progressive reductions in stroke volume during LBNP (R(2) ≥ 0.59 for all features). Changes in pulse oximeter waveform features were observed before profound decreases in arterial blood pressure. The best correlations between pulse features and stroke volume were obtained from the forehead sensor area (R(2) = 0.97). Pulse oximeter waveform features returned to baseline levels when central blood volume was restored. These results support the use of pulse oximeter waveform analysis as a potential diagnostic tool to detect clinically significant hypovolemia before the onset of cardiovascular decompensation in spontaneously breathing patients.

  3. Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection.

    PubMed

    Caiazzo, Fabrizia; Caggiano, Alessandra

    2018-04-20

    Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio, compared to other traditional mechanical joining techniques. In this research work, an investigation on laser welding of Ti-6Al-4V alloy plates is carried out through an experimental testing campaign, under different process conditions, in order to perform a characterization of the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of the suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data.

  4. Recent Advances in Characterization of Lignin Polymer by Solution-State Nuclear Magnetic Resonance (NMR) Methodology

    PubMed Central

    Wen, Jia-Long; Sun, Shao-Long; Xue, Bai-Liang; Sun, Run-Cang

    2013-01-01

    The demand for efficient utilization of biomass calls for detailed analysis of the fundamental chemical structures of biomass, especially the complex structures of lignin polymers, which have long been recognized for their negative impact on biorefinery. Traditionally, attempts have been made to reveal the complicated and heterogeneous structure of lignin by a series of chemical analyses, such as thioacidolysis (TA), nitrobenzene oxidation (NBO), and derivatization followed by reductive cleavage (DFRC). Recent advances in nuclear magnetic resonance (NMR) technology have undoubtedly made solution-state NMR the most widely used technique in the structural characterization of lignin, due to its versatility in illustrating structural features and structural transformations of lignin polymers. As one of the most promising diagnostic tools, NMR provides unambiguous evidence for specific structures as well as quantitative structural information. The recent advances in two-dimensional solution-state NMR techniques for structural analysis of lignin in isolated and whole cell wall states (in situ), as well as their applications, are reviewed. PMID:28809313

  5. Comparison among different retrofitting strategies for the vulnerability reduction of masonry bell towers

    NASA Astrophysics Data System (ADS)

    Milani, Gabriele; Shehu, Rafael; Valente, Marco

    2017-11-01

    This paper investigates the effectiveness of reducing the seismic vulnerability of masonry towers by means of innovative and traditional strengthening techniques. The strategy followed for providing optimal retrofitting of masonry towers exposed to seismic risk relies on preventing the activation of failure mechanisms. These vulnerable mechanisms are pre-assigned failure patterns based on the crack patterns observed during past seismic events. An upper-bound limit analysis strategy is found suitable for application to simplified tower models, both in their present state and in the proposed retrofitted configurations. Taking into consideration the variability of geometrical features and the uncertainty of the strengthening techniques, Monte Carlo simulations are implemented into the limit analysis. In this framework, a wide range of idealized cases is covered by the conducted analyses. The retrofitting strategies aim to increase the shear strength and the overturning load-carrying capacity in order to reduce vulnerability. This methodology makes it possible to use different materials that fulfill the structural implementability requirements.

  6. A series connection architecture for large-area organic photovoltaic modules with a 7.5% module efficiency

    PubMed Central

    Hong, Soonil; Kang, Hongkyu; Kim, Geunjin; Lee, Seongyu; Kim, Seok; Lee, Jong-Hoon; Lee, Jinho; Yi, Minjin; Kim, Junghwan; Back, Hyungcheol; Kim, Jae-Ryoung; Lee, Kwanghee

    2016-01-01

    The fabrication of organic photovoltaic modules via printing techniques has been the greatest challenge for their commercial manufacture. Current module architecture, which is based on a monolithic geometry consisting of serially interconnecting stripe-patterned subcells with finite widths, requires highly sophisticated patterning processes that significantly increase the complexity of printing production lines and cause serious reductions in module efficiency due to so-called aperture loss in series connection regions. Herein we demonstrate an innovative module structure that can simultaneously reduce both patterning processes and aperture loss. By using a charge recombination feature that occurs at contacts between electron- and hole-transport layers, we devise a series connection method that facilitates module fabrication without patterning the charge transport layers. With the successive deposition of component layers using slot-die and doctor-blade printing techniques, we achieve a high module efficiency reaching 7.5% with area of 4.15 cm2. PMID:26728507

  7. Characterization and analysis of surface notches on Ti-alloy plates fabricated by additive manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Chan, Kwai S.

    2015-12-01

    Rectangular plates of Ti-6Al-4V with extra low interstitial (ELI) were fabricated by layer-by-layer deposition techniques that included electron beam melting (EBM) and laser beam melting (LBM). The surface conditions of these plates were characterized using x-ray micro-computed tomography. The depth and radius of surface notch-like features on the LBM and EBM plates were measured from sectional images of individual virtual slices of the rectangular plates. The stress concentration factors of individual surface notches were computed and analyzed statistically to determine the appropriate distributions for the notch depth, notch radius, and stress concentration factor. These results were correlated with the fatigue life of the Ti-6Al-4V ELI alloys from an earlier investigation. A surface notch analysis was performed to assess the debit in the fatigue strength due to the surface notches. The assessment revealed that the fatigue lives of the additively manufactured plates with rough surface topographies and notch-like features are dominated by the fatigue crack growth of large cracks for both the LBM and EBM materials. The fatigue strength reduction due to the surface notches can be as large as 60%-75%. It is concluded that for better fatigue performance, the surface notches on EBM and LBM materials need to be removed by machining and the surface roughness be improved to a surface finish of about 1 μm.

  8. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    PubMed

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

    The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.

  9. Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Ferris, A. T. Judy

    1999-01-01

    This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.

  11. Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques.

    PubMed

    Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M

    2016-05-01

    The different discrete transform techniques such as discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and the MFCC technique. A linear support vector machine has been used as the classifier in this article. Experimental results show that the proposed CAD system using the MFCC technique for AD recognition greatly improves system performance with a small number of significant extracted features, as compared with the CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transform techniques. © The Author(s) 2015.
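
    As a hedged sketch of one such pipeline (not the authors' exact system), the snippet below extracts low-frequency 2-D DCT coefficients as features and feeds them to a linear SVM; the images, labels, and the 8 x 8 coefficient block are illustrative assumptions.

        import numpy as np
        from scipy.fft import dctn
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        def dct_features(img, block=8):
            coeffs = dctn(img, norm="ortho")       # 2-D discrete cosine transform
            return coeffs[:block, :block].ravel()  # keep low-frequency coefficients

        rng = np.random.default_rng(1)
        images = rng.random((60, 64, 64))          # placeholder image slices
        labels = rng.integers(0, 2, size=60)       # placeholder AD / control labels

        X = np.stack([dct_features(im) for im in images])
        print(cross_val_score(LinearSVC(dual=False), X, labels, cv=5).mean())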

  12. Classification of tumor based on magnetic resonance (MR) brain images using wavelet energy feature and neuro-fuzzy model

    NASA Astrophysics Data System (ADS)

    Damayanti, A.; Werdiningsih, I.

    2018-03-01

    The brain is the organ that coordinates all the activities that occur in our bodies, and small abnormalities in the brain can affect body activity. A brain tumor is a mass formed as a result of abnormal and uncontrolled cell growth in the brain. MRI is a non-invasive medical test that helps doctors diagnose and treat medical conditions, and classification of brain tumors supports correct decisions and appropriate treatment. In this study, classification was performed to determine the type of brain disease, namely Alzheimer's, glioma, carcinoma, and normal, using wavelet energy features and ANFIS. The stages in the classification of MR brain images are feature extraction, feature reduction, and classification. The result of feature extraction is an approximation vector for each wavelet decomposition level. Feature reduction then compresses these vectors using their energy coefficients; with an energy coefficient of 100 per feature, the reduced feature vector is 1 x 52 pixels. This vector is the input to classification using ANFIS with Fuzzy C-Means and FLVQ clustering and LM back-propagation. The recognition success rate for MR brain images using ANFIS-FLVQ, ANFIS, and LM back-propagation was 100%.
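
    The wavelet-energy step can be sketched compactly, assuming PyWavelets; the wavelet family, decomposition level, and input image are illustrative, and the paper's exact 1 x 52 feature layout is not reproduced.

        import numpy as np
        import pywt

        def wavelet_energy(img, wavelet="db4", level=3):
            """One energy value per sub-band of a 2-D wavelet decomposition."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            feats = [np.sum(coeffs[0] ** 2)]            # approximation energy
            for detail in coeffs[1:]:                   # (cH, cV, cD) at each level
                feats.extend(np.sum(c ** 2) for c in detail)
            return np.asarray(feats)

        img = np.random.rand(256, 256)                  # placeholder MR slice
        print(wavelet_energy(img))                      # 1 + 3*level energy features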

  13. Kruskal-Wallis-based computationally efficient feature selection for face recognition.

    PubMed

    Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz

    2014-01-01

    Face recognition has attained great importance in today's technological world, and face recognition applications are increasingly widespread. Most existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the extracted features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient Kruskal-Wallis-based algorithm is used to select the more discriminative face features. The selected features are then passed to the classification step, in which different classifiers are ensembled to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images, and the results are compared with existing techniques.
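
    A minimal sketch of Kruskal-Wallis feature ranking under assumed data shapes: each extracted feature is scored by how strongly it separates the identity classes, and only the top-k features are retained for the classifier ensemble.

        import numpy as np
        from scipy.stats import kruskal

        def kruskal_select(X, y, k=50):
            scores = []
            for j in range(X.shape[1]):
                groups = [X[y == c, j] for c in np.unique(y)]
                scores.append(kruskal(*groups).statistic)
            top = np.argsort(scores)[::-1][:k]          # indices of top-k features
            return top, np.asarray(scores)

        rng = np.random.default_rng(2)
        X = rng.random((120, 400))              # 120 face images, 400 raw features
        y = rng.integers(0, 6, size=120)        # 6 identities, placeholder labels
        top, scores = kruskal_select(X, y, k=50)
        X_reduced = X[:, top]                   # input to the classifier ensemble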

  14. Half-Fan-Based Intensity-Weighted Region-of-Interest Imaging for Low-Dose Cone-Beam CT in Image-Guided Radiation Therapy.

    PubMed

    Yoo, Boyeol; Son, Kihong; Pua, Rizza; Kim, Jinsung; Solodov, Alexander; Cho, Seungryong

    2016-10-01

    With the increased use of computed tomography (CT) in clinics, dose reduction is the most important feature people seek when considering new CT techniques or applications. We developed an intensity-weighted region-of-interest (IWROI) imaging method in an exact half-fan geometry to reduce the imaging radiation dose to patients in cone-beam CT (CBCT) for image-guided radiation therapy (IGRT). While dose reduction is highly desirable, preserving the high-quality images of the ROI is also important for target localization in IGRT. An intensity-weighting (IW) filter made of copper was mounted in place of a bowtie filter on the X-ray tube unit of an on-board imager (OBI) system such that the filter can substantially reduce radiation exposure to the outer ROI. In addition to mounting the IW filter, the lead-blade collimation of the OBI was adjusted to produce an exact half-fan scanning geometry for a further reduction of the radiation dose. The chord-based rebinned backprojection-filtration (BPF) algorithm in circular CBCT was implemented for image reconstruction, and a humanoid pelvis phantom was used for the IWROI imaging experiment. The IWROI image of the phantom was successfully reconstructed after beam-quality correction, and it was registered to the reference image within an acceptable level of tolerance. Dosimetric measurements revealed that the dose is reduced by approximately 61% in the inner ROI and by 73% in the outer ROI compared to the conventional bowtie filter-based half-fan scan. The IWROI method substantially reduces the imaging radiation dose and provides reconstructed images with an acceptable level of quality for patient setup and target localization. The proposed half-fan-based IWROI imaging technique can add a valuable option to CBCT in IGRT applications.

  15. Partial Discharge Spectral Characterization in HF, VHF and UHF Bands Using Particle Swarm Optimization.

    PubMed

    Robles, Guillermo; Fresno, José Manuel; Martínez-Tarifa, Juan Manuel; Ardila-Rey, Jorge Alfredo; Parrado-Hernández, Emilio

    2018-03-01

    The measurement of partial discharge (PD) signals in the radio frequency (RF) range has gained popularity among utilities and specialized monitoring companies in recent years. Unfortunately, on most occasions the data are hidden by noise and coupled interference that hinder their interpretation and render them useless, especially in acquisition systems in the ultra high frequency (UHF) band, where the signals of interest are weak. This paper is focused on a method that uses a selective spectral signal characterization to characterize each signal (a type of partial discharge, or interference/noise) with the power contained in the most representative frequency bands. The technique can be considered as a dimensionality reduction problem where all the energy information contained in the frequency components is condensed in a reduced number of UHF or high frequency (HF) and very high frequency (VHF) bands. In general, dimensionality reduction methods make the interpretation of results a difficult task because the inherent physical nature of the signal is lost in the process. The proposed selective spectral characterization is a preprocessing tool that facilitates further main processing. The starting point is a clustering of signals that could form the core of a PD monitoring system. Therefore, the dimensionality reduction technique should discover the best frequency bands to enhance the affinity between signals in the same cluster and the differences between signals in different clusters. This is done by maximizing the minimum Mahalanobis distance between clusters using particle swarm optimization (PSO). The tool is tested with three sets of experimental signals to demonstrate its capabilities in separating noise and PDs with low signal-to-noise ratio and separating different types of partial discharges measured in the UHF and HF/VHF bands.
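
    The band-selection idea can be sketched as follows, with all data, cluster labels, and PSO constants being illustrative assumptions: a small particle swarm searches for a subset of band powers that maximizes the minimum pairwise Mahalanobis distance between cluster means.

        import numpy as np

        def min_mahalanobis(X, y, mask):
            """Smallest pairwise Mahalanobis distance between class means,
            computed in the subspace of the selected bands."""
            sel = mask.astype(bool)
            if sel.sum() < 2:
                return -np.inf
            Xs = X[:, sel]
            cov_inv = np.linalg.pinv(np.cov(Xs, rowvar=False))
            means = [Xs[y == c].mean(axis=0) for c in np.unique(y)]
            dists = [np.sqrt((m1 - m2) @ cov_inv @ (m1 - m2))
                     for i, m1 in enumerate(means) for m2 in means[i + 1:]]
            return min(dists)

        def pso_band_select(X, y, n_particles=20, iters=50, rng=None):
            rng = rng or np.random.default_rng(0)
            d = X.shape[1]
            pos = rng.random((n_particles, d))      # continuous positions in [0, 1]
            vel = np.zeros_like(pos)
            fit = lambda p: min_mahalanobis(X, y, p > 0.5)
            pbest, pbest_f = pos.copy(), np.array([fit(p) for p in pos])
            gbest = pbest[np.argmax(pbest_f)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, d))
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, 0, 1)
                f = np.array([fit(p) for p in pos])
                improved = f > pbest_f
                pbest[improved], pbest_f[improved] = pos[improved], f[improved]
                gbest = pbest[np.argmax(pbest_f)].copy()
            return gbest > 0.5                      # selected frequency bands

        rng = np.random.default_rng(3)
        X = rng.random((90, 16))                 # 90 signals x 16 band powers
        y = rng.integers(0, 3, size=90)          # 3 clusters: PD types / noise
        print(pso_band_select(X, y).nonzero()[0])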

  16. Mapping accuracy via spectrally and structurally based filtering techniques: comparisons through visual observations

    NASA Astrophysics Data System (ADS)

    Chockalingam, Letchumanan

    2005-01-01

    LANDSAT data of the Gunung Ledang region of Malaysia are considered for mapping certain hydrogeological features. To map these significant features, image-processing tools such as contrast enhancement and edge detection techniques are employed. The advantages of these techniques over other methods are evaluated from the point of view of their validity in properly isolating features of hydrogeological interest. As these techniques exploit the spectral aspects of the images, they have several limitations with respect to the objectives. To address these limitations, a morphological transformation, which considers the structural rather than the spectral aspects of the image, is applied, providing comparisons between the results derived from spectrally based and structurally based filtering techniques.

  17. Optimal Dynamic Sub-Threshold Technique for Extreme Low Power Consumption for VLSI

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2012-01-01

    For miniaturization of electronics systems, power consumption plays a key role in the realm of constraints. Considering the very large scale integration (VLSI) design aspect, as transistor feature size is decreased to 50 nm and below, there is a sizable increase in the number of transistors as more functional building blocks are embedded in the same chip. However, the consequent increase in power consumption (dynamic and leakage) will serve as a key constraint to inhibit the advantages of transistor feature size reduction. Power consumption can be reduced by minimizing the voltage supply (for dynamic power consumption) and/or increasing threshold voltage (V(sub th), for reducing leakage power). When the feature size of the transistor is reduced, supply voltage (V(sub dd)) and threshold voltage (V(sub th)) are also reduced accordingly; then, the leakage current becomes a bigger factor of the total power consumption. To maintain low power consumption, operation of electronics at sub-threshold levels can be a potentially strong contender; however, there are two obstacles to be faced: more leakage current per transistor causes more leakage power consumption, and response time is slow when the transistor operates in the weak inversion region. To enable low power consumption and yet obtain high performance, the CMOS (complementary metal oxide semiconductor) transistor as a basic element is viewed and controlled as a four-terminal device: source, drain, gate, and body, as differentiated from the traditional approach with three terminals, i.e., source and body, drain, and gate. This technique features multiple voltage sources to supply the dynamic control: it enables a low threshold voltage when the channel (N or P) is active, for enhanced response speed, and a high threshold voltage when the channel is inactive, to reduce the leakage current for low-leakage power consumption.

  18. Flexible conformable hydrophobized surfaces for turbulent flow drag reduction

    NASA Astrophysics Data System (ADS)

    Brennan, Joseph C.; Geraldi, Nicasio R.; Morris, Robert H.; Fairhurst, David J.; McHale, Glen; Newton, Michael I.

    2015-05-01

    In recent years extensive work has been focused on using superhydrophobic surfaces for drag reduction applications. Superhydrophobic surfaces retain a gas layer, called a plastron, when submerged underwater in the Cassie-Baxter state, with water in contact with the tops of surface roughness features. In this state the plastron allows slip to occur across the surface, which results in a drag reduction. In this work we report flexible and relatively large area superhydrophobic surfaces produced using two different methods: large roughness features were created by electrodeposition on copper meshes, and small roughness features were created by embedding carbon nanoparticles (soot) into polydimethylsiloxane (PDMS). Both samples were made into cylinders with a diameter under 12 mm. To characterize the samples, scanning electron microscope (SEM) images and confocal microscope images were taken. The confocal microscope images were taken with each sample submerged in water to show the extent of the plastron. The hydrophobized electrodeposited copper mesh cylinders showed drag reductions of up to 32% when comparing the superhydrophobic state with a wetted out state. The soot covered cylinders achieved a 30% drag reduction when comparing the superhydrophobic state to a plain cylinder. These results were obtained for turbulent flows with Reynolds numbers 10,000 to 32,500.

  19. Automated simultaneous multiple feature classification of MTI data

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.

    2002-08-01

    Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.

  20. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis.

    PubMed

    You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing

    2013-01-01

    Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although large amounts of PPI data for different species have been generated by high-throughput experimental techniques, current PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and further, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at the precision of 87.59%. Extensive experiments are performed to compare our method with the state-of-the-art support vector machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. Besides, PCA-EELM performs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs with excellent performance in less time.
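
    A hedged sketch of the PCA + ensemble extreme learning machine idea: PCA compacts the sequence-derived feature vectors, several single-hidden-layer ELMs are trained with different random input weights, and their predictions are combined by majority vote. Dimensions and data are placeholders, and the ELM below is a generic textbook variant rather than the authors' exact implementation.

        import numpy as np
        from sklearn.decomposition import PCA

        class ELM:
            def __init__(self, n_hidden=200, rng=None):
                self.n_hidden = n_hidden
                self.rng = rng or np.random.default_rng()

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)          # random hidden layer
                T = np.eye(2)[y]                          # one-hot targets
                self.beta = np.linalg.pinv(H) @ T         # least-squares output weights
                return self

            def predict(self, X):
                return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

        rng = np.random.default_rng(4)
        X = rng.random((400, 600))                 # protein-pair feature vectors
        y = rng.integers(0, 2, size=400)           # interacting / non-interacting

        X_pca = PCA(n_components=50).fit_transform(X)   # dimension reduction
        ensemble = [ELM(rng=np.random.default_rng(i)).fit(X_pca, y) for i in range(9)]
        votes = np.stack([m.predict(X_pca) for m in ensemble])
        y_pred = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote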

  1. Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT

    PubMed Central

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2014-01-01

    Rationale and Objectives: Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods: Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results: We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions: Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354

  2. Computer-aided assessment of regional abdominal fat with food residue removal in CT.

    PubMed

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2013-11-01

    Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. Published by Elsevier Inc.
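
    The class-imbalance handling mentioned in both records can be sketched with scikit-learn, using a cost-weighted decision tree inside a bagging ensemble; the feature matrix, labels, and weights are placeholders for the voxel intensity/texture features of the actual pipeline.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(5)
        X = rng.random((2000, 6))                         # intensity + texture features
        y = (rng.random(2000) < 0.1).astype(int)          # ~10% food-residue voxels

        base = DecisionTreeClassifier(class_weight={0: 1, 1: 9})  # cost weighting
        clf = BaggingClassifier(base, n_estimators=25).fit(X, y)  # bagging
        print(clf.predict(X[:5]))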

  3. Experimental demonstration of single electron transistors featuring SiO{sub 2} plasma-enhanced atomic layer deposition in Ni-SiO{sub 2}-Ni tunnel junctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karbasian, Golnaz, E-mail: Golnaz.Karbasian.1@nd.edu; McConnell, Michael S.; Orlov, Alexei O.

    The authors report the use of plasma-enhanced atomic layer deposition (PEALD) to fabricate single-electron transistors (SETs) featuring ultrathin (≈1 nm) tunnel-transparent SiO{sub 2} in Ni-SiO{sub 2}-Ni tunnel junctions. They show that, as a result of the O{sub 2} plasma steps in PEALD of SiO{sub 2}, the top surface of the underlying Ni electrode is oxidized. Additionally, the bottom surface of the upper Ni layer is also oxidized where it is in contact with the deposited SiO{sub 2}, most likely as a result of oxygen-containing species on the surface of the SiO{sub 2}. Due to the presence of these surface parasitic layers of NiO, which exhibit features typical of thermally activated transport, the resistance of Ni-SiO{sub 2}-Ni tunnel junctions is drastically increased. Moreover, the transport mechanism is changed from quantum tunneling through the dielectric barrier to one consistent with thermally activated resistors in series with tunnel junctions. The reduction of NiO to Ni is therefore required to restore the metal-insulator-metal (MIM) structure of the junctions. Rapid thermal annealing in a forming gas ambient at elevated temperatures is presented as a technique to reduce both parasitic oxide layers. This method is of great interest for devices that rely on MIM tunnel junctions with ultrathin barriers. Using this technique, the authors successfully fabricated MIM SETs with minimal trace of parasitic NiO component. They demonstrate that the properties of the tunnel barrier in nanoscale tunnel junctions (with <10{sup −15} m{sup 2} in area) can be evaluated by electrical characterization of SETs.

  4. Smartphone applications to support weight loss: current perspectives

    PubMed Central

    Pellegrini, Christine A; Pfammatter, Angela F; Conroy, David E; Spring, Bonnie

    2015-01-01

    Lower-cost alternatives to traditional in-person behavioral weight loss programs are needed to overcome the challenges of lowering the worldwide prevalence of overweight and obesity. Smartphones have become ubiquitous and provide a unique platform to aid in the delivery of a behavioral weight loss program. The technological capabilities of a smartphone may address certain limitations of a traditional weight loss program, while also reducing the cost and burden on participants, interventionists, and health care providers. Awareness of the advantages smartphones offer for weight loss has led to the rapid development and proliferation of weight loss applications (apps). The built-in features and the mechanisms by which they work vary across apps. Although an extraordinary number of weight loss apps are available, most lack the same magnitude of evidence-based behavior change strategies typically used in traditional programs. As features develop and new capabilities are identified, we propose a conceptual model as a framework to guide the inclusion of features that can facilitate behavior change and lead to reductions in weight. Whereas the conventional wisdom about behavior change asserts that more is better (with respect to the number of behavior change techniques involved), this model suggests that less may be more, because extra techniques may add burden and adversely impact engagement. Current evidence is promising and continues to emerge on the potential of smartphone use within weight loss programs; yet research is unable to keep up with the rapidly improving smartphone technology. Future studies are needed to refine the conceptual model’s utility in the use of technology for weight loss, determine the effectiveness of intervention components utilizing smartphone technology, and identify novel and faster ways to evaluate the ever-changing technology. PMID:26236766

  5. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses make it possible to detect changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
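
    A simplified sketch of sequential projection pursuit for this setting: each projection vector is optimized to maximize the separation between healthy and damaged feature distributions, and the found direction is deflated before the next vector is sought. A generic gradient-based optimizer stands in for the paper's evolutionary strategy, and all data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        def separation(w, X_h, X_d):
            """Fisher-style separation of the two classes along direction w."""
            w = w / np.linalg.norm(w)
            ph, pd = X_h @ w, X_d @ w
            return (ph.mean() - pd.mean()) ** 2 / (ph.var() + pd.var() + 1e-12)

        def spp(X_h, X_d, n_vectors=2, rng=None):
            rng = rng or np.random.default_rng(0)
            vectors = []
            for _ in range(n_vectors):
                res = minimize(lambda w: -separation(w, X_h, X_d),
                               rng.standard_normal(X_h.shape[1]))
                w = res.x / np.linalg.norm(res.x)
                vectors.append(w)
                X_h = X_h - np.outer(X_h @ w, w)   # deflate the found direction
                X_d = X_d - np.outer(X_d @ w, w)
            return np.array(vectors)

        rng = np.random.default_rng(12)
        X_healthy = rng.standard_normal((200, 30))          # autocorrelation DSFs
        X_damaged = rng.standard_normal((200, 30)) + 0.3
        W = spp(X_healthy, X_damaged)
        print((X_healthy @ W.T).shape)   # low-dimensional projected DSFs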

  6. Evaluation of Clipping Based Iterative PAPR Reduction Techniques for FBMC Systems

    PubMed Central

    Kollár, Zsolt

    2014-01-01

    This paper investigates filter bank multicarrier (FBMC), a multicarrier modulation technique exhibiting an extremely low adjacent channel leakage ratio (ACLR) compared to the conventional orthogonal frequency division multiplexing (OFDM) technique. The low ACLR of the transmitted FBMC signal makes it especially favorable in cognitive radio applications, where strict requirements are posed on out-of-band radiation. A large dynamic range resulting in a high peak-to-average power ratio (PAPR) is characteristic of all sorts of multicarrier signals. The advantageous spectral properties of the high-PAPR FBMC signal are significantly degraded if nonlinearities are present in the transceiver chain. Spectral regrowth may appear, causing harmful interference in the neighboring frequency bands. This paper presents novel clipping based PAPR reduction techniques, evaluated and compared by simulations and measurements, with an emphasis on spectral aspects. The paper gives an overall comparison of PAPR reduction techniques, focusing on the reduction of the dynamic range of FBMC signals without increasing out-of-band radiation. An overview is presented on transmitter oriented techniques employing baseband clipping, which can maintain the system performance with a desired bit error rate (BER). PMID:24558338
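
    An illustrative sketch of iterative clipping-and-filtering for PAPR reduction; a generic IFFT-built multicarrier signal stands in for the FBMC waveform, and the clipping ratio and iteration count are arbitrary choices, not the paper's parameters.

        import numpy as np

        def papr_db(x):
            return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

        def clip_and_filter(x, n_used, clip_ratio=1.6, iters=4):
            for _ in range(iters):
                a = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
                over = np.abs(x) > a
                x = np.where(over, a * np.exp(1j * np.angle(x)), x)  # amplitude clip
                X = np.fft.fft(x)
                X[n_used:] = 0                      # filter out-of-band regrowth
                x = np.fft.ifft(X)
            return x

        rng = np.random.default_rng(6)
        n_fft, n_used = 1024, 256
        symbols = rng.choice([-1, 1], n_used) + 1j * rng.choice([-1, 1], n_used)
        X = np.zeros(n_fft, dtype=complex)
        X[:n_used] = symbols                        # QPSK on the used subcarriers
        x = np.fft.ifft(X)

        x_red = clip_and_filter(x, n_used)
        print(f"PAPR before: {papr_db(x):.1f} dB, after: {papr_db(x_red):.1f} dB")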

  7. Juvenile Angiofibroma: Evolution of Management

    PubMed Central

    Nicolai, Piero; Schreiber, Alberto; Bolzoni Villaret, Andrea

    2012-01-01

    Juvenile angiofibroma is a rare benign lesion originating from the pterygopalatine fossa with distinctive epidemiologic features and growth patterns. The typical patient is an adolescent male with a clinical history of recurrent epistaxis and nasal obstruction. Although the use of nonsurgical therapies is described in the literature, surgery is currently considered the ideal treatment for juvenile angiofibroma. Refinement in preoperative embolization has provided significant reduction of complications and intraoperative bleeding with minimal risk of residual disease. During the last decade, an endoscopic technique has been extensively adopted as a valid alternative to external approaches in the management of small-intermediate size juvenile angiofibromas. Herein, we review the evolution in the management of juvenile angiofibroma with particular reference to recent advances in diagnosis and treatment. PMID:22164185

  8. Erich Lexer's mammaplasty.

    PubMed

    Hinderer, U T; del Rio, J L

    1992-01-01

    A summary of Hans May's biography of Erich Lexer is reproduced, followed by a translation of Lexer's first publication, in Spain in 1921, on the correction of pendular breasts. Lexer's fundamental contributions to mammaplasty are analyzed. This author was the first in the history of mammaplasty to perform breast reduction with an "open" nipple-areola complex transposition, with preservation of the continuity of the skin to the remaining gland. This feature was far ahead of its time, as the techniques based on this concept did not become popular until after 1955. Lexer also was the first to propose subcutaneous mastectomy for treatment of fibrocystic disease, to perform breast augmentation in the ptotic hypoplastic breast with fat flaps, and to use free fat grafts taken from the abdomen or hips for augmentation mammaplasty.

  9. MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes

    USGS Publications Warehouse

    Williams, B.K.

    1988-01-01

    Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
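
    As a minimal illustration of the value-improvement side of such algorithms, the sketch below runs value iteration on a toy finite-state, finite-action, discounted infinite-horizon Markov decision process; the transition and reward arrays are random placeholders.

        import numpy as np

        def value_iteration(P, R, gamma=0.95, tol=1e-8):
            """P[a, s, s'] transition probabilities; R[a, s] expected rewards."""
            n_actions, n_states, _ = P.shape
            V = np.zeros(n_states)
            while True:
                Q = R + gamma * P @ V           # Q[a, s] = R[a, s] + gamma * E[V(s')]
                V_new = Q.max(axis=0)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmax(axis=0)   # values and optimal policy
                V = V_new

        rng = np.random.default_rng(7)
        P = rng.random((3, 5, 5))
        P /= P.sum(axis=2, keepdims=True)       # normalize transition rows
        R = rng.random((3, 5))
        V, policy = value_iteration(P, R)
        print("optimal policy:", policy)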

  10. Fiber-Drawn Metamaterial for THz Waveguiding and Imaging

    NASA Astrophysics Data System (ADS)

    Atakaramians, Shaghik; Stefani, Alessio; Li, Haisu; Habib, Md. Samiul; Hayashi, Juliano Grigoleto; Tuniz, Alessandro; Tang, Xiaoli; Anthony, Jessienta; Lwin, Richard; Argyros, Alexander; Fleming, Simon C.; Kuhlmey, Boris T.

    2017-09-01

    In this paper, we review the work of our group in fabricating metamaterials for terahertz (THz) applications by fiber drawing. We discuss the fabrication technique and the structures that can be obtained before focusing on two particular applications of terahertz metamaterials, i.e., waveguiding and sub-diffraction imaging. We show the experimental demonstration of THz radiation guidance through hollow core waveguides with metamaterial cladding, where substantial improvements were realized compared to conventional hollow core waveguides, such as reduction of size, greater flexibility, increased single-mode operating regime, and guiding due to magnetic and electric resonances. We also report recent and new experimental work on near- and far-field THz imaging using wire array metamaterials that are capable of resolving features as small as λ/28.

  11. One-step solution combustion synthesis of pure Ni nanopowders with enhanced coercivity: The fuel effect

    NASA Astrophysics Data System (ADS)

    Khort, Alexander; Podbolotov, Kirill; Serrano-García, Raquel; Gun'ko, Yurii K.

    2017-09-01

    In this paper, we report a new modified one-step combustion synthesis technique for the production of Ni metal nanoparticles. The main unique feature of our approach is the use of microwave-assisted foam preparation. The effect of different types of fuels (urea, citric acid, glycine, and hexamethylenetetramine) on the combustion process and on the characteristics of the resultant solid products was also investigated. It is observed that the combination of microwave-assisted foam preparation and the use of hexamethylenetetramine as a fuel allows the production of pure ferromagnetic Ni metal nanoparticles with enhanced coercivity (78 Oe) and a high saturation magnetization (52 emu/g) by one-step solution combustion synthesis under a normal air atmosphere without any post-reduction processing.

  12. Light scattering techniques for the characterization of optical components

    NASA Astrophysics Data System (ADS)

    Hauptvogel, M.; Schröder, S.; Herffurth, T.; Trost, M.; von Finck, A.; Duparré, A.; Weigel, T.

    2017-11-01

    The rapid developments in optical technologies generate increasingly higher and sometimes completely new demands on the quality of materials, surfaces, components, and systems. Examples for such driving applications are the steadily shrinking feature sizes in semiconductor lithography, nanostructured functional surfaces for consumer optics, and advanced optical systems for astronomy and space applications. The reduction of surface defects as well as the minimization of roughness and other scatter-relevant irregularities are essential factors in all these areas of application. Quality-monitoring for analysing and improving those properties must ensure that even minimal defects and roughness values can be detected reliably. Light scattering methods have a high potential for a non-contact, rapid, efficient, and sensitive determination of roughness, surface structures, and defects.

  13. Reductive Augmentation of the Breast.

    PubMed

    Chasan, Paul E

    2018-06-01

    Although breast reduction surgery plays an invaluable role in the correction of macromastia, it almost always results in a breast lacking in upper pole fullness and/or roundness. We present a technique of breast reduction combined with augmentation termed "reductive augmentation" to solve this problem. The technique is also extremely useful for correcting breast asymmetry, as well as revising significant pseudoptosis in the patient who has previously undergone breast augmentation with or without mastopexy. An evolution of techniques has been used to create a breast with more upper pole fullness and anterior projection in those patients desiring a more round, higher-profile appearance. Reductive augmentation is a one-stage procedure in which a breast augmentation is immediately followed by a modified superomedial pedicle breast reduction. Often, the excision of breast tissue is greater than would normally be performed with breast reduction alone. Thirty-five patients underwent reductive augmentation, of which 12 were primary surgeries and 23 were revisions. There was an average tissue removal of 255 and 227 g, respectively, per breast for the primary and revision groups. Six of the reductive augmentations were performed for gross asymmetry. Fourteen patients had a previous mastopexy, and 3 patients had a previous breast reduction. The average follow-up was 26 months. Reductive augmentation is an effective one-stage method for achieving a more round-appearing breast with upper pole fullness both in primary breast reduction candidates and in revisionary breast surgery. This technique can also be applied to those patients with significant asymmetry. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  14. Surveying the interest of individuals with upper limb loss in novel prosthetic control techniques.

    PubMed

    Engdahl, Susannah M; Christie, Breanne P; Kelly, Brian; Davis, Alicia; Chestek, Cynthia A; Gates, Deanna H

    2015-06-13

    Novel techniques for the control of upper limb prostheses may allow users to operate more complex prostheses than those that are currently available. Because many of these techniques are surgically invasive, it is important to understand whether individuals with upper limb loss would accept the associated risks in order to use a prosthesis. An online survey of individuals with upper limb loss was conducted. Participants read descriptions of four prosthetic control techniques. One technique was noninvasive (myoelectric) and three were invasive (targeted muscle reinnervation, peripheral nerve interfaces, cortical interfaces). Participants rated how likely they were to try each technique if it offered each of six different functional features. They also rated their general interest in each of the six features. A two-way repeated measures analysis of variance with Greenhouse-Geisser corrections was used to examine the effect of the technique type and feature on participants' interest in each technique. Responses from 104 individuals were analyzed. Many participants were interested in trying the techniques - 83 % responded positively toward myoelectric control, 63 % toward targeted muscle reinnervation, 68 % toward peripheral nerve interfaces, and 39 % toward cortical interfaces. Common concerns about myoelectric control were weight, cost, durability, and difficulty of use, while the most common concern about the invasive techniques was surgical risk. Participants expressed greatest interest in basic prosthesis features (e.g., opening and closing the hand slowly), as opposed to advanced features like fine motor control and touch sensation. The results of these investigations may be used to inform the development of future prosthetic technologies that are appealing to individuals with upper limb loss.

  15. Constrained Metric Learning by Permutation Inducing Isometries.

    PubMed

    Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle

    2016-01-01

    The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.

  16. Deep learning aided decision support for pulmonary nodules diagnosing: a review

    PubMed Central

    Yang, Yixin; Feng, Xiaoyi; Chi, Wenhao; Li, Zhengyang; Duan, Wenzhe; Liu, Haiping; Liang, Wenhua; Wang, Wei; Chen, Ping

    2018-01-01

    Deep learning techniques have recently emerged as promising decision-support approaches for automatically analyzing medical images for different clinical diagnostic purposes. Computer-assisted diagnosis of pulmonary nodules has received considerable theoretical, computational, and empirical research attention, and numerous methods have been developed over the past five decades for the detection and classification of pulmonary nodules on different image formats, including chest radiographs, computed tomography (CT), and positron emission tomography. The recent remarkable progress in deep learning for pulmonary nodules, achieved in both academia and industry, has demonstrated that deep learning techniques are promising alternative decision-support schemes for effectively tackling the central issues in pulmonary nodule diagnosis, including feature extraction, nodule detection, false-positive reduction, and benign-malignant classification for the huge volume of chest scan data. The main goal of this investigation is to provide a comprehensive state-of-the-art review of deep learning aided decision support for pulmonary nodule diagnosis. As far as the authors know, this is the first review devoted exclusively to deep learning techniques for pulmonary nodule diagnosis. PMID:29780633

  17. Laser fringe anemometry for aero engine components

    NASA Technical Reports Server (NTRS)

    Strazisar, A. J.

    1986-01-01

    Advances in flow measurement techniques in turbomachinery continue to be paced by the need to obtain detailed data for use in validating numerical predictions of the flowfield and for use in the development of empirical models for those flow features which cannot be readily modelled numerically. The use of laser anemometry in turbomachinery research has grown over the last 14 years in response to these needs. Based on past applications and current developments, this paper reviews the key issues which are involved when considering the application of laser anemometry to the measurement of turbomachinery flowfields. Aspects of laser fringe anemometer optical design which are applicable to turbomachinery research are briefly reviewed. Application problems which are common to both laser fringe anemometry (LFA) and laser transit anemometry (LTA) such as seed particle injection, optical access to the flowfield, and measurement of rotor rotational position are covered. The efficiency of various data acquisition schemes is analyzed and issues related to data integrity and error estimation are addressed. Real-time data analysis techniques aimed at capturing flow physics in real time are discussed. Finally, data reduction and analysis techniques are discussed and illustrated using examples taken from several LFA turbomachinery applications.

  18. SPHARA--a generalized spatial Fourier analysis for multi-sensor systems with non-uniformly arranged sensors: application to EEG.

    PubMed

    Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens

    2015-01-01

    Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
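
    A hedged sketch of the basic SPHARA construction: build a discrete Laplacian on a triangulation of the sensor positions, take its eigenvectors as spatial basis functions, and represent each EEG time sample by a few leading coefficients. A plain graph Laplacian is used here as the simplest discretization; the paper compares several discretizations (including FEM), and the 2-D sensor layout is a placeholder.

        import numpy as np
        from scipy.spatial import Delaunay

        def mesh_laplacian(points2d):
            """Graph Laplacian of the Delaunay triangulation of projected sensors."""
            tri = Delaunay(points2d)
            n = len(points2d)
            A = np.zeros((n, n))
            for simplex in tri.simplices:
                for i in range(3):
                    a, b = simplex[i], simplex[(i + 1) % 3]
                    A[a, b] = A[b, a] = 1.0
            return np.diag(A.sum(axis=1)) - A

        rng = np.random.default_rng(8)
        sensors = rng.random((64, 2))                 # 64 electrodes, 2-D projection
        L = mesh_laplacian(sensors)
        _, basis = np.linalg.eigh(L)                  # eigenvectors = spatial harmonics

        eeg = rng.standard_normal((64, 1000))         # channels x time samples
        coeffs = basis.T @ eeg                        # spatial-harmonic coefficients
        eeg_lowdim = basis[:, :20] @ coeffs[:20]      # keep 20 of 64 components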

  19. A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Reichert, Bruce A.; Wendt, Bruce J.

    1994-01-01

    A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.

  20. A comparative analysis of swarm intelligence techniques for feature selection in cancer classification.

    PubMed

    Gunavathi, Chellamuthu; Premalatha, Kandasamy

    2014-01-01

    Feature selection in cancer classification is a central area of research in the field of bioinformatics and is used to select informative genes from the thousands of genes on a microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. The swarm intelligence (SI) technique finds the informative genes from the top-m ranked genes. These selected genes are used for classification. In this paper the shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of the shuffled frog leaping (SFL) algorithm. SI techniques such as particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection, which identifies informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed approach is applied to 10 different benchmark datasets and examined with these SI techniques. The experimental results show that the k-NN classifier with SFLLF feature selection outperforms PSO, CS, and SFL.
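
    The pre-filtering stage can be sketched as follows: genes are ranked by a two-sample t-statistic, the top-m are kept, and the subset is scored with a k-NN classifier, which is the role the swarm-intelligence search then refines. All data are placeholders.

        import numpy as np
        from scipy.stats import ttest_ind
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(9)
        X = rng.standard_normal((80, 2000))      # 80 samples x 2000 genes
        y = rng.integers(0, 2, size=80)          # tumour / normal labels

        t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
        top_m = np.argsort(np.abs(t))[::-1][:100]        # top-m ranked genes

        knn = KNeighborsClassifier(n_neighbors=5)
        print(cross_val_score(knn, X[:, top_m], y, cv=5).mean())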

  1. Imaging as characterization techniques for thin-film cadmium telluride photovoltaics

    NASA Astrophysics Data System (ADS)

    Zaunbrecher, Katherine

    The goal of increasing the efficiency of solar cell devices is a universal one. Increased photovoltaic (PV) performance means an increase in competition with other energy technologies. One way to improve PV technologies is to develop rapid, accurate characterization tools for quality control. Imaging techniques developed over the past decade are beginning to fill that role. Electroluminescence (EL), photoluminescence (PL), and lock-in thermography are three types of imaging implemented in this study to provide a multifaceted approach to studying imaging as applied to thin-film CdTe solar cells. Images provide spatial information about cell operation, which in turn can be used to identify defects that limit performance. This study began with developing EL, PL, and dark lock-in thermography (DLIT) for CdTe. Once imaging data were acquired, luminescence and thermography signatures of non-uniformities that disrupt the generation and collection of carriers were identified and cataloged. Additional data acquisition and analysis were used to determine luminescence response to varying operating conditions. This includes acquiring spectral data, varying excitation conditions, and correlating luminescence to device performance. EL measurements show variations in a cell's local voltage, which include inhomogeneities in the transparent-conductive oxide (TCO) front contact, CdS window layer, and CdTe absorber layer. EL signatures include large gradients, local reduction of luminescence, and local increases in luminescence on the interior of the device as well as bright spots located on the cell edges. The voltage bias and spectral response were analyzed to determine the response of these non-uniformities and surrounding areas. PL images of CdTe have not shown the same level of detail and features compared to their EL counterparts. Many of the signatures arise from reflections and severe inhomogeneities, but the technique is limited by the external illumination source used to excite carriers. Measurements on unfinished CdS and CdTe films reveal changes in signal after post-deposition processing treatments. DLIT images contained heat signatures arising from defect-related current crowding. Forward- and reverse-bias measurements revealed hot spots related to shunt and weak-diode defects. Modeling and previous studies done on Cu(In,Ga)Se2 thin-film solar cells aided in identifying the physical causes of these thermographic and luminescence signatures. Imaging data were also coupled with other characterization techniques to provide a more comprehensive examination of nonuniform features and their origins and effects on device performance. These techniques included light-beam-induced-current (LBIC) measurements, which provide spatial quantum efficiency maps of the cell at varying resolutions, as well as time-resolved photoluminescence and spectral PL mapping. Local drops in quantum efficiency seen in LBIC typically corresponded with reductions in EL signal while minority-carrier lifetime values acquired by time-resolved PL measurements correlate with PL intensity.

  2. Dose tracking and dose auditing in a comprehensive computed tomography dose-reduction program.

    PubMed

    Duong, Phuong-Anh; Little, Brent P

    2014-08-01

    Implementation of a comprehensive computed tomography (CT) radiation dose-reduction program is a complex undertaking, requiring an assessment of baseline doses, an understanding of dose-saving techniques, and an ongoing appraisal of results. We describe the role of dose tracking in planning and executing a dose-reduction program and discuss the use of the American College of Radiology CT Dose Index Registry at our institution. We review the basics of dose-related CT scan parameters, the components of the dose report, and the dose-reduction techniques, showing how an understanding of each technique is important in effective auditing of "outlier" doses identified by dose tracking. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Potential for the use of reconstructed IASI radiances in the detection of atmospheric trace gases

    NASA Astrophysics Data System (ADS)

    Atkinson, N. C.; Hilton, F. I.; Illingworth, S. M.; Eyre, J. R.; Hultberg, T.

    2010-02-01

    Principal component (PC) analysis has received considerable attention as a technique for the extraction of meteorological signals from hyperspectral infra-red sounders such as the Infrared Atmospheric Sounding Interferometer (IASI) and the Atmospheric Infrared Sounder (AIRS). In addition to achieving substantial bit-volume reductions for dissemination purposes, the technique can also be used to generate reconstructed radiances in which random instrument noise has been suppressed. To date, most studies have been in the context of Numerical Weather Prediction (NWP). This study examines the potential of PC analysis for chemistry applications. A major concern in the use of PC analysis for chemistry has been that the spectral features associated with trace gases may not be well represented in the reconstructed spectra, either due to deficiencies in the training set or due to the limited number of PC scores used in the radiance reconstruction. In this paper we show examples of reconstructed IASI radiances for several trace gases: ammonia, sulphur dioxide, methane and carbon monoxide. It is shown that care must be taken in the selection of spectra for the initial training set: an iterative technique, in which outlier spectra are added to a base training set, gives the best results. For the four trace gases examined, the chemical signatures are retained in the reconstructed radiances, whilst achieving a substantial reduction in instrument noise. A new regional re-transmission service for IASI is scheduled to start in 2010, as part of the EUMETSAT Advanced Retransmission Service (EARS). For this EARS-IASI service it is intended to include PC scores as part of the data stream. The paper describes the generation of the reference eigenvectors for this new service.
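
    A minimal sketch of radiance reconstruction from a truncated PC basis: eigenvectors are derived from a training set of spectra, and reconstructing an observation from its leading scores suppresses random instrument noise. The spectra below are synthetic placeholders, not IASI data, and the number of retained PCs is arbitrary.

        import numpy as np

        rng = np.random.default_rng(10)
        train = rng.standard_normal((5000, 300))          # training spectra
        mean = train.mean(axis=0)

        # Leading eigenvectors of the covariance via SVD of the centred data.
        _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
        E = Vt[:120]                                      # keep 120 PCs

        obs = rng.standard_normal(300)                    # one observed spectrum
        scores = E @ (obs - mean)                         # PC scores (disseminated)
        reconstructed = mean + E.T @ scores               # noise-suppressed radiances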

  4. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type of simulation path at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence with regard to the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent from the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant path techniques introduce a bias to the simulations was explored and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.
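
    The log-linear pooling operator mentioned above can be illustrated in a few lines: two conditional distributions over the same outcome bins are combined as a weighted geometric mean, with the weights controlling how much each data source contributes.

        import numpy as np

        def log_linear_pool(p1, p2, w1=0.6, w2=0.4):
            pooled = p1 ** w1 * p2 ** w2
            return pooled / pooled.sum()     # renormalize over the outcome bins

        p_geophys = np.array([0.1, 0.6, 0.3])   # e.g. from the geophysical data
        p_spatial = np.array([0.3, 0.3, 0.4])   # e.g. from the kriging neighbourhood
        print(log_linear_pool(p_geophys, p_spatial))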

  5. Mixed Models and Reduction Techniques for Large-Rotation, Nonlinear Analysis of Shells of Revolution with Application to Tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Andersen, C. M.; Tanner, J. A.

    1984-01-01

    An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.

  6. Combinative Particle Size Reduction Technologies for the Production of Drug Nanocrystals

    PubMed Central

    Salazar, Jaime; Müller, Rainer H.; Möschwitzer, Jan P.

    2014-01-01

    Nanosizing is a suitable method to enhance the dissolution rate and therefore the bioavailability of poorly soluble drugs. The success of the particle size reduction processes depends on critical factors such as the employed technology, equipment, and drug physicochemical properties. High pressure homogenization and wet bead milling are standard comminution techniques that have been already employed to successfully formulate poorly soluble drugs and bring them to market. However, these techniques have limitations in their particle size reduction performance, such as long production times and the necessity of employing a micronized drug as the starting material. This review article discusses the development of combinative methods, such as the NANOEDGE, H 96, H 69, H 42, and CT technologies. These processes were developed to improve the particle size reduction effectiveness of the standard techniques. These novel technologies can combine bottom-up and/or top-down techniques in a two-step process. The combinative processes lead in general to improved particle size reduction effectiveness. Faster production of drug nanocrystals and smaller final mean particle sizes are among the main advantages. The combinative particle size reduction technologies are very useful formulation tools, and they will continue acquiring importance for the production of drug nanocrystals. PMID:26556191

  7. Classification of Alzheimer's disease and prediction of mild cognitive impairment-to-Alzheimer's conversion from structural magnetic resonance imaging using feature ranking and a genetic algorithm.

    PubMed

    Beheshti, Iman; Demirel, Hasan; Matsuda, Hiroshi

    2017-04-01

    We developed a novel computer-aided diagnosis (CAD) system that uses feature ranking and a genetic algorithm to analyze structural magnetic resonance imaging data; using this system, we can predict conversion from mild cognitive impairment (MCI) to Alzheimer's disease (AD) between one and three years before clinical diagnosis. The CAD system was developed in four stages. First, we used a voxel-based morphometry technique to investigate global and local gray matter (GM) atrophy in an AD group compared with healthy controls (HCs). Regions with significant GM volume reduction were segmented as volumes of interest (VOIs). Second, these VOIs were used to extract voxel values from the respective atrophy regions in AD, HC, stable MCI (sMCI) and progressive MCI (pMCI) patient groups. The voxel values were then assembled into a feature vector. Third, at the feature-selection stage, all features were ranked according to their respective t-test scores, and a genetic algorithm was used to find the optimal feature subset. The Fisher criterion was used as part of the objective function in the genetic algorithm. Finally, the classification was carried out using a support vector machine (SVM) with 10-fold cross validation. We evaluated the proposed automatic CAD system by applying it to baseline values from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (160 AD, 162 HC, 65 sMCI and 71 pMCI subjects). The experimental results indicated that the proposed system is capable of distinguishing between sMCI and pMCI patients, and would be appropriate for practical use in a clinical setting. Copyright © 2017 Elsevier Ltd. All rights reserved.
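
    A simplified sketch of the feature-selection and classification stages, with a plain top-m cut standing in for the genetic-algorithm search and synthetic data in place of the ADNI voxel features:

    ```python
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Synthetic placeholders for the subjects x voxel-features matrix and labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(160, 500))
    y = rng.integers(0, 2, size=160)

    # Rank features by |t| between the two groups; the paper's genetic
    # algorithm would search feature subsets instead of taking a fixed top m.
    t_scores, _ = ttest_ind(X[y == 1], X[y == 0], axis=0)
    ranking = np.argsort(-np.abs(t_scores))
    top_m = ranking[:50]

    # Classification stage: linear SVM with 10-fold cross validation.
    acc = cross_val_score(SVC(kernel="linear"), X[:, top_m], y, cv=10).mean()
    ```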

  8. Paroxysmal atrial fibrillation prediction based on HRV analysis and non-dominated sorting genetic algorithm III.

    PubMed

    Boon, K H; Khalil-Hani, M; Malarvili, M B

    2018-01-01

    This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods, and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm, based on the non-dominated sorting genetic algorithm III, for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain, and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. Then, these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most previous works. This accuracy rate is achieved even with the HRV signal length reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, the sensitivity rate, which is considered more important than the other performance metrics in this paper, can be improved at the cost of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
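
    For concreteness, a small sketch of standard time-domain HRV features of the kind computed in such a feature-extraction stage; the RR series is synthetic, and the paper's exact feature set and parameters are not reproduced here:

    ```python
    import numpy as np

    def time_domain_hrv(rr_ms):
        """Standard time-domain HRV features from an RR-interval series (ms):
        SDNN, RMSSD and pNN50, typical inputs to a downstream SVM stage."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        return {
            "SDNN": rr.std(ddof=1),                       # overall variability
            "RMSSD": np.sqrt(np.mean(diff ** 2)),         # short-term variability
            "pNN50": np.mean(np.abs(diff) > 50) * 100.0,  # % successive diffs > 50 ms
        }

    # Roughly a 5-minute segment of synthetic RR intervals around 800 ms.
    rng = np.random.default_rng(0)
    features = time_domain_hrv(800 + 30 * rng.standard_normal(375))
    ```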

  9. The Arecibo Pisces-Perseus Supercluster Survey: Declination Strip 35

    NASA Astrophysics Data System (ADS)

    McMichael, Chelsey; Ribaudo, Joseph; Koopmann, Rebecca A.; Haynes, Martha P.; APPSS Team, Undergraduate ALFALFA Team, and ALFALFA Team

    2018-01-01

    The Arecibo Pisces-Perseus Supercluster Survey (APPSS) will provide strong observational constraints on the mass-infall rate onto the main filament of the Pisces-Perseus Supercluster. The survey data consist of HI emission-line spectra of cluster galaxy candidates, obtained primarily at the Arecibo Observatory (with ALFA as part of the ALFALFA Survey and with the L-Band Wide receiver as part of APPSS observations). Here we present the details of the data reduction process and the spectral-analysis techniques used to determine whether a galaxy candidate is at a velocity consistent with the Supercluster, as well as the detected HI flux and rotational velocity of the galaxy, which will be used to estimate the corresponding HI mass. We discuss the results of a preliminary analysis of a subset of the APPSS sample, corresponding to 98 galaxies located within ~1.5° of DEC = +35.0°, with 65 possible detections. We also highlight several interesting emission-line features and galaxies discovered during the reduction and analysis process, and lay out the future of the APPSS project. This work has been supported by NSF grants AST-1211005 and AST-1637339.

  10. On the connection between Maximum Drag Reduction and Newtonian fluid flow

    NASA Astrophysics Data System (ADS)

    Whalley, Richard; Park, Jae-Sung; Kushwaha, Anubhav; Dennis, David; Graham, Michael; Poole, Robert

    2014-11-01

    To date, the most successful turbulence control technique is the dissolution of certain rheology-modifying additives in liquid flows, which results in a universal maximum drag reduction (MDR) asymptote. The MDR asymptote is a well-known phenomenon in the turbulent flow of complex fluids; yet recent direct numerical simulations of Newtonian fluid flow have identified time intervals showing key features of MDR. These intervals have been termed ``hibernating turbulence'' and are a weak turbulence state which is characterised by low wall-shear stress and weak vortical flow structures. Here, in this experimental investigation, we monitor the instantaneous wall-shear stress in a fully-developed turbulent channel flow of a Newtonian fluid with a hot-film probe whilst simultaneously measuring the streamwise velocity at various distances above the wall with laser Doppler velocimetry. We show, by conditionally sampling the streamwise velocity during low wall-shear stress events, that the MDR velocity profile is approached in an additive-free, Newtonian fluid flow. This result corroborates recent numerical investigations, which suggest that the MDR asymptote in polymer solutions is closely connected to weak, transient Newtonian flow structures.

  11. Temporal and spatial intermittencies within Newtonian turbulence

    NASA Astrophysics Data System (ADS)

    Kushwaha, Anubhav; Graham, Michael

    2015-11-01

    Direct numerical simulations of a pressure-driven turbulent flow are performed in a large rectangular channel. Intermittent high- and low-drag regimes within turbulence, which had earlier been found to exist temporally in minimal channels, are observed both spatially and temporally in full-size turbulent flows. These intermittent regimes, namely "active" and "hibernating" turbulence, display very different structural and statistical features. We adopt a very simple sampling technique to identify these intermittent intervals, both temporally and spatially, and present the differences between them in terms of simple quantities like mean velocity, wall-shear stress and flow structures. By conditionally sampling the low wall-shear stress events in particular, we show that the Maximum Drag Reduction (MDR) velocity profile, which occurs in viscoelastic flows, can also be approached in a Newtonian-fluid flow in the absence of any additives. This suggests that the properties of polymer drag reduction are inherent to all flows, and that their occurrence is merely enhanced by the addition of polymers. We also show how the intermittencies within turbulence vary with Reynolds number. The work was supported by AFOSR grant FA9550-15-1-0062.

  12. Machine-learning-based real-bogus system for the HSC-SSP moving object detection pipeline

    NASA Astrophysics Data System (ADS)

    Lin, Hsing-Wen; Chen, Ying-Tung; Wang, Jen-Hung; Wang, Shiang-Yu; Yoshida, Fumi; Ip, Wing-Huen; Miyazaki, Satoshi; Terai, Tsuyoshi

    2018-01-01

    Machine-learning techniques are widely applied in many modern optical sky surveys, e.g., Pan-STARRS1, PTF/iPTF, and the Subaru/Hyper Suprime-Cam survey, to reduce human intervention in data verification. In this study, we have established a machine-learning-based real-bogus system to reject false detections in the Subaru/Hyper-Suprime-Cam Strategic Survey Program (HSC-SSP) source catalog. Therefore, the HSC-SSP moving object detection pipeline can operate more effectively due to the reduction of false positives. To train the real-bogus system, we use stationary sources as the real training set and "flagged" data as the bogus set. The training set contains 47 features, most of which are photometric measurements and shape moments generated from the HSC image reduction pipeline (hscPipe). Our system can reach a true positive rate (tpr) ~96% with a false positive rate (fpr) ~1%, or tpr ~99% at fpr ~5%. Therefore, we conclude that stationary sources are decent real training samples, and using photometry measurements and shape moments can reject false positives effectively.
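
    A hedged sketch of such a real-bogus classifier evaluated at a fixed false-positive rate; a generic random forest and simulated 47-feature vectors stand in for the pipeline's actual classifier and hscPipe measurements:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_curve
    from sklearn.model_selection import train_test_split

    # Simulated stand-ins for the 47 hscPipe features and real/bogus labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 47))
    y = (X[:, 0] + 0.5 * rng.standard_normal(5000) > 0).astype(int)  # 1 = real

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Read the true-positive rate off the ROC curve at fpr ~ 1%.
    fpr, tpr, _ = roc_curve(y_te, clf.predict_proba(X_te)[:, 1])
    print("tpr at fpr ~1%:", tpr[np.searchsorted(fpr, 0.01)])
    ```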

  13. [Comparison of two access portals of an employee assistance program at an insurance corporation targeted to reduce stress levels of employees].

    PubMed

    Burnus, M; Benner, V; Kirchner, D; Drabik, A; Stock, St

    2012-03-01

    Support programmes for stress reduction were offered independently in two departments (650 employees in total) of an insurance group. Both departments, referred to as comparison groups 1 and 2 (CG1 and CG2), offered an Employee Assistance Programme (EAP) featuring individual consultations. The employees were addressed through different channels of communication, such as staff meetings, superiors and email. In CG1, a staff adviser additionally called on all employees at their workplace and showed them a brief relaxation technique in order to raise awareness of stress reduction. Contacting employees personally was also intended to lower the inhibition threshold for the subsequent individual talks. In CG2 individual talks were conducted face-to-face, whereas CG1 used telephone counselling. By using the new access channel with an additional personal contact at the workplace, an above-average percentage of employees in CG1 could be motivated to participate in the subsequent talks: the participation rate was five times as high as in CG2, with lower costs per consultation.

  14. Biosynthesis of silver nanoparticles by using Ganoderma-mushroom extract

    NASA Astrophysics Data System (ADS)

    Ekar, S. U.; Khollam, Y. B.; Koinkar, P. M.; Mirji, S. A.; Mane, R. S.; Naushad, M.; Jadhav, S. S.

    2015-03-01

    The present study reports the biochemical synthesis of silver nanoparticles (Ag-NPs) from an aqueous medium using the extract of the medicinal mushroom Ganoderma as a reducing and stabilizing agent. The Ag-NPs are prepared at room temperature by the reduction of Ag+ to Ag in an aqueous solution of AgNO3. The resultant particles are characterized using UV-visible spectroscopy, Fourier transform infrared (FTIR) spectroscopy and transmission electron microscopy (TEM). The formation of Ag-NPs is confirmed by the surface plasmon resonance (SPR) peak around 427 nm in the UV-visible absorption spectra. The prominent changes observed in the FTIR spectra support the reduction of Ag+ to Ag. The morphological features of the Ag-NPs are evaluated from high-resolution TEM, which reveals spherical particles. The particle size distribution is found to be nearly uniform, with an average particle size of 2 nm. Ag-NPs aged for 15, 30, 60 and 120 days showed no profound effect on the position of the SPR peak in the UV-visible studies, indicating the protecting/capping ability of the medicinal mushroom Ganoderma in the synthesis of Ag-NPs.

  15. Cortex and amygdala morphology in psychopathy.

    PubMed

    Boccardi, Marina; Frisoni, Giovanni B; Hare, Robert D; Cavedo, Enrica; Najt, Pablo; Pievani, Michela; Rasser, Paul E; Laakso, Mikko P; Aronen, Hannu J; Repo-Tiihonen, Eila; Vaurio, Olli; Thompson, Paul M; Tiihonen, Jari

    2011-08-30

    Psychopathy is characterized by abnormal emotional processes, but only recent neuroimaging studies have investigated its cerebral correlates. The study aim was to map local differences of cortical and amygdalar morphology. Cortical pattern matching and radial distance mapping techniques were used to analyze the magnetic resonance images of 26 violent male offenders (age: 32±8) with psychopathy diagnosed using the Psychopathy Checklist-Revised (PCL-R) and no schizophrenia spectrum disorders, and in matched controls (age: 35±11). The cortex displayed up to 20% reduction in the orbitofrontal and midline structures (corrected p<0.001 bilaterally). Up to 30% tissue reduction in the basolateral nucleus, and 10-30% enlargement effects in the central and lateral nuclei indicated abnormal structure of the amygdala (corrected p=0.05 on the right; and symmetrical pattern on the left). Psychopathy features specific morphology of the main cerebral structures involved in cognitive and emotional processing, consistent with clinical and functional data, and with a hypothesis of an alternative evolutionary brain development. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  16. Respiratory monitoring system based on the nasal pressure technique for the analysis of sleep breathing disorders: Reduction of static and dynamic errors, and comparisons with thermistors and pneumotachographs

    NASA Astrophysics Data System (ADS)

    Alves de Mesquita, Jayme; Lopes de Melo, Pedro

    2004-03-01

    Thermally sensitive devices—thermistors—have usually been used to monitor sleep-breathing disorders. However, because of their long time constant, these devices are not able to provide a good characterization of fast events, like hypopneas. The nasal pressure recording technique (NPR) has recently been suggested to quantify airflow during sleep. It is claimed that the short time constants of the devices used to implement this technique would allow an accurate analysis of fast abnormal respiratory events. However, these devices present errors associated with nonlinearities and acoustic resonance that could reduce the diagnostic value of the NPR. Moreover, in spite of the high scientific and clinical potential, there is no detailed description of a complete instrumentation system to implement this promising technique in sleep studies. In this context, the purpose of this work was twofold: (1) describe the development of a flexible NPR device and (2) evaluate the performance of this device when compared to pneumotachographs (PNTs) and thermistors. After the design details are described, the system static accuracy is evaluated by a comparative analysis with a PNT. This analysis revealed a significant reduction (p<0.001) of the static error when system nonlinearities were reduced. The dynamic performance of the NPR system was investigated by frequency response analysis and time constant evaluations, and the results showed that the developed device's response was as good as that of the PNT and around 100 times faster (τ = 5.3 ms) than thermistors (τ = 512 ms). Experimental results obtained in simulated clinical conditions and in a patient are presented as examples, and confirm the good performance achieved in engineering tests. These results are in close agreement with physiological fundamentals, supplying substantial evidence that the improved dynamic and static characteristics of this device can contribute to a more accurate implementation of medical research projects and to improve the diagnoses of sleep-breathing disorders.

  17. EDITORIAL: Measurement techniques for multiphase flows Measurement techniques for multiphase flows

    NASA Astrophysics Data System (ADS)

    Okamoto, Koji; Murai, Yuichi

    2009-11-01

    Research on multiphase flows is very important for industrial applications, including power stations, vehicles, engines, and food processing. Multiphase flows are inherently nonlinear because of the interactions between the phases, which play a very interesting role in the flows and make them very complicated. Techniques for measuring multiphase flows are therefore very useful in helping to understand these nonlinear phenomena. The state-of-the-art measurement techniques were presented and discussed at the sixth International Symposium on Measurement Techniques for Multiphase Flows (ISMTMF2008) held in Okinawa, Japan, on 15-17 December 2008. This special feature of Measurement Science and Technology includes selected papers from ISMTMF2008. Okinawa has a long history as the Ryukyus Kingdom; China, Japan and many western Pacific countries have had cultural and economic exchanges through Okinawa for over 1000 years, and much technical and scientific information was exchanged at the symposium. The proceedings of ISMTMF2008, apart from these specially featured papers, were published in Journal of Physics: Conference Series vol. 147 (2009). We would like to express special thanks to all the contributors to the symposium and this special feature, which will be a milestone in measurement techniques for multiphase flows.

  18. Use of a real-size 3D-printed model as a preoperative and intraoperative tool for minimally invasive plating of comminuted midshaft clavicle fractures.

    PubMed

    Kim, Hyong Nyun; Liu, Xiao Ning; Noh, Kyu Cheol

    2015-06-10

    Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fracture. However, sometimes it is difficult to achieve anatomic reduction by open reduction technique in cases with comminution. We describe a novel technique using a real-size three dimensionally (3D)-printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture. Both clavicles are 3D printed into a real-size clavicle model. Using the mirror imaging technique, the uninjured side clavicle is 3D printed into the opposite side model to produce a suitable replica of the fractured side clavicle pre-injury. The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate accurate anatomical replicas of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be utilized as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union. This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction by open reduction technique. Level of evidence V.

  19. Identification of particle-laden flow features from wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Jackson, A.; Turnbull, B.

    2017-12-01

    A wavelet decomposition based technique is applied to air pressure data obtained from laboratory-scale powder snow avalanches. This technique is shown to be a powerful tool for identifying both repeatable and chaotic features at any frequency within the signal. Additionally, this technique is demonstrated to be a robust method for the removal of noise from the signal as well as being capable of removing other contaminants from the signal. Whilst powder snow avalanches are the focus of the experiments analysed here, the features identified can provide insight to other particle-laden gravity currents and the technique described is applicable to a wide variety of experimental signals.
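
    A minimal sketch of wavelet-based denoising of a 1-D signal, assuming the PyWavelets library; the wavelet, decomposition level and threshold rule are illustrative choices, not those of the avalanche study:

    ```python
    import numpy as np
    import pywt

    # A synthetic noisy pressure-like signal.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024)
    signal = np.sin(40 * np.pi * t) * np.exp(-3 * t) + 0.3 * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(signal, "db4", level=5)       # multi-level decomposition
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from finest level
    thresh = sigma * np.sqrt(2 * np.log(signal.size))   # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")              # noise-suppressed signal
    ```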

  20. An efficient CU partition algorithm for HEVC based on improved Sobel operator

    NASA Astrophysics Data System (ADS)

    Sun, Xuebin; Chen, Xiaodong; Xu, Yong; Sun, Gang; Yang, Yunsheng

    2018-04-01

    As the latest video coding standard, High Efficiency Video Coding (HEVC) achieves over 50% bit-rate reduction at similar video quality compared with the previous standard H.264/AVC. However, the higher compression efficiency is attained at the cost of a significantly increased computational load. In order to reduce this complexity, this paper proposes a fast coding unit (CU) partition technique to speed up the process. To detect the edge features of each CU, a more accurate improved Sobel filter is developed and applied. By analyzing the textural features of a CU, an early CU-splitting termination is proposed to decide whether a CU should be decomposed into four lower-dimension CUs or not. Compared with the reference software HM16.7, experimental results indicate that the proposed algorithm reduces the encoding time by up to 44.09% on average, with a negligible bit-rate increase of 0.24% and quality losses below 0.03 dB. In addition, the proposed algorithm achieves a better trade-off between complexity and rate-distortion performance than other proposed works.
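
    A toy version of an edge-based split decision built on the Sobel operator; the paper's improved filter and its HM16.7 integration are not reproduced, and the threshold is invented:

    ```python
    import numpy as np
    from scipy import ndimage

    def should_split(cu, edge_threshold=30.0):
        """Compute the Sobel gradient magnitude of a coding unit and split
        only when enough edge energy is present; homogeneous blocks
        terminate the partition early."""
        gx = ndimage.sobel(cu.astype(float), axis=1)
        gy = ndimage.sobel(cu.astype(float), axis=0)
        return np.hypot(gx, gy).mean() > edge_threshold

    rng = np.random.default_rng(0)
    flat_cu = np.full((64, 64), 128) + rng.integers(-2, 3, (64, 64))
    print(should_split(flat_cu))   # homogeneous block: no further partition
    ```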

  1. Dimension reduction and multiscaling law through source extraction

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2003-04-01

    Through the empirical analysis of financial return generating processes one may find features that are common to other research fields, such as internet data from network traffic, physiological studies of the human heart beat, speech and sleep time series, and geophysical signals, to mention well-known cases of study. In particular, long-range dependence, intermittency and heteroscedasticity appear clearly, and consequently power laws and multi-scaling behavior are typical signatures of both the spectral and the time-correlation diagnostics. We study these features and the dynamics underlying financial volatility, which can respectively be detected and inferred from high-frequency realizations of stock index returns, and show that they vary according to the resolution levels used for both the analysis and the synthesis of the available information. Discovering whether the volatility dynamics are subject to changes in scaling regimes requires a model embedding scale-dependent information packets, thus accounting for possible heterogeneous activity occurring in financial markets. Independent component analysis proves to be an important tool for reducing the dimension of the problem and for calibrating greedy approximation techniques aimed at learning the structure of the underlying volatility.
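
    As a sketch of ICA used for dimension reduction, the following recovers a few independent sources from a synthetic multi-asset return panel; no claim is made about the paper's actual data or settings:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Heavy-tailed latent drivers mixed into an observed 10-asset panel.
    rng = np.random.default_rng(0)
    sources = rng.laplace(size=(2000, 3))
    mixing = rng.normal(size=(3, 10))
    returns = sources @ mixing

    ica = FastICA(n_components=3, random_state=0)
    recovered = ica.fit_transform(returns)   # reduced, statistically independent representation
    ```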

  2. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
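
    A minimal sketch of the KPCA ingredient, assuming scikit-learn: map nonlinearly correlated inputs to a low-dimensional feature space on which the LMCMC sampling would then operate. The data and kernel settings are illustrative only:

    ```python
    from sklearn.datasets import make_moons
    from sklearn.decomposition import KernelPCA

    # A toy nonlinearly structured "parameter" set.
    X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)

    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0,
                     fit_inverse_transform=True)   # enables mapping back
    Z = kpca.fit_transform(X)                      # low-dimensional feature space
    X_back = kpca.inverse_transform(Z)             # approximate pre-image in parameter space
    ```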

  3. Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.

    PubMed

    Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed

    2018-01-01

    The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge available for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system, 3D GAIT, followed by a discussion of how studies in the field of gait biomechanics fit the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine-learning-based gait analysis methods that can be applied to big data analytics. These modern biomechanical gait analysis methods comprise several main modules, such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.
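
    The module chain described above can be sketched as a single pipeline; the data here are synthetic stand-ins for 3D GAIT features, and the particular components (PCA, SVM) are merely examples of the named module types:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # input features -> dimensionality reduction -> learning algorithm
    X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                               random_state=0)
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
    print(cross_val_score(pipe, X, y, cv=5).mean())
    ```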

  4. Extrinsic contributions to the dielectric response in sintered BaTiO3 nanostructures in paraelectric and ferroelectric regimes

    NASA Astrophysics Data System (ADS)

    Jaffari, G. Hassnain; Rehman, Atiq ur; Iqbal, Asad M.; Awan, M. S.; Saleemi, Mohsin

    2017-11-01

    Post-sintering studies of BaTiO3 (BTO) nanoparticles are presented in detail. Bulk nanostructures were prepared via three different compaction processes, namely uniaxial cold pressing (UCP), cold isostatic pressing (CIP) and spark plasma sintering (SPS). The effect of the compaction technique on the microstructure has been investigated and correlated with the electrical response of each sample. In addition to the transport properties, the temperature- and frequency-dependent dielectric response of the variously sintered samples and their bulk counterpart was recorded. Several aspects have been identified that must be taken into account in order to completely understand the physical processes. Drastically distinct features were observed in the paraelectric (PE) regime well above the ferroelectric (FE)-PE transition temperature. These features include intra-grain conduction together with a reduction in the peak dielectric constant at the PE-FE transition. The roles of strain, grain-boundary conduction associated with the observation of Maxwell-Wagner relaxation, and hopping conduction in the dielectric and ferroelectric response are observed and discussed. Densification in the presence of oxygen vacancies significantly enhances the conductivity associated with carrier hopping, which in turn deteriorates the ferroelectric response.

  5. An extended algebraic reconstruction technique (E-ART) for dual spectral CT.

    PubMed

    Zhao, Yunsong; Zhao, Xing; Zhang, Peng

    2015-03-01

    Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. However, it is generally difficult to reconstruct images from the polychromatic projections acquired by DSCT because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system and then extends the classic ART method to solve this nonlinear system. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for different X-ray spectra. Another feature is its high degree of parallelism, which makes the method suitable for acceleration on GPUs (graphics processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High-quality images are reconstructed with the proposed method from the polychromatic projections of DSCT, and the reconstructed images remain satisfactory even when there are certain errors in the estimated X-ray spectra.
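
    For reference, a sketch of the classic linear ART (Kaczmarz) iteration that the E-ART method extends to the nonlinear dual-spectral system; the geometry and data are a toy setup:

    ```python
    import numpy as np

    def art(A, p, n_iter=20, relax=0.5):
        """Classic ART: cycle through the rays and project the current image
        estimate onto each measurement hyperplane. A is the system matrix,
        p the (here monochromatic, linear) projections."""
        x = np.zeros(A.shape[1])
        row_norms = (A ** 2).sum(axis=1)
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    rng = np.random.default_rng(0)
    A = rng.random((80, 64))            # toy geometry: 80 rays, 8x8 image
    x_true = rng.random(64)
    x_rec = art(A, A @ x_true)
    ```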

  6. Low-temperature solvothermal approach to the synthesis of La4Ni3O8 by topotactic oxygen deintercalation.

    PubMed

    Blakely, Colin K; Bruno, Shaun R; Poltavets, Viktor V

    2011-07-18

    A chimie douce solvothermal reduction method is proposed for topotactic oxygen deintercalation of complex metal oxides. Four different reduction techniques were employed to qualitatively identify the relative reduction activity of each including reduction with H(2) and NaH, solution-based reduction using metal hydrides at ambient pressure, and reduction under solvothermal conditions. The reduction of the Ruddlesden-Popper nickelate La(4)Ni(3)O(10) was used as a test case to prove the validity of the method. The completely reduced phase La(4)Ni(3)O(8) was produced via the solvothermal technique at 150 °C--a lower temperature than by other more conventional solid state oxygen deintercalation methods.

  7. Gravity Survey of the Rye Patch KGRA, Rye Patch, Nevada

    NASA Astrophysics Data System (ADS)

    Mcdonald, M. R.; Gosnold, W. D.

    2011-12-01

    The Rye Patch Known Geothermal Resource Area (KGRA) is located in Pershing County, Nevada, on the west side of the Humboldt Range and east of the Rye Patch Reservoir, approximately 200 km northeast of Reno, Nevada. Previous studies include an earlier gravity survey, 3-D seismic reflection, vertical seismic profiling (VSP) on a single well, 3-D seismic imaging, and a report of the integrated seismic studies. Recently, Presco Energy conducted an aeromagnetic survey and is currently in the process of applying 2-D VSP methods to target exploration and production wells at the site. These studies have indicated that geothermal fluid flow primarily occurs along faults and fractures and that two potential aquifers include a sandstone/siltstone member of the Triassic Natchez Pass Formation and a karst zone that occurs at the interface between Mesozoic limestone and Tertiary volcanics. We hypothesized that the addition of a high-resolution gravity survey would better define the locations, trends, lengths, and dip angles of faults and possible solution cavity features. The gravity survey encompassed an area of approximately 78 km² (30 mi²) within the boundary of the KGRA, along with portions of 8 sections directly to the west and 8 sections directly to the east. The survey included 203 stations spaced at 400 m intervals. The simple Bouguer anomaly patterns were coincident with elevation, and those patterns remained after terrain corrections were performed. To remove this signal, the data were further processed using wavelength (band-pass) filtering techniques. The results of the filtering and a comparison with the recent aeromagnetic survey indicate that the location and trend of major fault systems can be identified using this technique. Dip angles can be inferred from the anomaly contour gradients. By further reducing the band-pass window, other features such as possible karst solution channels may also be recognizable. Drilling or other geophysical methods, such as a magnetotelluric survey, may assist in confirming the results. However, the lengths of the features were difficult to interpret, as the wavelength filtering tends to truncate features in accordance with the band-pass window. Additional gravity measurements would aid in providing higher resolution for the identification and interpretation of features, particularly in the vicinity of the Humboldt House to the north and in an area located to the south of the study area where a large feature was identified in both the aeromagnetic and gravity surveys.
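
    A minimal sketch of such wavelength (band-pass) filtering on a gridded anomaly map, here implemented as a difference of Gaussian low-pass filters; the grid and cutoffs are illustrative, not the survey's actual processing:

    ```python
    import numpy as np
    from scipy import ndimage

    # A synthetic grid: a smooth regional trend plus short-wavelength noise.
    rng = np.random.default_rng(0)
    grid = rng.normal(size=(200, 200)).cumsum(0).cumsum(1)

    low_pass_short = ndimage.gaussian_filter(grid, sigma=2)    # suppress short wavelengths
    low_pass_long = ndimage.gaussian_filter(grid, sigma=15)    # isolate the regional trend
    band_passed = low_pass_short - low_pass_long               # residual anomalies in between
    ```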

  8. Harnessing Solid-State Ionic Transport for Nanomanufacturing and Nanodevices

    ERIC Educational Resources Information Center

    Hsu, Keng Hao

    2009-01-01

    Through this work a new all-solid, ambient processing condition direct metal patterning technique has been developed and characterized. This ionic-transport-based patterning technique is capable of sub-50nm feature resolution under ambient conditions. It generates features with a rate that is comparable to conventional dry-etching techniques. A…

  9. Technique Feature Analysis or Involvement Load Hypothesis: Estimating Their Predictive Power in Vocabulary Learning.

    PubMed

    Gohar, Manoochehr Jafari; Rahmanian, Mahboubeh; Soleimani, Hassan

    2018-02-05

    Vocabulary learning has always been a great concern and has attracted the attention of many researchers. Among the vocabulary learning hypotheses, the involvement load hypothesis and technique feature analysis have been proposed, which attempt to bring concepts like noticing, motivation, and generation into focus. In the current study, 90 high-proficiency EFL students were assigned to three vocabulary tasks of sentence making, composition, and reading comprehension in order to examine the power of the involvement load hypothesis and technique feature analysis frameworks in predicting vocabulary learning. The results revealed that the involvement load hypothesis was not a good predictor, whereas technique feature analysis was a good predictor of pretest-to-posttest score change but not of during-task activity. The implications of these results are discussed in the light of preparing vocabulary tasks.

  10. Reduction and temporary stabilization of Tile C pelvic ring injuries using a posteriorly based external fixation system.

    PubMed

    Martin, Murphy P; Rojas, David; Mauffrey, Cyril

    2018-07-01

    Tile C pelvic ring injuries are challenging to manage even in the most experienced hands. The majority of such injuries can be managed using percutaneous reduction techniques, and the posterior ring can be stabilized using percutaneous transiliac-transsacral screw fixation. However, a subgroup of patients present with inadequate bony corridors, significant sacral zone 2 comminution or significant lateral/vertical displacement of the hemipelvis through a complete sacral fracture. Percutaneous strategies in such circumstances can be dangerous. Those patients may benefit from prone positioning and open reduction of the sacral fracture with fixation through tension band plating or lumbo-pelvic fixation. Soft tissue handling is critical, and direct reduction techniques around the sacrum can be difficult due to the complex anatomy and the fragile nature of the sacrum, making clamp placement and tightening a challenge. In this paper, we propose a mini-invasive technique of indirect reduction and temporary stabilization, which is soft-tissue friendly and permits maintenance of reduction during definitive fixation surgery.

  11. CHOBS: Color Histogram of Block Statistics for Automatic Bleeding Detection in Wireless Capsule Endoscopy Video

    PubMed Central

    Ghosh, Tonmoy; Wahid, Khan A.

    2018-01-01

    Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage, however, is the long reviewing time, which is very laborious as continuous manual intervention is necessary. In order to reduce the burden on the clinician, this paper proposes an automatic bleeding detection method for WCE video based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to the capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining the local block features of the three color planes of RGB color space, an index value is defined. A color histogram, extracted from those index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the extracted local features, which incurs no additional computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images collected from a publicly available database, very satisfactory bleeding frame and zone detection performance is achieved in comparison to that obtained by some of the existing methods. In the case of bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in the case of bleeding zone detection, 95.75% precision is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and it can effectively detect bleeding frames and zones in continuous WCE video data. PMID:29468094
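
    A loose sketch of the block-statistics-and-histogram idea followed by the PCA reduction step; the block size, binning and data are invented, not the CHOBS implementation details:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def block_index_features(img, block=3):
        """Replace each pixel by the mean of its surrounding block in each
        RGB plane, combine the three plane codes into one index, and
        histogram the indices into a color-texture feature vector."""
        h, w, _ = img.shape
        r = block // 2
        idx = np.zeros((h - 2 * r, w - 2 * r), dtype=int)
        for c in range(3):  # per-plane block means quantized to 4-bit codes
            plane = img[:, :, c].astype(float)
            means = np.array([[plane[i - r:i + r + 1, j - r:j + r + 1].mean()
                               for j in range(r, w - r)] for i in range(r, h - r)])
            idx = idx * 16 + (means / 16).astype(int)
        hist, _ = np.histogram(idx, bins=256)
        return hist / hist.sum()

    # Histogram features for a few synthetic frames, then PCA feature reduction.
    rng = np.random.default_rng(0)
    frames = np.stack([block_index_features(rng.integers(0, 256, (32, 32, 3)))
                       for _ in range(20)])
    reduced = PCA(n_components=5).fit_transform(frames)
    ```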

  12. [Ribeiro-Technique in Gigantomastia - Review of 294 Reduction Mammaplasties in 8 Years].

    PubMed

    Wolter, Andreas; Pluto, Naja; Scholz, Till; Diedrichson, Jens; Arens-Landwehr, Andreas; Liebau, Jutta

    2017-12-01

    Reduction mammaplasty in patients with gigantomastia is challenging even for very experienced plastic surgeons. Extremely elongated pedicles impair the vascular supply of the nipple-areola complex, and breast shaping and effective reduction are difficult due to the severely stretched skin envelope. The Ribeiro technique is the standard technique for reduction mammaplasty in our clinic. The aim of this study is to review our approach in patients with gigantomastia in comparison with the current literature. From 01/2009 to 12/2016, we performed 1247 reduction mammaplasties in 760 patients. In 294 reduction mammaplasties (23.6 %), the resection weight was more than 1000 g per breast, corresponding to the definition of gigantomastia. The Ribeiro technique, with a superomedial pedicle and an inferior dermoglandular flap for autologous augmentation of the upper pole, was implemented as the standard procedure. In cases with a sternal notch-nipple distance > 40 cm, free nipple grafting was performed. The outcome parameters complication rate, patient satisfaction with the aesthetic result, nipple sensitivity and surgical revision rate were obtained and retrospectively analysed. In 174 patients, 294 reduction mammaplasties were performed with a resection weight of more than 1000 g per breast. Average resection weight was 1389.6 g (range, 1000-4580 g). Average age was 43.5 years (range, 18-76 years), average body mass index (BMI) was 29.2 kg/m² (range, 19-40 kg/m²), average sternal notch-nipple distance was 34.8 cm (range, 27-52 cm), and average operation time was 117 minutes (range, 72-213 minutes). A free nipple graft was necessary in 30 breasts. The overall complication rate was 7.8 %; the secondary surgical revision rate was 16 %. 93 % of the patients were "very satisfied" or "satisfied" with the aesthetic result, and nipple sensitivity was rated "very good" or "good" in 88 %. The Ribeiro technique is a well-established, versatile standard technique for reduction mammaplasty, which helps to create high-quality, reproducible results with a long-term, form-stable shape. In gigantomastia, this procedure is also very effective in achieving volume reduction and aesthetically pleasing results with a low complication rate. Georg Thieme Verlag KG Stuttgart · New York.

  13. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a previously reported 2-class 4-feature discrimination problem and to 4-class 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
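
    A Monte Carlo cross-check of the same quantity, shown here instead of the paper's analytical/numerical integration (which is not reproduced), with invented class statistics:

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # Invented 3-class, 4-feature Gaussian problem.
    rng = np.random.default_rng(0)
    means = [np.zeros(4), 1.5 * np.ones(4), np.array([2.0, 0.0, 0.0, 0.0])]
    covs = [np.eye(4), 1.5 * np.eye(4), 0.5 * np.eye(4)]
    priors = np.array([0.4, 0.3, 0.3])

    # Draw from the mixture, class by class.
    counts = rng.multinomial(60_000, priors)
    X = np.vstack([rng.multivariate_normal(means[k], covs[k], size=counts[k])
                   for k in range(3)])
    labels = np.repeat(np.arange(3), counts)

    # Bayes rule: pick the class with the largest prior-weighted likelihood;
    # the misclassification rate estimates the Bayes error.
    post = np.stack([priors[k] * multivariate_normal(means[k], covs[k]).pdf(X)
                     for k in range(3)], axis=1)
    print("estimated Bayes error:", np.mean(post.argmax(axis=1) != labels))
    ```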

  14. Flexible conformable hydrophobized surfaces for turbulent flow drag reduction

    PubMed Central

    Brennan, Joseph C; Geraldi, Nicasio R; Morris, Robert H; Fairhurst, David J; McHale, Glen; Newton, Michael I

    2015-01-01

    In recent years extensive work has been focused onto using superhydrophobic surfaces for drag reduction applications. Superhydrophobic surfaces retain a gas layer, called a plastron, when submerged underwater in the Cassie-Baxter state with water in contact with the tops of surface roughness features. In this state the plastron allows slip to occur across the surface which results in a drag reduction. In this work we report flexible and relatively large area superhydrophobic surfaces produced using two different methods: Large roughness features were created by electrodeposition on copper meshes; Small roughness features were created by embedding carbon nanoparticles (soot) into Polydimethylsiloxane (PDMS). Both samples were made into cylinders with a diameter under 12 mm. To characterize the samples, scanning electron microscope (SEM) images and confocal microscope images were taken. The confocal microscope images were taken with each sample submerged in water to show the extent of the plastron. The hydrophobized electrodeposited copper mesh cylinders showed drag reductions of up to 32% when comparing the superhydrophobic state with a wetted out state. The soot covered cylinders achieved a 30% drag reduction when comparing the superhydrophobic state to a plain cylinder. These results were obtained for turbulent flows with Reynolds numbers 10,000 to 32,500. PMID:25975704

  15. Screw-Wire Osteo-Traction: An Adjunctive or Alternative Method of Anatomical Reduction of Multisegment Midfacial Fractures? A Description of Technique and Prospective Study of 40 Patients

    PubMed Central

    O'Regan, Barry; Devine, Maria; Bhopal, Sats

    2013-01-01

    Stable anatomical fracture reduction and segment control before miniplate fixation can be difficult to achieve in comminuted midfacial fractures. Fracture mobilization and reduction methods include Gillies elevation, malar hook, and Dingman elevators. No single method is used universally. Disadvantages include imprecise segment alignment and poor segment stability/control. We have employed screw-wire osteo-traction (SWOT) to address this problem. A literature review revealed two published reports. The aims were to evaluate the SWOT technique effectiveness as a fracture reduction method and to examine rates of revision fixation and plate removal. We recruited 40 consecutive patients requiring open reduction and internal fixation of multisegment midfacial fractures (2009–2012) and employed miniplate osteosynthesis in all patients. SWOT was used as a default reduction method in all patients. The rates of successful fracture reduction achieved by SWOT alone or in combination and of revision fixation and plate removal, were used as outcome indices of the reduction method effectiveness. The SWOT technique achieved satisfactory anatomical reduction in 27/40 patients when used alone. Other reduction methods were also used in 13/40 patients. No patient required revision fixation and three patients required late plate removal. SWOT can be used across the midface fracture pattern in conjunction with other methods or as a sole reduction method before miniplate fixation. PMID:24436763

  16. Effect of Complete Syndesmotic Disruption and Deltoid Injuries and Different Reduction Methods on Ankle Joint Contact Mechanics.

    PubMed

    LaMothe, Jeremy; Baxter, Josh R; Gilbert, Susannah; Murphy, Conor I; Karnovsky, Sydney C; Drakos, Mark C

    2017-06-01

    Syndesmotic injuries can be associated with poor patient outcomes and posttraumatic ankle arthritis, particularly in the case of malreduction. However, ankle joint contact mechanics following a syndesmotic injury and reduction remains poorly understood. The purpose of this study was to characterize the effects of a syndesmotic injury and reduction techniques on ankle joint contact mechanics in a biomechanical model. Ten cadaveric whole lower leg specimens with undisturbed proximal tibiofibular joints were prepared and tested in this study. Contact area, contact force, and peak contact pressure were measured in the ankle joint during simulated standing in the intact, injured, and 3 reduction conditions: screw fixation with a clamp, screw fixation without a clamp (thumb technique), and a suture-button construct. Differences in these ankle contact parameters were detected between conditions using repeated-measures analysis of variance. Syndesmotic disruption decreased tibial plafond contact area and force. Syndesmotic reduction did not restore ankle loading mechanics to values measured in the intact condition. Reduction with the thumb technique was able to restore significantly more joint contact area and force than the reduction clamp or suture-button construct. Syndesmotic disruption decreased joint contact area and force. Although the thumb technique performed significantly better than the reduction clamp and suture-button construct, syndesmotic reduction did not restore contact mechanics to intact levels. Decreased contact area and force with disruption imply that other structures are likely receiving more loads (eg, medial and lateral gutters), which may have clinical implications such as the development of posttraumatic arthritis.

  17. Reduction of radar cross-section of a wind turbine

    DOEpatents

    McDonald, Jacob Jeremiah; Brock, Billy C.; Clem, Paul G.; Loui, Hung; Allen, Steven E.

    2016-08-02

    The various technologies presented herein relate to formation of a wind turbine blade having a reduced radar signature in comparison with a turbine blade fabricated using conventional techniques. Various techniques and materials are presented to facilitate reduction in radar signature of a wind turbine blade, where such techniques and materials are amenable for incorporation into existing manufacturing techniques without degradation in mechanical or physical performance of the blade or major alteration of the blade profile.

  18. Image feature detection and extraction techniques performance evaluation for development of panorama under different light conditions

    NASA Astrophysics Data System (ADS)

    Patil, Venkat P.; Gohatre, Umakant B.

    2018-04-01

    The technique of obtaining a wider field of view to produce a high-resolution integrated image is normally required for the development of a panorama from a sequence of multiple partial views of a photographic scene. Various image stitching methods have been developed recently. Image stitching typically follows five basic steps: feature detection and extraction, image registration, homography computation, image warping, and blending. This paper reviews some of the existing image feature detection and extraction techniques and image stitching algorithms by categorizing them into several methods. For each category, the basic concepts are first described, and the modifications made to these fundamental concepts by different researchers are then elaborated. The paper also highlights some of the fundamental techniques for photographic image feature detection and extraction under various illumination conditions. Image stitching is applicable in various fields such as medical imaging, astrophotography and computer vision. To compare the performance of feature detection techniques, three methods are considered, namely the ORB, SURF and Hessian-based detectors, and the time required for feature detection on the input images is measured. The results conclude that under daylight conditions the ORB algorithm performs better, as it requires less time while extracting more features, whereas for images under night-light conditions the SURF detector performs better than the ORB/Hessian detectors.
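
    A minimal sketch of timing one of these detectors with OpenCV; ORB is shown since SURF requires the contrib build, and the image is synthetic rather than from the paper's dataset:

    ```python
    import time

    import cv2
    import numpy as np

    # A synthetic grayscale test image stands in for a real photograph.
    img = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)

    orb = cv2.ORB_create(nfeatures=2000)
    t0 = time.perf_counter()
    keypoints, descriptors = orb.detectAndCompute(img, None)
    print(f"ORB: {len(keypoints)} keypoints in {time.perf_counter() - t0:.3f} s")
    ```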

  19. Predictive brain networks for major depression in a semi-multimodal fusion hierarchical feature reduction framework.

    PubMed

    Yang, Jie; Yin, Yingying; Zhang, Zuping; Long, Jun; Dong, Jian; Zhang, Yuqun; Xu, Zhi; Li, Lei; Liu, Jie; Yuan, Yonggui

    2018-02-05

    Major depressive disorder (MDD) is characterized by dysregulation of distributed structural and functional networks. It is now recognized that structural and functional networks are related at multiple temporal scales. The recent emergence of multimodal fusion methods has made it possible to comprehensively and systematically investigate brain networks and thereby provide essential information for influencing disease diagnosis and prognosis. However, such investigations are hampered by the inconsistent dimensionality features between structural and functional networks. Thus, a semi-multimodal fusion hierarchical feature reduction framework is proposed. Feature reduction is a vital procedure in classification that can be used to eliminate irrelevant and redundant information and thereby improve the accuracy of disease diagnosis. Our proposed framework primarily consists of two steps. The first step considers the connection distances in both structural and functional networks between MDD and healthy control (HC) groups. By adding a constraint based on sparsity regularization, the second step fully utilizes the inter-relationship between the two modalities. However, in contrast to conventional multi-modality multi-task methods, the structural networks were considered to play only a subsidiary role in feature reduction and were not included in the following classification. The proposed method achieved a classification accuracy, specificity, sensitivity, and area under the curve of 84.91%, 88.6%, 81.29%, and 0.91, respectively. Moreover, the frontal-limbic system contributed the most to disease diagnosis. Importantly, by taking full advantage of the complementary information from multimodal neuroimaging data, the selected consensus connections may be highly reliable biomarkers of MDD. Copyright © 2017 Elsevier B.V. All rights reserved.
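
    As an illustrative stand-in for the sparsity-regularized selection step, an L1-penalised logistic regression keeps a sparse subset of connection features; this is a generic substitute, not the authors' joint multimodal formulation, which couples the structural modality through an additional constraint:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic subjects x functional-connection features and MDD/HC labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 300))
    y = rng.integers(0, 2, size=60)

    # L1 penalty drives most coefficients to zero; survivors are the
    # selected connections carried into classification.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    selected = np.flatnonzero(clf.coef_[0])
    ```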

  20. Application of 3D printed customized external fixator in fracture reduction.

    PubMed

    Qiao, Feng; Li, Dichen; Jin, Zhongmin; Gao, Yongchang; Zhou, Tao; He, Jinlong; Cheng, Li

    2015-01-01

    Long bone fracture is common in traumatic osteopathic patients. Good reduction is beneficial for bone healing, preventing complications such as delayed union, nonunion and malunion, but is hard to achieve. Repeated attempts during surgery increase the operation time, cause new damage to the fracture site and lead to excessive radiation exposure. Robotic and navigation techniques can help improve reduction accuracy; however, their high cost and operational complexity have limited their clinical application. We combined 3D printing with a computer-assisted reduction technique to develop a customised external fixator with a fracture reduction function. The original CT data obtained by scanning the fracture were imported into a computer for reconstructing and reducing the 3D image of the fracture, based on which the external fixator (named the Q-Fixator) was designed and then fabricated by 3D printing. The fracture reduction and fixation were achieved by connecting the pins inserted in the bones with the customised Q-Fixator. Experiments were conducted on three fracture models to demonstrate the reduction results. Good reduction results were obtained on all three fractured bone models, with an average rotation of 1.21° (±0.24), angulation of 1.84° (±0.28), and lateral displacement of 2.22 mm (±0.62). A novel customised external fixator for long bone fracture reduction was thus readily developed using the 3D printing technique. The customised external fixator has the advantages of easy manipulation, accurate reduction, minimal invasiveness and independence from surgeon experience. Future applications of the customised external fixator can be extended to include a fixation function with stress adjustment, potentially optimising the fracture healing process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Breast reduction (mammoplasty) - slideshow

    MedlinePlus

    ... page: //medlineplus.gov/ency/presentations/100189.htm Breast reduction (mammoplasty) - series: Indications. ... Lickstein, MD, FACS, specializing in cosmetic and reconstructive plastic surgery, Palm Beach Gardens, FL. Review provided by ...

  2. Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection

    PubMed Central

    Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad

    2014-01-01

    Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is limited. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of this feature ranking method. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has the dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide less biased rankings and whether they improve final classification performance. Furthermore, we calculate the empirical prediction performance loss incurred by using the same classifier in SVC feature ranking and final classification, relative to the optimal choices. PMID:25177107
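
    As a concrete illustration of SVC ranking (a sketch, not the authors' code), each feature below is scored by the cross-validated AUC of a small classifier trained on that single feature, and features are ranked by that score. The dataset and classifier are placeholders.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)

    # One AUC score per feature, using only that single column.
    scores = np.array([
        cross_val_score(clf, X[:, [j]], y, cv=5, scoring="roc_auc").mean()
        for j in range(X.shape[1])
    ])
    ranking = np.argsort(-scores)  # best single-variable predictors first
    print("top 5 features by SVC ranking:", ranking[:5])
    ```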

  4. A scale space feature based registration technique for fusion of satellite imagery

    NASA Technical Reports Server (NTRS)

    Raghavan, Srini; Cromp, Robert F.; Campbell, William C.

    1997-01-01

    Feature-based registration is one of the most reliable methods for registering multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature-based approach will fail is when the scene is completely homogeneous or densely textured, in which case a combination of feature- and intensity-based methods may yield better results. In this paper, we present some preliminary results of testing our scale space feature based registration technique, a modified version of a feature-based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity to parameter selection experienced with the earlier version, as explained later.

  5. Comparison of the 3 Different Injection Techniques Used in a Randomized Controlled Study Evaluating a Cross-Linked Sodium Hyaluronate Combined With Triamcinolone Hexacetonide (Cingal) for Osteoarthritis of the Knee: A Subgroup Analysis.

    PubMed

    McCormack, Robert; Lamontagne, Martin; Vannabouathong, Christopher; Deakon, Robert T; Belzile, Etienne L

    2017-01-01

    A recent trial demonstrated that patients with knee osteoarthritis treated with a sodium hyaluronate and corticosteroid combination (Cingal) experienced greater pain reductions than those treated with sodium hyaluronate alone (Monovisc) or saline up to 3 weeks postinjection. In this study, injections were administered by 1 of 3 approaches; however, there is currently no consensus on which, if any, of these techniques produces a more favorable outcome. To provide additional insight on this topic, the results of the previous trial were reanalyzed to determine whether (1) the effect of Cingal was significant within each injection technique and (2) pain reductions were similar between injection techniques across all treatment groups. Greater pain reductions with Cingal up to 3 weeks were significant only in the anteromedial subgroup. Across all therapies, both the anteromedial and anterolateral techniques demonstrated significantly greater pain reductions than the lateral midpatellar approach at 18 and 26 weeks.

  7. Moment-based metrics for global sensitivity analysis of hydrological systems

    NASA Astrophysics Data System (ADS)

    Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto

    2017-12-01

    We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach enables assessment of the impact of uncertain parameters on the main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean, and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model that approximates the full model response at a reduced computational cost. Here we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer, and a laboratory-scale conservative transport experiment. Our results allow us to ascertain which parameters can impact some moments of the model output pdf while having no influence on others. We also investigate the error associated with evaluating our sensitivity metrics when the original system model is replaced by a gPCE. Our results indicate that construction of a surrogate model with increasing accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of model complexity reduction, model calibration, design of experiments, uncertainty quantification, and risk assessment.
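
    Metrics of this kind lend themselves to a brute-force Monte Carlo sketch, shown below under an invented toy model: for each parameter, the conditional moments of y (with the parameter approximately fixed by binning) are compared against the unconditional moments. The paper evaluates such metrics through a gPCE surrogate rather than by this direct sampling; all values here are illustrative.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(1)
    N, d = 100_000, 3
    X = rng.uniform(0, 1, size=(N, d))                     # uncertain parameters
    y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

    def moments(v):
        return np.array([v.mean(), v.var(), skew(v), kurtosis(v)])

    m_full = moments(y)

    for i in range(d):
        # Bin the i-th parameter into 20 quantile slices (approximate conditioning).
        edges = np.quantile(X[:, i], np.linspace(0, 1, 21))
        idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, 19)
        cond = np.array([moments(y[idx == b]) for b in range(20)])
        # Average absolute shift of each conditional moment from the full moment.
        sens = np.abs(cond - m_full).mean(axis=0)
        print(f"x{i}: d_mean={sens[0]:.3f} d_var={sens[1]:.3f} "
              f"d_skew={sens[2]:.3f} d_kurt={sens[3]:.3f}")
    ```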

  8. Significant reduction in arc frequency biased solar cells: Observations, diagnostics, and mitigation technique(s)

    NASA Technical Reports Server (NTRS)

    Upschulte, B. L.; Weyl, G. M.; Marinelli, W. J.; Aifer, E.; Hastings, D.; Snyder, D.

    1991-01-01

    A variety of experiments were performed to identify key factors contributing to the arcing of negatively biased high-voltage solar cells. These efforts led to a reduction of greater than a factor of 100 in the arc frequency of a single cell following proper remediation procedures. The experiments naturally led to, and focused on, the adhesive/encapsulant used to bond the protective cover slip to the solar cell. An image-intensified charge-coupled device (CCD) camera system recorded UV emission from arc events, which occurred exclusively along the interfacial edge between the cover slip and the solar cell. Microscopic inspection of this interfacial region showed a bead of encapsulant along the entire edge. Elimination of this encapsulant bead reduced the arc frequency by two orders of magnitude. Water contamination was also identified as a key contributor that enhances arcing of the encapsulant bead along the solar cell edge. Spectrally resolved measurements of the observable UV light show a feature assignable to OH(A-X) electronic emission, which is common in water-contaminated discharges. Experiments in which the solar cell temperature was raised to 85 °C showed a reduced arcing frequency, suggesting desorption of H2O. Exposing the solar cell to water vapor was shown to increase the arcing frequency, whereas clean dry gases such as O2, N2, and Ar showed no enhancement of the arcing rate. Elimination of the exposed encapsulant eliminates any measurable sensitivity to H2O vapor.

  9. Space-time PM2.5 mapping in the severe haze region of Jing-Jin-Ji (China) using a synthetic approach.

    PubMed

    He, Junyu; Christakos, George

    2018-05-07

    Long- and short-term exposure to PM2.5 is of great concern in China due to its adverse population health effects. Indicative of the severity of the situation is that, in the Jing-Jin-Ji region considered in this work, a total of 2725 excess deaths have been attributed to short-term PM2.5 exposure during the period January 10-31, 2013. Technically, the processing of large space-time PM2.5 datasets and the mapping of the space-time distribution of PM2.5 concentrations often constitute high-cost projects. To address this situation, we propose a synthetic modeling framework based on the integration of (a) the Bayesian maximum entropy method, which assimilates auxiliary information from land-use regression and artificial neural network (ANN) model outputs based on PM2.5 monitoring, satellite remote sensing data, and land use and geographical records, with (b) a space-time projection technique that transforms the PM2.5 concentration values from the original spatiotemporal domain onto a spatial domain that moves along the direction of the PM2.5 velocity spread. An interesting methodological feature of the synthetic approach is that its components (methods or models) are complementary, i.e., one component can compensate for the occasional limitations of another. Insight is gained in terms of a PM2.5 case study covering the severe-haze Jing-Jin-Ji region during October 1-31, 2015. The proposed synthetic approach explicitly accounted for physical space-time dependencies of the PM2.5 distribution. Moreover, the assimilation of auxiliary information and the dimensionality reduction achieved by the synthetic approach produced rather impressive results: it generated PM2.5 concentration maps with low estimation uncertainty (even at counties and villages far from the monitoring stations, while during the haze periods the uncertainty reduction was over 50% compared to standard PM2.5 mapping techniques); and it proved to be computationally very efficient (the reduction in computational time was over 20% compared to standard mapping techniques). Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Mesoscale, Radiometrically Referenced, Multi-Temporal Hyperspectral Data for Co2 Leak Detection by Locating Spatial Variation of Biophysically Relevant Parameters

    NASA Astrophysics Data System (ADS)

    McCann, Cooper Patrick

    Low-cost flight-based hyperspectral imaging systems have the potential to provide valuable information for ecosystem and environmental studies as well as to aid in land management and land health monitoring. This thesis describes (1) a bootstrap method of producing mesoscale, radiometrically-referenced hyperspectral data using the Landsat surface reflectance (LaSRC) data product as a reference target, (2) biophysically relevant basis functions to model the reflectance spectra, (3) an unsupervised classification technique based on natural histogram splitting of these biophysically relevant parameters, and (4) local and multi-temporal anomaly detection. The bootstrap method extends standard processing techniques to remove uneven illumination conditions between flight passes, allowing the creation of radiometrically self-consistent data. Through selective spectral and spatial resampling, LaSRC data is used as a radiometric reference target. Advantages of the bootstrap method include the need for minimal site access, no ancillary instrumentation, and automated data processing. Data from a flight on 06/02/2016 are compared with concurrently collected ground-based reflectance spectra as a means of validation, achieving an average error of 2.74%. Fitting reflectance spectra using basis functions based on biophysically relevant spectral features allows both noise and data reduction while shifting information from spectral bands to biophysical features. Histogram splitting is used to determine a clustering based on natural splittings of these fit parameters. The Indian Pines reference data enabled comparison of the efficacy of this technique with established techniques. The splitting technique is shown to be an improvement over the ISODATA clustering technique, with overall accuracies (splitting/ISODATA) of 34.3/19.0% before merging and 40.9/39.2% after merging. This improvement is also seen in the kappa coefficient, which before/after merging was 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA. Three hyperspectral flights over the Kevin Dome area, covering 1843 ha and acquired on 06/21/2014, 06/24/2015, and 06/26/2016, are examined with different methods of anomaly detection. Detection of anomalies within a single dataset is examined to determine, on a local scale, areas that are significantly different from their surroundings. Additionally, the detection and identification of persistent and non-persistent anomalies were investigated across multiple datasets.
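
    The natural-histogram-splitting step can be sketched in a few lines: smooth the one-dimensional histogram of a fitted parameter and cut clusters at its interior valleys. The data and the smoothing width below are invented, and the thesis's subsequent cluster merging is omitted.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import argrelextrema

    rng = np.random.default_rng(2)
    # A synthetic fit parameter with three natural modes.
    param = np.concatenate([rng.normal(0.2, 0.04, 500),
                            rng.normal(0.55, 0.05, 500),
                            rng.normal(0.85, 0.03, 500)])

    counts, edges = np.histogram(param, bins=100)
    smooth = gaussian_filter1d(counts.astype(float), sigma=3)
    valleys = argrelextrema(smooth, np.less)[0]   # interior local minima
    cuts = edges[valleys + 1]                     # split thresholds
    labels = np.digitize(param, cuts)             # cluster id per sample
    print(f"{len(cuts)} splits -> {labels.max() + 1} clusters")
    ```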

  11. A Novel Technique for Closed Reduction and Fixation of Paediatric Calcaneal Fracture Dislocation Injuries

    PubMed Central

    Faroug, Radwane; Stirling, Paul; Ali, Farhan

    2013-01-01

    Paediatric calcaneal fractures are rare injuries usually managed conservatively or with open reduction and internal fixation (ORIF). Closed reduction was previously thought to be impossible, and very few cases are reported in the literature. We report a new technique for closed reduction using Ilizarov half-rings. We report successful closed reduction and screwless fixation of an extra-articular calcaneal fracture dislocation in a 7-year-old boy. Reduction was achieved using two Ilizarov half-ring frames arranged perpendicular to each other, enabling simultaneous application of longitudinal and rotational traction. Anatomical reduction was achieved, with restoration of the angles of Bohler and Gissane. Two K-wires provided the definitive fixation. Bony union with good functional outcome and minimal pain was achieved at eight weeks' follow-up. ORIF of calcaneal fractures provides good functional outcome but is associated with high rates of malunion and postoperative pain. Preservation of the unique soft tissue envelope surrounding the calcaneus reduces the risk of infection. Closed reduction prevents distortion of these tissues and may lead to faster healing and mobilisation. Closed reduction and screwless fixation of paediatric calcaneal fractures is an achievable management option. Our technique preserved the soft tissue envelope surrounding the calcaneus, avoided complications related to retained metalwork, and resulted in a good functional outcome. PMID:23819090

  12. Ground Vibration Test Planning and Pre-Test Analysis for the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Bedrossian, Herand; Tinker, Michael L.; Hidalgo, Homero

    2000-01-01

    This paper describes the results of the modal test planning and pre-test analysis for the X-33 vehicle. The pre-test analysis included the selection of the target modes, selection of the sensor and shaker locations, and the development of an accurate Test Analysis Model (TAM). For target mode selection, four techniques were considered: one based on Modal Cost, one based on Balanced Singular Values, the Root Sum Squared (RSS) method, and a Modal Kinetic Energy (MKE) approach. For selecting sensor locations, four techniques were also considered: one based on Weighted Average Kinetic Energy (WAKE), one based on Guyan Reduction (GR), one emphasizing engineering judgment, and one based on an optimum sensor selection approach using a Genetic Algorithm (GA) search combined with a criterion based on Hankel Singular Values (HSVs). For selecting shaker locations, four techniques were likewise considered: one based on Weighted Average Driving Point Residue (WADPR), one based on engineering judgment and accessibility considerations, a frequency response method, and an optimum shaker location selection based on a GA search combined with an HSV criterion. To evaluate the effectiveness of the proposed sensor and shaker locations for exciting the target modes, extensive numerical simulations were performed. The Multivariate Mode Indicator Function (MMIF) was used to evaluate the effectiveness of each sensor and shaker set with respect to modal parameter identification. Several TAM reduction techniques were considered, including Guyan, IRS, Modal, and Hybrid. Based on pre-test cross-orthogonality checks using the various reduction techniques, a Hybrid TAM reduction technique was selected and used for all three vehicle fuel level configurations.
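
    Among the TAM reduction techniques listed, Guyan reduction is compact enough to sketch. The static condensation below reduces the stiffness and mass matrices onto a retained (sensor) DOF set; the 4-DOF spring-mass chain is a toy stand-in, not the X-33 model.

    ```python
    import numpy as np

    def guyan(K, M, kept):
        """Guyan (static) reduction of K and M onto the retained DOF set."""
        n = K.shape[0]
        omitted = np.setdiff1d(np.arange(n), kept)
        Koo = K[np.ix_(omitted, omitted)]
        Kok = K[np.ix_(omitted, kept)]
        # Static condensation: x_o = -Koo^{-1} Kok x_k
        T = np.zeros((n, len(kept)))
        T[kept, np.arange(len(kept))] = 1.0
        T[omitted] = -np.linalg.solve(Koo, Kok)
        return T.T @ K @ T, T.T @ M @ T

    # 4-DOF spring-mass chain; retain DOFs 0 and 3 as "sensor" locations.
    k = 1000.0
    K = k * np.array([[ 2., -1.,  0.,  0.],
                      [-1.,  2., -1.,  0.],
                      [ 0., -1.,  2., -1.],
                      [ 0.,  0., -1.,  2.]])
    M = np.eye(4)
    Kr, Mr = guyan(K, M, kept=np.array([0, 3]))
    print("reduced stiffness:\n", Kr)
    ```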

  13. Heterobimetallic Pd–K carbene complexes via one-electron reductions of palladium radical carbenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Peng; Hoffbauer, Melissa R.; Vyushkova, Mariya

    2016-03-24

    An unprecedented sequential substitution/reduction synthetic strategy applied to Pd radical carbenes afforded heterobimetallic Pd–K carbene complexes featuring novel Pd–C(carbene)–K structural moieties.

  15. Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate

    PubMed Central

    Padmanaban, Subash; Baker, Justin; Greger, Bradley

    2018-01-01

    Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded by problems that arise when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques, the Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization, on SVM classification performance for a dexterous decoding task. Approach: A nonhuman primate (NHP) was trained to perform small coordinated movements, similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (APs) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36%, respectively. The reduction in decoding accuracy between using 100% of the features and 10% of the features selected by MIM was 45.56% (from 93.7 to 51.09%) for multi-unit features and 4.75% (from 95.32 to 90.79%) for single-unit features. MIM had the best performance of the feature selection methods compared. Significance: These results suggest that improved decoding performance can be achieved by using optimally selected features, and the results based on clinically relevant performance metrics suggest that the decoding algorithm can be made robust by using optimal features and feature selection algorithms. We believe that even a few percent increase in performance is important and improves the decoding accuracy of the machine learning algorithm, potentially increasing the ease of use of a brain-machine interface. PMID:29467602
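
    A brief sketch of the MIM step, on simulated firing rates standing in for the recorded NHP data: rank features by mutual information with the class label, keep the top fraction, and classify with an SVM. All sizes and the injected signal are invented.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n_trials, n_units = 300, 60
    X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)  # firing rates
    y = rng.integers(0, 4, size=n_trials)                         # finger moved
    X[np.arange(n_trials), y] += 6.0   # make a few units informative (synthetic)

    mi = mutual_info_classif(X, y, random_state=0)
    top = np.argsort(-mi)[: n_units // 10]          # keep top 10% of features
    acc = cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5).mean()
    print(f"accuracy with top-{len(top)} MIM features: {acc:.2f}")
    ```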

  16. Use of an automated chromium reduction system for hydrogen isotope ratio analysis of physiological fluids applied to doubly labeled water analysis.

    PubMed

    Schoeller, D A; Colligan, A S; Shriver, T; Avak, H; Bartok-Olson, C

    2000-09-01

    The doubly labeled water method is commonly used to measure total energy expenditure in free-living subjects. The method, however, requires accurate and precise deuterium abundance determinations, which can be laborious. The aim of this study was to evaluate a fully automated, high-throughput, chromium reduction technique for the measurement of deuterium abundances in physiological fluids. The chromium technique was compared with an off-line zinc bomb reduction technique and also subjected to test-retest analysis. Analysis of international water standards demonstrated that the chromium technique was accurate and had a within-day precision of <1 per thousand. Addition of organic matter to water samples demonstrated that the technique was sensitive to interference at levels between 2 and 5 g l⁻¹. Physiological samples could be analyzed without this interference: plasma by 10,000 Da exclusion filtration, saliva by sedimentation, and urine by decolorizing with carbon black. Chromium reduction of urine specimens from doubly labeled water studies indicated no bias relative to zinc reduction, with a mean difference in calculated energy expenditure of -0.2 ± 3.9%. Blinded reanalysis of urine specimens from a second doubly labeled water study demonstrated a test-retest coefficient of variation of 4%. The chromium reduction method was found to be a rapid, accurate, and precise method for the analysis of urine specimens from doubly labeled water studies. Copyright 2000 John Wiley & Sons, Ltd.

  17. Improved classification accuracy by feature extraction using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.

    2003-05-01

    A feature extraction algorithm has been developed for the purpose of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion/reactivation; the last of these effects dimensionality reduction. The feature extractor is supervised and is capable of deriving a separate feature space for each tissue (the spaces are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom and with images of multiple sclerosis patients, classification with feature-extractor-derived features yielded lower error rates than classification using standard pulse sequences or features derived by principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.
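
    As a rough illustration of the hybrid search strategy (GA for global exploration, hill climber for local refinement), the sketch below evolves a binary feature mask on a stock dataset. Note that the paper evolves linear recombinations of features rather than subsets and uses a different cost function; everything here is a simplified stand-in.

    ```python
    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_wine(return_X_y=True)
    rng = np.random.default_rng(4)
    clf = LogisticRegression(max_iter=5000)

    def fitness(mask):
        # Cross-validated accuracy of the classifier on the masked features.
        return cross_val_score(clf, X[:, mask], y, cv=3).mean() if mask.any() else 0.0

    # GA: tournament selection, uniform crossover, bit-flip mutation.
    pop = rng.integers(0, 2, size=(16, X.shape[1])).astype(bool)
    for _ in range(10):
        fit = np.array([fitness(m) for m in pop])
        winners = [max(rng.choice(16, 2), key=lambda i: fit[i]) for _ in range(16)]
        parents = pop[winners]
        cross = rng.random(parents.shape) < 0.5           # uniform crossover
        pop = np.where(cross, parents, np.roll(parents, 1, axis=0))
        pop ^= rng.random(pop.shape) < 0.02               # mutation

    best = max(pop, key=fitness)

    # Hill climber: accept any single-bit flip that improves the score.
    score = fitness(best)
    for j in rng.permutation(X.shape[1]):
        trial = best.copy()
        trial[j] ^= True
        if (s := fitness(trial)) > score:
            best, score = trial, s
    print(f"{best.sum()} features selected, CV accuracy {score:.3f}")
    ```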

  18. Classification of pulmonary nodules in lung CT images using shape and texture features

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan; Kumar, Prafulla

    2016-03-01

    Differentiation of malignant and benign pulmonary nodules is important for the prognosis of lung cancer. In this paper, benign and malignant nodules are classified using a support vector machine. Several shape-based and texture-based features are used to represent the pulmonary nodules in the feature space. A semi-automated technique is used for nodule segmentation, and relevant features are selected for efficient representation of nodules in the feature space. The proposed scheme and a competing technique are evaluated on a data set of 542 nodules from the Lung Image Database Consortium and Image Database Resource Initiative. Nodules with a composite malignancy rank of 1 or 2 are considered benign, and those ranked 4 or 5 malignant. The area under the receiver operating characteristic curve is 0.9465 for the proposed method, which outperforms the competing technique.

  19. Closed reduction of a rare type III dislocation of the first metatarsophalangeal joint.

    PubMed

    Tondera, E K; Baker, C C

    1996-09-01

    To discuss a rare Type III dislocation of the first metatarsophalangeal (MP) joint, without fracture, that was corrected using a closed reduction technique. A 43-yr-old man suffered an acute, severe dislocation of his great toe as the result of forceful motion applied to the toe when his foot was depressed onto a brake pedal to avoid a motor vehicle accident. Physical examination and X-rays revealed the dislocation, muscle spasm, edema, and severely restricted range of motion. The dislocation was corrected using a closed reduction technique, in this case a chiropractic manipulation. Fourteen months after reduction, the joint was intact, muscle strength was graded +5 (normal), ranges of motion were within normal limits, and no crepitation was noted. X-rays revealed normal, intact joint congruency. The patient regained full weight bearing, range of motion, and function of the joint. Although a Type III dislocation of the great toe has been cited only once, briefly, in the literature, this classification carries a recommended surgical treatment protocol for correction. No literature describes a closed reduction of a Type III dislocation as presented in this case report. It is apparent that a closed reduction technique using a chiropractic manipulation may be considered a valid alternative correction technique for Type III dislocations of the great toe.

  20. A New Feature-Enhanced Speckle Reduction Method Based on Multiscale Analysis for Ultrasound B-Mode Imaging.

    PubMed

    Kang, Jinbum; Lee, Jae Young; Yoo, Yangmo

    2016-06-01

    Effective speckle reduction in ultrasound B-mode imaging is important for enhancing image quality and improving accuracy in image analysis and interpretation. In this paper, a new feature-enhanced speckle reduction (FESR) method based on multiscale analysis and feature enhancement filtering is proposed for ultrasound B-mode imaging. In FESR, clinical features (e.g., boundaries and borders of lesions) are selectively emphasized by edge, coherence, and contrast enhancement filtering from fine to coarse scales, while speckle is simultaneously suppressed via robust diffusion filtering. In the simulation study, the proposed FESR method showed statistically significant improvements in edge preservation, mean structure similarity, speckle signal-to-noise ratio, and contrast-to-noise ratio (CNR) compared with other speckle reduction methods, i.e., oriented speckle reducing anisotropic diffusion (OSRAD), nonlinear multiscale wavelet diffusion (NMWD), the Laplacian pyramid-based nonlinear diffusion and shock filter (LPNDSF), and the Bayesian nonlocal means filter (OBNLM). Similarly, the FESR method outperformed the OSRAD, NMWD, LPNDSF, and OBNLM methods in terms of CNR in the phantom study, i.e., 10.70 ± 0.06 versus 9.00 ± 0.06, 9.78 ± 0.06, 8.67 ± 0.04, and 9.22 ± 0.06, respectively. Reconstructed B-mode images produced by the five speckle reduction methods were reviewed by three radiologists for evaluation based on each radiologist's diagnostic preferences. All three radiologists showed a significant preference for the abdominal liver images obtained using the FESR method in terms of conspicuity, margin sharpness, artificiality, and contrast (p < 0.0001). For the kidney and thyroid images, the FESR method showed similar improvement over the other methods, although it did not show statistically significant improvement over the OBNLM method in margin sharpness. These results demonstrate that the proposed FESR method can improve the image quality of ultrasound B-mode imaging by enhancing the visualization of lesion features while effectively suppressing speckle noise.
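
    For a feel of the diffusion component, below is a classic Perona-Malik anisotropic diffusion filter, a simple relative of the robust diffusion filtering used inside FESR; the multiscale decomposition and the edge/coherence/contrast enhancement stages are omitted, and the image and parameters are synthetic.

    ```python
    import numpy as np

    def edge_stop(d, kappa):
        """Diffuse less across strong gradients (candidate edges)."""
        return np.exp(-(d / kappa) ** 2)

    def anisotropic_diffusion(img, n_iter=30, kappa=0.3, gamma=0.2):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Finite-difference gradients toward the four neighbours.
            dn = np.roll(u, 1, 0) - u
            ds = np.roll(u, -1, 0) - u
            de = np.roll(u, 1, 1) - u
            dw = np.roll(u, -1, 1) - u
            u += gamma * (edge_stop(dn, kappa) * dn + edge_stop(ds, kappa) * ds +
                          edge_stop(de, kappa) * de + edge_stop(dw, kappa) * dw)
        return u

    rng = np.random.default_rng(5)
    clean = np.zeros((128, 128))
    clean[32:96, 32:96] = 1.0                            # a bright "lesion"
    speckled = clean * rng.rayleigh(0.6, clean.shape)    # multiplicative speckle
    denoised = anisotropic_diffusion(speckled)
    print(f"in-lesion std before/after: {speckled[clean == 1].std():.3f} / "
          f"{denoised[clean == 1].std():.3f}")
    ```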

  1. Morphology combined with ancillary techniques: An algorithm approach for thyroid nodules.

    PubMed

    Rossi, E D; Martini, M; Capodimonti, S; Cenci, T; Bilotta, M; Pierconti, F; Pontecorvi, A; Lombardi, C P; Fadda, G; Larocca, L M

    2018-04-23

    Several authors have underlined the limits of morphological analysis, mostly in the diagnosis of follicular neoplasms (FN). The application of ancillary techniques, including immunocytochemistry (ICC) and molecular testing, contributes to a better definition of the risk of malignancy (ROM) and management of FN. According to the literature, models including the evaluation of ICC, somatic mutations (i.e., BRAF V600E), and microRNA analysis have been proposed for FNs. This study discusses the validation of a diagnostic algorithm in FN, with a special focus on the role of morphology followed by ancillary techniques. From June 2014 to January 2016, we enrolled 37 FNs with histological follow-up. In the same reference period, 20 benign nodules and 20 nodules positive for malignancy were selected as controls. ICC, BRAF V600E mutation analysis, and miR-375 analysis were carried out on liquid-based cytology (LBC). The 37 FNs included 14 atypia of undetermined significance/follicular lesion of undetermined significance (AUS/FLUS) and 23 FN. Specifically, the AUS/FLUS cases corresponded to three goitres, 10 follicular adenomas, and one NIFTP, whereas FN/suspicious for FN comprised seven follicular adenomas and 16 malignancies (nine non-invasive follicular thyroid neoplasms with papillary-like nuclear features, two invasive follicular variants of papillary thyroid carcinoma [PTC], and five PTCs). The 20 samples positive for malignancy included two invasive follicular variants of PTC, 16 PTCs, and two medullary carcinomas. The morphological features of BRAF V600E mutation (nuclear features of PTC and moderate/abundant eosinophilic cytoplasm) were associated with 100% ROM. In wild-type cases, ROM was 83.3% in the presence of a concordant positive ICC panel and significantly lower (10.5%) with a concordant negative ICC. High expression values of miR-375 conferred 100% ROM. The adoption of an algorithm might represent the best choice for the correct diagnosis of FNs; the morphological detection of BRAF V600E represents the first step in the identification of malignant FNs. A significant reduction of unnecessary thyroidectomies is the goal of this application. © 2018 John Wiley & Sons Ltd.

  2. VizieR Online Data Catalog: SDSS DR7 white dwarf catalog (Kleinman+, 2013)

    NASA Astrophysics Data System (ADS)

    Kleinman, S. J.; Kepler, S. O.; Koester, D.; Pelisoli, I.; Pecanha, V.; Nitta, A.; Costa, J. E. S.; Krzesinski, J.; Dufour, P.; Lachapelle, F.-R.; Bergeron, P.; Yip, C.-W.; Harris, H. C.; Eisenstein, D. J.; Althaus, L.; Corsico, A.

    2013-01-01

    Here, we report on the white dwarf catalog built from the SDSS DR7 (Cat. II/294). We have applied automated techniques supplemented by complete, consistent human identifications of each candidate white dwarf spectrum. We make use of the latest SDSS reductions and white dwarf model atmosphere improvements in our spectral fits, providing log g and Teff determinations for each identified clean DA and DB, where we use the word "clean" to identify spectra that show only features of non-magnetic, non-mixed DA or DB stars. Our catalog includes all white dwarf stars from the earlier Kleinman et al. (2004, Cat. J/ApJ/607/426) and Eisenstein et al. (2006, Cat. J/ApJS/167/40) catalogs, although occasionally with different identifications. (1 data file).

  3. Life support systems for Mars transit

    NASA Technical Reports Server (NTRS)

    Macelroy, R. D.; Kliss, M.; Straight, C.

    1992-01-01

    The structural elements of life-support systems are reviewed in order to assess the suitability of specific features for use during a Mars mission. Life-support requirements are estimated by means of an approximate input/output analysis, and the advantages are listed relating to the use of recycling and regeneration techniques. The technological options for regeneration are presented in categories such as CO2 reduction, organics removal, polishing, food production, and organics oxidation. These data form the basis of proposed mission requirements and constraints as well as the definition of what constitutes an adequate reserve. Regenerative physical/chemical life-support systems are championed based exclusively on the mass savings inherent in the technology. The resiliency and 'soft' failure modes of bioregenerative life-support systems are identified as areas of investigation.

  4. A predictor-corrector technique for visualizing unsteady flow

    NASA Technical Reports Server (NTRS)

    Banks, David C.; Singer, Bart A.

    1995-01-01

    We present a method for visualizing unsteady flow by displaying its vortices. The vortices are identified by using a vorticity-predictor pressure-corrector scheme that follows vortex cores. The cross-sections of a vortex at each point along the core can be represented by a Fourier series. A vortex can be faithfully reconstructed from the series as a simple quadrilateral mesh, or its reconstruction can be enhanced to indicate helical motion. The mesh can reduce the representation of the flow features by a factor of one thousand or more compared with the volumetric dataset. With this amount of reduction it is possible to implement an interactive system on a graphics workstation to permit a viewer to examine, in three dimensions, the evolution of the vortical structures in a complex, unsteady flow.

  5. A Lagrangian analysis of a sudden stratospheric warming - Comparison of a model simulation and LIMS observations

    NASA Technical Reports Server (NTRS)

    Pierce, R. B.; Remsberg, Ellis E.; Fairlie, T. D.; Blackshear, W. T.; Grose, William L.; Turner, Richard E.

    1992-01-01

    Lagrangian area diagnostics and trajectory techniques are used to investigate the radiative and dynamical characteristics of a spontaneous sudden warming which occurred during a 2-yr Langley Research Center model simulation. The ability of the Langley Research Center GCM to simulate the major features of the stratospheric circulation during such highly disturbed periods is illustrated by comparison of the simulated warming to the observed circulation during the LIMS observation period. The apparent sink of vortex area associated with Rossby wave-breaking accounts for the majority of the reduction of the size of the vortex and also acts to offset the radiatively driven increase in the area occupied by the 'surf zone'. Trajectory analysis of selected material lines substantiates the conclusions from the area diagnostics.

  6. A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique

    NASA Technical Reports Server (NTRS)

    Barth, T. J.; Steger, J. L.

    1985-01-01

    An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, established via Fourier analysis, is demonstrated and discussed for the one-dimensional Euler equations.

  7. A Technique for Reduction of Edentulous Fractures Using Dentures and SMARTLock Hybrid Fixation System

    PubMed Central

    Carlson, Anna Rose; Shammas, Ronnie Labib; Allori, Alexander Christopher

    2017-01-01

    Summary: Establishing anatomic reduction of an edentulous mandible fracture is a frequently acknowledged challenge in craniomaxillofacial trauma surgery. In this study, we report a novel method for the reduction of the edentulous mandible fracture, via fabrication of modified Gunning splints using existing dentures and SMARTLock hybrid arch bars. This technique dramatically simplifies the application of an arch bar to dentures, obviates the need for the fabrication of impressions and custom splints, and eliminates the lag time associated with the creation of splints. Furthermore, this method may be used with or without adjunctive rigid internal fixation. The technique described herein of creating Gunning splints with SMARTLock hybrid arch bars provides surgeons with a simple, rapid, single-stage solution for reduction of mandibular fractures in the edentulous patient. PMID:29062645

  8. Multi-Quadrant Biopsy Technique Improves Diagnostic Ability in Large Heterogeneous Renal Masses.

    PubMed

    Abel, E Jason; Heckman, Jennifer E; Hinshaw, Louis; Best, Sara; Lubner, Meghan; Jarrard, David F; Downs, Tracy M; Nakada, Stephen Y; Lee, Fred T; Huang, Wei; Ziemlewicz, Timothy

    2015-10-01

    Percutaneous biopsy obtained from a single location is prone to sampling error in large heterogeneous renal masses, leading to nondiagnostic results or failure to detect poor prognostic features. We evaluated the accuracy of percutaneous biopsy for large renal masses using a modified multi-quadrant technique vs a standard biopsy technique. Clinical and pathological data for all patients with cT2 or greater renal masses who underwent percutaneous biopsy from 2009 to 2014 were reviewed. The multi-quadrant technique was defined as multiple core biopsies from at least 4 separate solid enhancing areas in the tumor. The incidence of nondiagnostic findings, sarcomatoid features and procedural complications was recorded, and concordance between biopsy specimens and nephrectomy pathology was compared. A total of 122 biopsies were performed for 117 tumors in 116 patients (46 using the standard biopsy technique and 76 using the multi-quadrant technique). Median tumor size was 10 cm (IQR 8-12). Biopsy was nondiagnostic in 5 of 46 (10.9%) standard and 0 of 76 (0%) multi-quadrant biopsies (p=0.007). Renal cell carcinoma was identified in 96 of 115 (82.0%) tumors and nonrenal cell carcinoma tumors were identified in 21 (18.0%). One complication occurred using the standard biopsy technique and no complications were reported using the multi-quadrant technique. Sarcomatoid features were present in 23 of 96 (23.9%) large renal cell carcinomas studied. Sensitivity for identifying sarcomatoid features was higher using the multi-quadrant technique compared to the standard biopsy technique at 13 of 15 (86.7%) vs 2 of 8 (25.0%) (p=0.0062). The multi-quadrant percutaneous biopsy technique increases the ability to identify aggressive pathological features in large renal tumors and decreases nondiagnostic biopsy rates. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  9. TESTING OF INDOOR RADON REDUCTION TECHNIQUES IN 19 MARYLAND HOUSES

    EPA Science Inventory

    The report gives results of testing of indoor radon reduction techniques in 19 existing houses in Maryland. The focus was on passive measures: various passive soil depressurization methods, where natural wind and temperature effects are utilized to develop suction in the system; ...

  10. Classification of motor imagery tasks for BCI with multiresolution analysis and multiobjective feature selection.

    PubMed

    Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés

    2016-07-15

    Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains and has been used broadly in the BCI field. However, MRA usually increases the dimensionality of the input data, so feature selection or feature dimensionality reduction should be considered to improve the performance of an MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed, with different classifier structures, and are evaluated against baseline methods using sparse representation of features or no feature selection. Statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the mean Kappa values evaluated using the test patterns in each approach, demonstrated advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.
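
    The MRA step itself is straightforward to sketch with a discrete wavelet transform, here using the PyWavelets package: decompose each EEG channel into subbands and take subband statistics as candidate features (the paper then applies evolutionary multiobjective selection to such features). The signal and wavelet choice are illustrative.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(6)
    fs = 128
    t = np.arange(0, 4, 1 / fs)                                  # 4 s of "EEG"
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

    # Multiresolution decomposition: approximation + detail subbands.
    coeffs = pywt.wavedec(eeg, "db4", level=4)    # [A4, D4, D3, D2, D1]
    features = np.array([[c.mean(), c.std(), np.abs(c).sum()] for c in coeffs])
    print("MRA feature matrix (subbands x statistics):", features.shape)
    ```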

  11. Reduction Osteotomy vs Pie-Crust Technique as Possible Alternatives for Medial Release in Total Knee Arthroplasty and Compared in a Prospective Randomized Controlled Trial.

    PubMed

    Ahn, Ji Hyun; Yang, Tae Yeong; Lee, Jang Yun

    2016-07-01

    To compare the pie-crust technique and reduction osteotomy in terms of their effects on flexion and extension gaps and their success rates in achieving ligament balancing during total knee arthroplasty. In a prospective randomized controlled trial, 106 total knee arthroplasties were randomized, 53 to each group. If there was a narrow medial gap with an imbalance of ≥3 mm after the initial limited medial release, either reduction osteotomy or the pie-crust technique was performed. The changes in medial extension and flexion gaps and the success rate of mediolateral balancing were compared. There was a significant difference in the change of the medial gap in knee extension, with mean changes of 3.5 ± 0.5 mm and 2.3 ± 0.8 mm in the reduction osteotomy and pie-crust groups, respectively (P < .001). For the flexion gap, a greater change was found in the pie-crust group than in the reduction osteotomy group; the mean medial gap changes in knee flexion were 1.1 ± 0.5 mm and 2.3 ± 1.2 mm in the reduction osteotomy and pie-crust groups, respectively. The success rates were 90.6% and 67.9% in the reduction osteotomy and pie-crust groups, respectively (P = .007). As an alternative medial release method, reduction osteotomy was more effective for extension gap balancing, and the pie-crust technique was more effective for flexion gap balancing. The overall success rate of mediolateral ligament balancing was higher in the reduction osteotomy group than in the pie-crust group. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. New computational methods reveal tRNA identity element divergence between Proteobacteria and Cyanobacteria.

    PubMed

    Freyhult, Eva; Cui, Yuanyuan; Nilsson, Olle; Ardell, David H

    2007-10-01

    There are at least 21 subfunctional classes of tRNAs in most cells that, despite a very highly conserved and compact common structure, must interact specifically with different cliques of proteins or cause grave organismal consequences. Protein recognition of specific tRNA substrates is achieved in part through class-restricted tRNA features called tRNA identity determinants. In earlier work we used TFAM, a statistical classifier of tRNA function, to show evidence of unexpectedly large diversity among bacteria in tRNA identity determinants. We also created a data reduction technique called function logos to visualize identity determinants for a given taxon. Here we show evidence that determinants for lysylated isoleucine tRNAs are not the same in Proteobacteria as in other bacterial groups including the Cyanobacteria. Consistent with this, the lysylating biosynthetic enzyme TilS lacks a C-terminal domain in Cyanobacteria that is present in Proteobacteria. We present here, using function logos, a map estimating all potential identity determinants generally operational in Cyanobacteria and Proteobacteria. To further isolate the differences in potential tRNA identity determinants between Proteobacteria and Cyanobacteria, we created two new data reduction visualizations to contrast sequence and function logos between two taxa. One, called Information Difference logos (ID logos), shows the evolutionary gain or retention of functional information associated to features in one lineage. The other, Kullback-Leibler divergence Difference logos (KLD logos), shows recruitments or shifts in the functional associations of features, especially those informative in both lineages. We used these new logos to specifically isolate and visualize the differences in potential tRNA identity determinants between Proteobacteria and Cyanobacteria. Our graphical results point to numerous differences in potential tRNA identity determinants between these groups. Although more differences in general are explained by shifts in functional association rather than gains or losses, the apparent identity differences in lysylated isoleucine tRNAs appear to have evolved through both mechanisms.

  13. Early diagnosis of osteoporosis using radiogrammetry and texture analysis from hand and wrist radiographs in Indian population.

    PubMed

    Areeckal, A S; Jayasheelan, N; Kamath, J; Zawadynski, S; Kocher, M; David S, S

    2018-03-01

    We propose an automated, low-cost tool for early diagnosis of the onset of osteoporosis using cortical radiogrammetry and cancellous texture analysis from hand and wrist radiographs. The trained classifier model gives good accuracy in classifying healthy and low-bone-mass subjects. Reduction in bone mass can lead to osteoporosis, a disease increasingly observed at younger ages in recent times. Dual X-ray absorptiometry (DXA), currently used in clinical practice, is expensive and available only in urban areas of India. There is therefore a need for a low-cost diagnostic tool to facilitate large-scale screening for early diagnosis of osteoporosis at primary health centers. Cortical radiogrammetry of the third metacarpal bone shaft and cancellous texture analysis of the distal radius are used to detect low bone mass. Cortical bone indices and cancellous texture features based on Gray Level Run Length Matrices and Laws' masks are extracted, and a neural network classifier is trained on these features to distinguish healthy subjects from subjects with low bone mass. In our pilot study, the proposed segmentation method shows 89.9 and 93.5% accuracy in detecting the third metacarpal bone shaft and the distal radius ROI, respectively. The trained classifier shows a training accuracy of 94.3% and a test accuracy of 88.5%. The work shows that a combination of cortical and cancellous features improves diagnostic ability and is a promising low-cost tool for early diagnosis of increased osteoporosis risk.

  14. Toward real-time performance benchmarks for Ada

    NASA Technical Reports Server (NTRS)

    Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy

    1986-01-01

    The issue of real-time performance measurement for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques is developed. Then a set of Ada language features believed to be important for real-time performance is presented and specific measurement methods are discussed. In addition, other important time-related features, which are not explicitly part of the language but are part of the run-time system, are identified and measurement techniques developed for them. The measurement techniques are applied to the language and run-time system features, and the results are presented.

  15. A proposed framework on hybrid feature selection techniques for handling high dimensional educational data

    NASA Astrophysics Data System (ADS)

    Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd

    2017-10-01

    The huge amount of data in educational datasets can make it difficult to extract quality information. Recently, data mining approaches have been used increasingly by educational data mining researchers to analyze data patterns. However, many studies have concentrated on selecting suitable learning algorithms rather than performing a feature selection process; as a result, classification suffers from high computational complexity and long computation times. The main objective of this research is to provide an overview of feature selection techniques that have been used to identify the most significant features. This research then proposes a framework, combining filter- and wrapper-based techniques, to improve the quality of students' datasets and support the prediction process in future studies.
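
    A compact sketch of the filter-then-wrapper pattern the framework proposes, on synthetic data standing in for a student dataset: a cheap chi-squared filter first prunes the feature space, then recursive feature elimination (a wrapper) searches the survivors. Stage sizes and the classifier are invented.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFECV, SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    X, y = make_classification(n_samples=500, n_features=80, n_informative=8,
                               random_state=0)
    X -= X.min()                     # chi2 requires non-negative features

    hybrid = Pipeline([
        ("filter", SelectKBest(chi2, k=30)),                          # filter stage
        ("wrapper", RFECV(LogisticRegression(max_iter=2000), cv=5)),  # wrapper stage
    ])
    hybrid.fit(X, y)
    print("features kept by the wrapper:", hybrid["wrapper"].n_features_)
    ```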

  16. Compensation for the signal processing characteristics of ultrasound B-mode scanners in adaptive speckle reduction.

    PubMed

    Crawford, D C; Bell, D S; Bamber, J C

    1993-01-01

    A systematic method to compensate for the nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise the performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods were investigated: (1) a method involving an appropriate selection of the speckle recognition feature, which was successful when the scanner signal processing performs simple logarithmic compression; (2) an inverse transform (decompression) of the B-mode image, which was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating-point arithmetic; and (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise, which was found to be the method of choice for implementing the adaptive speckle reduction algorithm in limited-precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle-producing phantom.

  17. Feature-extracted joint transform correlation.

    PubMed

    Alam, M S

    1995-12-10

    A new technique for real-time optical character recognition that uses a joint transform correlator is proposed. This technique employs feature-extracted patterns for the reference image to detect a wide range of characters in one step. The proposed technique significantly enhances the processing speed when compared with the presently available joint transform correlator architectures and shows feasibility for multichannel joint transform correlation.

  18. Application of reduced order modeling techniques to problems in heat conduction, isoelectric focusing and differential algebraic equations

    NASA Astrophysics Data System (ADS)

    Mathai, Pramod P.

    This thesis focuses on applying and augmenting 'Reduced Order Modeling' (ROM) techniques to large-scale problems. ROM refers to the set of mathematical techniques used to reduce the computational expense of conventional modeling techniques, like finite element and finite difference methods, while minimizing the loss of accuracy that typically accompanies such a reduction. The first problem that we address pertains to the prediction of the level of heat dissipation in electronic and MEMS devices. With the ever-decreasing feature sizes in electronic devices, and the accompanying rise in Joule heating, the electronics industry has, since the 1990s, identified a clear need for computationally cheap heat transfer modeling techniques that can be incorporated into the electronic design process. We demonstrate how one can create reduced order models for simulating heat conduction in the individual components that constitute an idealized electronic device. The reduced order models are created using Krylov Subspace Techniques (KST). We introduce a novel 'plug and play' approach, based on the small gain theorem in control theory, to interconnect these component reduced order models (according to the device architecture) to reliably and cheaply replicate whole-device behavior. The final aim is to have this technique available commercially as a computationally cheap and reliable option that enables a designer to optimize for heat dissipation among competing VLSI architectures. Another place where model reduction is crucial to better design is Isoelectric Focusing (IEF), the second problem in this thesis, a popular technique used to separate minute amounts of proteins from the other constituents present in a typical biological tissue sample. Fundamental questions about how to design IEF experiments remain open because of the high-dimensional and highly nonlinear nature of the differential equations that describe the IEF process, as well as the uncertainty in their parameters. There is a clear need to design better experiments for IEF without the current overhead of expensive chemicals and labor. We show how, with a simpler modeling of the underlying chemistry, we can still achieve the accuracy reported in the existing literature for modeling small pH (hydrogen ion concentration) ranges in IEF, but with far less computational time. We investigate a further reduction of computation time by modeling the IEF problem using the Proper Orthogonal Decomposition (POD) technique and show why POD may not be sufficient due to the underlying constraints. The final problem addressed in this thesis concerns a certain class of dynamics with high stiffness, in particular, differential algebraic equations. With the help of simple examples, we show how the traditional POD procedure fails to model certain high-stiffness problems due to a particular behavior of the vector field, which we denote as twist. We further show how a novel augmentation to the traditional POD algorithm can model-reduce problems with twist in a computationally cheap manner without any additional data requirements.
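
    Since POD recurs in the last two problems, here is a minimal snapshot-POD sketch on a toy 1-D heat equation: assemble a snapshot matrix, take its SVD, and keep the leading modes by an energy criterion. This shows only the standard procedure, not the thesis's augmented algorithm; the PDE and all parameters are illustrative.

    ```python
    import numpy as np

    # Snapshots of a 1-D heat equation (explicit finite differences).
    nx, nt, alpha, dt = 100, 400, 1.0, 2e-5
    u = np.exp(-100 * (np.linspace(0, 1, nx) - 0.5) ** 2)   # initial condition
    snaps = np.empty((nx, nt))
    for k in range(nt):
        u[1:-1] += alpha * dt / (1 / nx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        snaps[:, k] = u

    # POD: SVD of the snapshot matrix; keep modes capturing ~99.9% of energy.
    U, s, _ = np.linalg.svd(snaps, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 0.999)) + 1
    basis = U[:, :r]          # reduced basis for projecting the dynamics
    print(f"{r} POD modes capture 99.9% of snapshot energy (vs {nx} grid points)")
    ```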

  19. Interpreting expressive performance through listener judgments of musical tension

    PubMed Central

    Farbood, Morwaread M.; Upham, Finn

    2013-01-01

    This study examines listener judgments of musical tension for a recording of a Schubert song and its harmonic reduction. Continuous tension ratings collected in an experiment and quantitative descriptions of the piece's musical features, including dynamics, pitch height, harmony, onset frequency, and tempo, were analyzed from two different angles. In the first part of the analysis, the different processing timescales for disparate features contributing to tension were explored through the optimization of a predictive tension model. The results revealed that the optimal time window for harmony was considerably longer (~22 s) than for any other feature (~1–4 s). In the second part of the analysis, tension ratings for the individual verses of the song and its harmonic reduction were examined and compared. The results showed that although the average tension ratings between verses were very similar, differences in how and when participants reported tension changes highlighted performance decisions made in the interpretation of the score, ambiguity in the tension implications of the music, and the potential importance of contrast between verses and phrases. Analysis of the tension ratings for the harmonic reduction also provided a new perspective for better understanding how complex musical features inform listener tension judgments. PMID:24416024
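
    The window-length optimization described above can be illustrated with a toy sketch: smooth a (here synthetic) feature trace with moving-average windows of increasing length and keep the window whose output correlates best with the tension ratings. The sampling rate, window range, and data are assumptions, not the study's materials.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter1d

    # Toy window-length search: smooth a feature with moving averages of
    # increasing length; keep the window maximizing correlation with ratings.
    rng = np.random.default_rng(1)
    fs = 2                                   # samples per second (assumed)
    t = np.arange(0, 120, 1 / fs)
    tension = np.sin(2 * np.pi * t / 60)                   # mock tension ratings
    feature = tension + 0.8 * rng.standard_normal(t.size)  # noisy musical feature

    best_w, best_r = None, -np.inf
    for w_sec in range(1, 31):
        smoothed = uniform_filter1d(feature, size=w_sec * fs)
        r = np.corrcoef(smoothed, tension)[0, 1]
        if r > best_r:
            best_w, best_r = w_sec, r
    print(f"best window: {best_w} s, r = {best_r:.2f}")
    ```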

  20. Creating micro-scale surface topology to achieve anisotropic wettability on an aluminum surface

    NASA Astrophysics Data System (ADS)

    Sommers, Andrew D.; Jacobi, Anthony M.

    2006-08-01

    A technique for fabricating micropatterned aluminum surfaces with parallel grooves 30 µm wide and tens of microns in depth is described. Standard photolithographic techniques are used to obtain this precise surface-feature patterning. Positive photoresists, S1813 and AZ4620, are selected to mask the surface, and a mixture of BCl3 and Cl2 gases is used to perform the etching. Experimental data show that a droplet placed on the micro-grooved aluminum surface using a micro-syringe exhibits an increased apparent contact angle, and for droplets condensed on these etched surfaces, more than a 50% reduction in the volume needed for the onset of droplet sliding is manifest. No chemical surface treatment is necessary to achieve this water repellency; it is accomplished solely by an anisotropic surface morphology that manipulates droplet geometry and creates and exploits discontinuities in the three-phase contact line. These micro-structured surfaces are proposed for use in a broad range of air-cooling applications, where the management of condensate and defrost liquid on the heat transfer surface is essential to the energy-efficient operation of the machine.

  1. Scatterometry-based metrology for SAQP pitch walking using virtual reference

    NASA Astrophysics Data System (ADS)

    Kagalwala, Taher; Vaid, Alok; Mahendrakar, Sridhar; Lenahan, Michael; Fang, Fang; Isbester, Paul; Shifrin, Michael; Etzioni, Yoav; Cepler, Aron; Yellai, Naren; Dasari, Prasad; Bozdog, Cornel

    2016-03-01

    Advanced technology nodes, 10 nm and beyond, employing multi-patterning techniques for pitch reduction pose new process and metrology challenges in maintaining consistent positioning of structural features. The Self-Aligned Quadruple Patterning (SAQP) process is used to create the fins in FinFET devices with pitch values well below optical lithography limits. The SAQP process bears the compounding effects of successive Reactive Ion Etch (RIE) and spacer depositions. These processes induce a shift in the pitch value from one fin compared to another neighboring fin. This is known as pitch walking. Pitch walking affects device performance as well as later processes which work on an assumption that there is consistent spacing between fins. In SAQP there are 3 pitch walking parameters of interest, each linked to specific process steps in the flow. These pitch walking parameters are difficult to discriminate at a specific process step by a singular evaluation technique or even with reference metrology such as Transmission Electron Microscopy (TEM). In this paper we utilize a virtual reference to generate a scatterometry model to measure pitch walk for the SAQP process flow.

  2. Measuring self-aligned quadruple patterning pitch walking with scatterometry-based metrology utilizing virtual reference

    NASA Astrophysics Data System (ADS)

    Kagalwala, Taher; Vaid, Alok; Mahendrakar, Sridhar; Lenahan, Michael; Fang, Fang; Isbester, Paul; Shifrin, Michael; Etzioni, Yoav; Cepler, Aron; Yellai, Naren; Dasari, Prasad; Bozdog, Cornel

    2016-10-01

    Advanced technology nodes, 10 nm and beyond, employing multipatterning techniques for pitch reduction pose new process and metrology challenges in maintaining consistent positioning of structural features. A self-aligned quadruple patterning (SAQP) process is used to create the fins in FinFET devices with pitch values well below optical lithography limits. The SAQP process bears the compounding effects from successive reactive ion etch and spacer depositions. These processes induce a shift in the pitch value from one fin compared to another neighboring fin. This is known as pitch walking. Pitch walking affects device performance as well as later processes, which work on an assumption that there is consistent spacing between fins. In SAQP, there are three pitch walking parameters of interest, each linked to specific process steps in the flow. These pitch walking parameters are difficult to discriminate at a specific process step by singular evaluation technique or even with reference metrology, such as transmission electron microscopy. We will utilize a virtual reference to generate a scatterometry model to measure pitch walk for SAQP process flow.

  3. High-resolution non-destructive three-dimensional imaging of integrated circuits.

    PubMed

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-15

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography-a high-resolution coherent diffractive imaging technique-can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  4. The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-04-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and are particularly sensitive to closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded; under elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize the nonlinearity of closed fatigue cracks. In particular, we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., the back wall or large scatterers) and allow damage to be detected at an early stage.

  5. Reducing the number of reconstructions needed for estimating channelized observer performance

    NASA Astrophysics Data System (ADS)

    Pineda, Angel R.; Miedema, Hope; Brenner, Melissa; Altaf, Sana

    2018-03-01

    A challenge for task-based optimization is the time required for each reconstructed image in applications where reconstructions are time consuming. Our goal is to reduce the number of reconstructions needed to estimate the area under the receiver operating characteristic curve (AUC) of the infinitely-trained optimal channelized linear observer. We explore the use of classifiers which either do not invert the channel covariance matrix or perform feature selection. We also study the assumption that multiple low contrast signals in the same image of a non-linear reconstruction do not significantly change the estimate of the AUC. We compared the AUC of several classifiers (Hotelling, logistic regression, logistic regression using Firth bias reduction, and the least absolute shrinkage and selection operator (LASSO)) with a small number of observations, both for normally distributed simulated data and for images from a total variation reconstruction in magnetic resonance imaging (MRI). We used 10 Laguerre-Gauss channels and the Mann-Whitney estimator for AUC. For these data, our results show that at small sample sizes, feature selection using the LASSO technique can decrease the bias of the AUC estimate at the cost of increased variance, and that for large sample sizes the difference between these classifiers is small. We also compared the use of multiple signals in a single reconstructed image to reduce the number of reconstructions in a total variation reconstruction for accelerated imaging in MRI. We found that AUC estimation using multiple low contrast signals in the same image resulted in AUC estimates similar to those from a single reconstruction per signal, leading to a 13x reduction in the number of reconstructions needed.
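
    A minimal sketch of the scoring-and-AUC step, assuming Gaussian channel outputs: train a Hotelling (optimal linear) observer template and estimate the AUC with the Mann-Whitney statistic. The dimensions and signal contrast are illustrative, and the Laguerre-Gauss channel computation itself is omitted.

    ```python
    import numpy as np

    # Hotelling observer on 10-D channel outputs + Mann-Whitney AUC estimate.
    rng = np.random.default_rng(2)
    d, n = 10, 60                                  # channels, samples per class
    mu = 0.4 * np.ones(d)                          # mean signal in channel space
    absent = rng.standard_normal((n, d))
    present = rng.standard_normal((n, d)) + mu

    pooled = np.cov(np.vstack([absent, present - mu]).T)
    w = np.linalg.solve(pooled, present.mean(0) - absent.mean(0))  # Hotelling template
    s0, s1 = absent @ w, present @ w

    # Mann-Whitney AUC: fraction of (present, absent) score pairs ordered correctly
    auc = (np.sum(s1[:, None] > s0[None, :])
           + 0.5 * np.sum(s1[:, None] == s0[None, :])) / (n * n)
    print(f"AUC ~ {auc:.3f}")
    ```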

  6. Automated Reduction of Data from Images and Holograms

    NASA Technical Reports Server (NTRS)

    Lee, G. (Editor); Trolinger, James D. (Editor); Yu, Y. H. (Editor)

    1987-01-01

    Laser techniques are widely used for the diagnostics of aerodynamic flow and particle fields. The storage capability of holograms has made this technique even more powerful. Over 60 researchers in the fields of holography, particle sizing and image processing convened to discuss these topics. The research programs of ten government laboratories, several universities, industry and foreign countries were presented. A number of papers on holographic interferometry with applications to fluid mechanics were given. Several papers on combustion and particle sizing, speckle velocimetry and speckle interferometry were also given. A session on image processing, automated fringe data reduction techniques and the types of facilities used for fringe reduction was held.

  7. Contour-based image warping

    NASA Astrophysics Data System (ADS)

    Chan, Kwai H.; Lau, Rynson W.

    1996-09-01

    Image warping concerns transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, mainly based on the mapping of corresponding pairs of feature points, feature vectors or feature patches (mostly triangular or quadrilateral). However, warping of an image object with an arbitrary shape is often required, which calls for a warping technique based on the boundary contour instead of feature points or feature line-vectors. In addition, when feature point or feature vector based techniques are used, approximation of the object boundary by points or vectors is required, and the matching process for the corresponding pairs becomes very time consuming if a fine approximation is needed. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology to allow a more flexible control of image warping. Two morphological operators are used as contour determinators: the erosion operator is used to warp image contents inside a user-specified contour, while the dilation operator is used to warp image contents outside of the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when accompanied by robust feature extractors such as deformable templates or active contour models.
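
    The contour-determinator idea can be sketched with standard morphological operators: binary erosion and dilation define the interior and exterior bands that would be warped, with the warp itself omitted. The mask shape and band widths below are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.ndimage import binary_erosion, binary_dilation

    # Erosion selects interior pixels (warped inward); dilation bounds the
    # exterior pixels (warped outward). Band widths are illustrative.
    mask = np.zeros((64, 64), dtype=bool)
    mask[16:48, 16:48] = True                      # stand-in object region

    inner = binary_erosion(mask, iterations=4)     # interior after erosion
    outer = binary_dilation(mask, iterations=4)    # exterior limit after dilation

    inside_band = mask & ~inner                    # pixels for the erosion pass
    outside_band = outer & ~mask                   # pixels for the dilation pass
    print(inside_band.sum(), outside_band.sum())
    ```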

  8. Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.

    PubMed

    Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W

    2017-08-01

    Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques use simple statistics and/or reject anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and can be considered simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. Features thus found are then used for anomaly detection by applying quartile thresholding, which rejects a complete breath if any of its features is out of range. The thresholds are determined by both saliency and performance metrics rather than the qualitative assumptions of previous works. Feature ranking indicates that our new landmark features are among the highest scoring candidates regardless of age across saliency criteria. F1-scores, receiver operating characteristics, and variability of the mean resistance metrics show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant, and our removal procedure is objective and comparable to the manual method. This work is a critical step toward automating forced oscillation technique quality control.
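
    A minimal sketch of the quartile-thresholding step, assuming a synthetic breath-by-feature matrix; the multiplier k is a tunable assumption (the paper sets its thresholds using saliency and performance metrics, and k = 1 mimics the "1IQR" naming).

    ```python
    import numpy as np

    # Reject a breath when any of its features lies outside [Q1-k*IQR, Q3+k*IQR].
    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 5))       # 200 breaths x 5 features (synthetic)
    X[7, 2] = 9.0                           # plant one artefactual breath

    k = 1.0
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    keep = np.all((X >= lo) & (X <= hi), axis=1)
    print(f"rejected {np.sum(~keep)} of {len(X)} breaths")
    ```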

  9. TESTING OF INDOOR RADON REDUCTION TECHNIQUES IN BASEMENT HOUSES HAVING ADJOINING WINGS

    EPA Science Inventory

    The report gives results of tests of indoor radon reduction techniques in 12 existing Maryland houses, with the objective of determining when basement houses with adjoining wings require active soil depressurization (ASD) treatment of both wings, and when treatment of the basemen...

  10. Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    DTIC Science & Technology

    2008-08-01

    [Figure/diagram residue from the original record omitted.] …reduction are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool…
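
    For readers unfamiliar with VRTs, a minimal example of one classical technique, antithetic variates, is sketched below; the integrand and sample sizes are illustrative and unrelated to the report's simulation models.

    ```python
    import numpy as np

    # Estimate E[exp(U)], U ~ Uniform(0,1), by plain Monte Carlo and by
    # antithetic variates (a classical VRT); compare estimator variances.
    rng = np.random.default_rng(12)
    n = 10_000

    u = rng.uniform(size=n)
    plain = np.exp(u)                                  # crude estimator samples

    v = rng.uniform(size=n // 2)
    anti = 0.5 * (np.exp(v) + np.exp(1 - v))           # antithetic pair averages

    print(plain.mean(), anti.mean())                   # both ~ e - 1 = 1.718...
    print(plain.var(ddof=1) / n, anti.var(ddof=1) / (n // 2))  # variances
    ```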

  11. A survey of mass analyzers. [characteristics and features of various instruments and techniques

    NASA Technical Reports Server (NTRS)

    Moore, W. W., Jr.; Tashbar, P. W.

    1973-01-01

    With the increasing application of mass spectrometry technology to diverse service areas, a need has developed for a consolidated survey of the essential characteristics and features of the various instruments and techniques. This report is one approach to satisfying this need. Information has been collected and consolidated into a format which includes, for each approach: (1) a general technique description, (2) instrument features information, and (3) a summary of pertinent advantages and disadvantages. With this information, the potential mass spectrometer user should be able to more efficiently select the most appropriate instrument.

  12. TEM Cell Testing of Cable Noise Reduction Techniques from 2 MHz to 200 MHz -- Part 2

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur T.; Evans, William C.; Reed, Joshua L.; Shimp, Samuel K., III; Fitzpatrick, Fred D.

    2008-01-01

    This paper presents empirical results of cable noise reduction techniques as demonstrated in a TEM cell operating with radiated fields from 2 - 200 MHz. It is the second part of a two-paper series. The first paper discussed cable types and shield connections. In this second paper, the effects of load and source resistances and chassis connections are examined. For each topic, well-established theories are compared to data from a real-world physical system. Finally, recommendations for minimizing cable susceptibility (and thus cable emissions) are presented. There are numerous papers and textbooks that present theoretical analyses of cable noise reduction techniques. However, empirical data are often targeted to low frequencies (e.g. <50 kHz) or high frequencies (>100 MHz). Additionally, a comprehensive study showing the relative effects of various noise reduction techniques is needed. These include the use of dedicated return wires, twisted wiring, cable shielding, shield connections, changing load or source impedances, and implementing load- or source-to-chassis isolation. We have created an experimental setup that emulates a real-world electrical system, while still allowing us to independently vary a host of parameters. The goal of the experiment was to determine the relative effectiveness of various noise reduction techniques when the cable is in the presence of radiated emissions from 2 MHz to 200 MHz.

  13. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  14. Comparing Pattern Recognition Feature Sets for Sorting Triples in the FIRST Database

    NASA Astrophysics Data System (ADS)

    Proctor, D. D.

    2006-07-01

    Pattern recognition techniques have been used with increasing success for coping with the tremendous amounts of data being generated by automated surveys. Usually this process involves construction of training sets, the typical examples of data with known classifications. Given a feature set, along with the training set, statistical methods can be employed to generate a classifier, which is then applied to process the remaining data. Feature set selection, however, is still an issue. This paper presents techniques developed for accommodating data for which a substantive portion of the training set cannot be classified unambiguously, a typical case for low-resolution data. Significance tests on the sort-ordered, sample-size-normalized vote distribution of an ensemble of decision trees are introduced as a method of evaluating the relative quality of feature sets. The technique is applied to comparing feature sets for sorting a particular radio galaxy morphology, bent-doubles, from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) database. Also examined are alternative functional forms for feature sets. Associated standard deviations provide the means to evaluate the effect of the number of folds, the number of classifiers per fold, and the sample size on the resulting classifications. The technique may also be applied to situations in which accurate classifications are available but the feature set is clearly inadequate, and it is nonetheless desired to make the best of the available information.
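
    The vote-distribution idea can be sketched as follows: train a tree ensemble per candidate feature set, collect each example's sorted fraction of class-1 votes, and compare the distributions. The two-sample Kolmogorov-Smirnov test here is a stand-in for the paper's significance tests, and the data and split are synthetic.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Compare two candidate feature sets via sort-ordered ensemble vote fractions.
    X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                               random_state=0)
    feature_sets = {"A": slice(0, 6), "B": slice(6, 12)}   # two candidate sets

    votes = {}
    for name, cols in feature_sets.items():
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X[:200, cols], y[:200])
        # fraction of trees voting "class 1" per held-out example, sorted
        votes[name] = np.sort(rf.predict_proba(X[200:, cols])[:, 1])

    stat, p = ks_2samp(votes["A"], votes["B"])
    print(f"KS statistic {stat:.3f}, p = {p:.3g}")
    ```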

  15. SPHARA - A Generalized Spatial Fourier Analysis for Multi-Sensor Systems with Non-Uniformly Arranged Sensors: Application to EEG

    PubMed Central

    Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens

    2015-01-01

    Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
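
    A minimal SPHARA-flavoured sketch: build a graph Laplacian over non-uniformly placed sensors, use its eigenvectors as spatial basis functions, and keep the low-order ones for dimensionality reduction. A k-nearest-neighbour graph stands in for the triangular-mesh Laplace-Beltrami discretizations compared in the paper, and all sizes are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import distance_matrix

    # Eigenvectors of a sensor-graph Laplacian as spatial "harmonics".
    rng = np.random.default_rng(4)
    pos = rng.uniform(size=(64, 3))                 # 64 sensor positions
    D = distance_matrix(pos, pos)

    k = 6
    W = np.zeros_like(D)
    for i in range(len(pos)):                       # symmetric kNN adjacency
        nbrs = np.argsort(D[i])[1:k + 1]
        W[i, nbrs] = W[nbrs, i] = 1.0
    L = np.diag(W.sum(1)) - W                       # combinatorial graph Laplacian

    evals, evecs = np.linalg.eigh(L)                # basis ordered by "frequency"
    signal = rng.standard_normal(64)                # one sample across all sensors
    coeffs = evecs.T @ signal
    recon = evecs[:, :20] @ coeffs[:20]             # keep 20 low-order harmonics
    print(np.linalg.norm(signal - recon) / np.linalg.norm(signal))
    ```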

  16. Accurate Identification of Cancerlectins through Hybrid Machine Learning Technology.

    PubMed

    Zhang, Jieru; Ju, Ying; Lu, Huijuan; Xuan, Ping; Zou, Quan

    2016-01-01

    Cancerlectins are cancer-related proteins that function as lectins. They have been identified through computational identification techniques, but these techniques have sometimes failed to identify proteins because of sequence diversity among the cancerlectins. Advanced machine learning identification methods, such as support vector machine and basic sequence features (n-gram), have also been used to identify cancerlectins. In this study, various protein fingerprint features and advanced classifiers, including ensemble learning techniques, were utilized to identify this group of proteins. We improved the prediction accuracy of the original feature extraction methods and classification algorithms by more than 10% on average. Our work provides a basis for the computational identification of cancerlectins and reveals the power of hybrid machine learning techniques in computational proteomics.

  17. Non-invasive subcutaneous fat reduction: a review.

    PubMed

    Kennedy, J; Verne, S; Griffith, R; Falto-Aizpurua, L; Nouri, K

    2015-09-01

    The risks, financial costs and lengthy downtime associated with surgical procedures for fat reduction have led to the development of a number of non-invasive techniques. Non-invasive body contouring now represents the fastest growing area of aesthetic medicine. There are currently four leading non-invasive techniques for reducing localized subcutaneous adipose tissue: low-level laser therapy (LLLT), cryolipolysis, radio frequency (RF) and high-intensity focused ultrasound (HIFU). To review and compare leading techniques and clinical outcomes of non-invasive subcutaneous fat reduction. The terms 'non-invasive', 'low-level laser', 'cryolipolysis', 'ultrasound' and 'radio frequency' were combined with 'lipolysis', 'fat reduction' or 'body contour' during separate searches in the PubMed database. We identified 31 studies (27 prospective clinical studies and four retrospective chart reviews) with a total of 2937 patients that had been treated with LLLT (n = 1114), cryolipolysis (n = 706), HIFU (n = 843) or RF (n = 116) or other techniques (n = 158) for fat reduction or body contouring. A majority of these patients experienced significant and satisfying results without any serious adverse effects. The studies investigating these devices have all varied in treatment regimen, body locations, follow-up times or outcome operationalization. Each technique differs in offered advantages and severity of adverse effects. However, multiple non-invasive devices are safe and effective for circumferential reduction in local fat tissue by 2 cm or more across the abdomen, hips and thighs. Results are consistent and reproducible for each device and none are associated with any serious or permanent adverse effects. © 2015 European Academy of Dermatology and Venereology.

  18. Developing a Reference of Normal Lung Sounds in Healthy Peruvian Children

    PubMed Central

    Ellington, Laura E.; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H.; Tielsch, James M.; Chavez, Miguel A.; Marin-Concha, Julio; Figueroa, Dante; West, James

    2018-01-01

    Purpose Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. Methods 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81 %) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Results Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47 % were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Conclusions Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments. PMID:24943262

  19. Developing a reference of normal lung sounds in healthy Peruvian children.

    PubMed

    Ellington, Laura E; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H; Tielsch, James M; Chavez, Miguel A; Marin-Concha, Julio; Figueroa, Dante; West, James; Checkley, William

    2014-10-01

    Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.
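
    A sketch of one spectral feature family mentioned above, lower-order Mel-frequency cepstral coefficients, assuming the librosa package is available; the synthetic signal stands in for a lung-sound recording, and the sampling rate and coefficient count are assumptions.

    ```python
    import numpy as np
    import librosa

    # Lower-order MFCC summary of an audio segment (synthetic stand-in).
    sr = 8000
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    y = (0.1 * np.random.default_rng(5).standard_normal(t.size)
         + 0.05 * np.sin(2 * np.pi * 150 * t)).astype(np.float32)

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 x frames
    low_order = mfcc[:5].mean(axis=1)                    # lower-frequency summary
    print(low_order.round(2))
    ```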

  20. Stratification of pseudoprogression and true progression of glioblastoma multiform based on longitudinal diffusion tensor imaging without segmentation

    PubMed Central

    Qian, Xiaohua; Tan, Hua; Zhang, Jian; Zhao, Weilin; Chan, Michael D.; Zhou, Xiaobo

    2016-01-01

    Purpose: Pseudoprogression (PsP) can mimic true tumor progression (TTP) on magnetic resonance imaging in patients with glioblastoma multiform (GBM). The phenotypical similarity between PsP and TTP makes it a challenging task for physicians to distinguish these entities. So far, no approved biomarkers or computer-aided diagnosis systems have been used clinically for this purpose. Methods: To address this challenge, the authors developed an objective classification system for PsP and TTP based on longitudinal diffusion tensor imaging. A novel spatio-temporal discriminative dictionary learning scheme was proposed to differentiate PsP and TTP, thereby avoiding segmentation of the region of interest. The authors constructed a novel discriminative sparse matrix with the classification-oriented dictionary learning approach by excluding the shared features of two categories, so that the pooled features captured the subtle difference between PsP and TTP. The most discriminating features were then identified from the pooled features by their feature scoring system. Finally, the authors stratified patients with GBM into PsP and TTP by a support vector machine approach. Tenfold cross-validation (CV) and the area under the receiver operating characteristic (AUC) were used to assess the robustness of the developed system. Results: The average accuracy and AUC values after ten rounds of tenfold CV were 0.867 and 0.92, respectively. The authors also assessed the effects of different methods and factors (such as data types, pooling techniques, and dimensionality reduction approaches) on the performance of their classification system which obtained the best performance. Conclusions: The proposed objective classification system without segmentation achieved a desirable and reliable performance in differentiating PsP from TTP. Thus, the developed approach is expected to advance the clinical research and diagnosis of PsP and TTP. PMID:27806598

  1. A randomized, placebo-controlled trial of repetitive spinal magnetic stimulation in lumbosacral spondylotic pain.

    PubMed

    Lo, Yew L; Fook-Chong, Stephanie; Huerto, Antonio P; George, Jane M

    2011-07-01

    Lumbar spondylosis is a degenerative disorder of the spine, whereby pain is a prominent feature that poses therapeutic challenges even after surgical intervention. There are no randomized, placebo-controlled studies utilizing repetitive spinal magnetic stimulation (SMS) in pain associated with lumbar spondylosis. In this study, we utilize SMS technique for patients with this condition in a pilot clinical trial. We randomized 20 patients into SMS treatment or placebo arms. All patients must have clinical and radiological evidence of lumbar spondylosis. Patients should present with pain in the lumbar region, localized or radiating down the lower limbs in a radicular distribution. SMS was delivered with a Medtronic R30 repetitive magnetic stimulator (Medtronic Corporation, Skovlunde, Denmark) connected to a C-B60 figure of eight coil capable of delivering a maximum output of 2 Tesla per pulse. The coil measured 90 mm in each wing and was centered over the surface landmark corresponding to the cauda equina region. The coil was placed flat over the back with the handle pointing cranially. Each patient on active treatment received 200 trains of five pulses delivered at 10 Hz, at an interval of 5 seconds between each train. "Sham" SMS was delivered with the coil angled vertically and one of the wing edges in contact with the stimulation point. All patients tolerated the procedure well and no side effects of SMS were reported. In the treatment arm, SMS had resulted in significant pain reduction immediately and at Day 4 after treatment (P < 0.05). In the placebo arm, however, no significant pain reduction was seen immediately and at Day 4 after SMS. SMS in the treatment arm had resulted in mean pain reduction of 62.3% postprocedure and 17.4% at Day 4. The placebo arm only achieved pain reduction of 6.1% postprocedure and 4.5% at Day 4. This is the first study to show that a single session of SMS resulted in significant improvement of pain associated with lumbar spondylosis in a randomized, double-blind, placebo-controlled setting. The novel findings support the potential of this technique for future studies pertaining to neuropathic pain. Wiley Periodicals, Inc.

  2. Respondent Techniques for Reduction of Emotions Limiting School Adjustment: A Quantitative Review and Methodological Critique.

    ERIC Educational Resources Information Center

    Misra, Anjali; Schloss, Patrick J.

    1989-01-01

    The critical analysis of 23 studies using respondent techniques for the reduction of excessive emotional reactions in school children focuses on research design, dependent variables, independent variables, component analysis, and demonstrations of generalization and maintenance. Results indicate widespread methodological flaws that limit the…

  3. RADON REDUCTION TECHNIQUES FOR EXISTING DETACHED HOUSES - TECHNICAL GUIDANCE (THIRD EDITION) FOR ACTIVE SOIL DEPRESSURIZATION SYSTEMS

    EPA Science Inventory

    This technical guidance document is designed to aid in the selection, design, installation and operation of indoor radon reduction techniques using soil depressurization in existing houses. Its emphasis is on active soil depressurization; i.e., on systems that use a fan to depre...

  4. Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds

    NASA Astrophysics Data System (ADS)

    Abdo, Mohammad Gamal Mohammad Mostafa

    This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by reduction errors that are upper-bounded over the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed when conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models, such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshots' variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory, which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends this theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which one can assess the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high fidelity model or with lower fidelity models; this provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux.
The developed methods will be applied to representative assembly level calculations, where the size of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.).
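
    The snapshot-projection idea can be sketched as follows: build an active subspace from randomized snapshots of a (here synthetic, low-rank) model, then check the reduction error on fresh snapshots against a tolerance, in the spirit of the error bounds discussed above. The model, sizes, and tolerance are illustrative, not the thesis's reactor physics application.

    ```python
    import numpy as np

    # Active subspace from randomized snapshots + held-out error check.
    rng = np.random.default_rng(6)
    A = rng.standard_normal((500, 8))           # hidden low-rank structure

    def model(p):                               # stand-in high-fidelity model
        return A @ p + 1e-4 * rng.standard_normal(500)

    train = np.column_stack([model(rng.standard_normal(8)) for _ in range(40)])
    U, s, _ = np.linalg.svd(train, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1
    Ur = U[:, :r]                               # active subspace basis

    # validate on snapshots not used to build the subspace
    errs = []
    for _ in range(20):
        y = model(rng.standard_normal(8))
        errs.append(np.linalg.norm(y - Ur @ (Ur.T @ y)) / np.linalg.norm(y))
    print(f"rank {r}, max relative error {max(errs):.2e}")
    ```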

  5. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    NASA Astrophysics Data System (ADS)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. This feature was extracted in the time-frequency plane by taking the moving average along the diagonal directions of the time-frequency plane. This feature captures the time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we refer to this technique as a voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database that consists of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate compared to 96.7% for MFCC using the LDC subset.
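
    A minimal sketch of the diagonal moving-average idea: compute a spectrogram and average along its diagonals, which traces joint time-frequency trajectories. The signal, window length, and averaging width are assumptions, not the paper's settings.

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    # Moving average along the diagonals of a time-frequency representation.
    rng = np.random.default_rng(7)
    fs = 8000
    x = rng.standard_normal(fs)                       # 1 s stand-in utterance
    _, _, S = spectrogram(x, fs=fs, nperseg=128)      # freq x time magnitudes

    def diagonal_moving_average(M, width=3):
        feats = []
        for off in range(-M.shape[0] + width, M.shape[1] - width + 1):
            d = np.diagonal(M, offset=off)
            if d.size >= width:
                feats.append(np.convolve(d, np.ones(width) / width, mode='valid'))
        return np.concatenate(feats)

    f = diagonal_moving_average(S)
    print(f.shape)
    ```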

  6. Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction

    NASA Astrophysics Data System (ADS)

    Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab

    2017-11-01

    Palmprint recognition systems depend on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. The fused features are then evaluated using an extreme learning machine classifier, with feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, the Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.

  7. Bayesian Fusion of Color and Texture Segmentations

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto

    2000-01-01

    In many applications one would like to use information from both color and texture features in order to segment an image. We propose a novel technique to combine "soft" segmentations computed for two or more features independently. Our algorithm merges models according to a mean entropy criterion and makes it possible to choose the appropriate number of classes for the final grouping. The technique also makes it possible to improve the quality of supervised classification based on one feature (e.g., color) by merging information from unsupervised segmentation based on another feature (e.g., texture).

  8. Mobile Healthcare for Automatic Driving Sleep-Onset Detection Using Wavelet-Based EEG and Respiration Signals

    PubMed Central

    Lee, Boon-Giin; Lee, Boon-Leng; Chung, Wan-Young

    2014-01-01

    Driving drowsiness is a major cause of traffic accidents worldwide and has drawn the attention of researchers in recent decades. This paper presents an application for in-vehicle non-intrusive mobile-device-based automatic detection of driver sleep-onset in real time. The proposed application classifies the driving mental fatigue condition by analyzing the electroencephalogram (EEG) and respiration signals of a driver in the time and frequency domains. Our concept is heavily reliant on mobile technology, particularly remote physiological monitoring using Bluetooth. Respiratory events are gathered, and eight-channel EEG readings are captured from the frontal, central, and parietal (Fpz-Cz, Pz-Oz) regions. EEGs are preprocessed with a Butterworth bandpass filter, and features are subsequently extracted from the filtered EEG signals by employing the wavelet-packet-transform (WPT) method to categorize the signals into four frequency bands: α, β, θ, and δ. A mutual information (MI) technique selects the most descriptive features for further classification. The reduction in the number of prominent features improves the sleep-onset classification speed in the support vector machine (SVM) and results in a high sleep-onset recognition rate. Test results reveal that the combined use of the EEG and respiration signals results in 98.6% recognition accuracy. Our proposed application explores the possibility of processing long-term multi-channel signals. PMID:25264954
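
    A sketch of the filtering and wavelet-packet steps described above, assuming the scipy and pywt packages; the signal is synthetic, and the filter order, band edges, wavelet, and decomposition depth are assumptions rather than the paper's exact settings.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import butter, filtfilt

    # Butterworth bandpass + wavelet packet decomposition + band energies.
    rng = np.random.default_rng(8)
    fs = 256
    x = rng.standard_normal(30 * fs)                   # 30 s of one EEG channel

    b, a = butter(4, [0.5, 40], btype='bandpass', fs=fs)
    xf = filtfilt(b, a, x)

    wp = pywt.WaveletPacket(data=xf, wavelet='db4', mode='symmetric', maxlevel=5)
    nodes = wp.get_level(5, order='freq')              # leaves ordered by frequency
    energies = np.array([np.sum(n.data**2) for n in nodes])
    print((energies / energies.sum()).round(3)[:8])    # low-band energy shares
    ```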

  9. High-performance computer aided detection system for polyp detection in CT colonography with fluid and fecal tagging

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Wang, Shijun; Kabadi, Suraj; Summers, Ronald M.

    2009-02-01

    CT colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and cancer screening. Computer-aided detection (CAD) of polyps has improved consistency and sensitivity of virtual colonoscopy interpretation and reduced interpretation burden. A CAD system typically consists of four stages: (1) image preprocessing including colon segmentation; (2) initial detection generation; (3) feature selection; and (4) detection classification. In our experience, three existing problems limit the performance of our current CAD system. First, high-density orally administered contrast agents in fecal-tagging CTC have scatter effects on neighboring tissues. The scattering manifests itself as an artificial elevation in the observed CT attenuation values of the neighboring tissues. This pseudo-enhancement phenomenon presents a problem for the application of computer-aided polyp detection, especially when polyps are submerged in the contrast agents. Second, the general kernel approach for surface curvature computation in the second stage of our CAD system could yield erroneous results for thin structures such as small (6-9 mm) polyps and for touching structures such as polyps that lie on haustral folds. These erroneous curvatures reduce the sensitivity of polyp detection. The third problem is that more than 150 features are selected from each polyp candidate in the third stage of our CAD system. These high dimensional features make it difficult to learn a good decision boundary for detection classification and reduce the accuracy of predictions. Therefore, an improved CAD system for polyp detection in CTC data is proposed by introducing three new techniques. First, a scale-based scatter correction algorithm is applied to reduce pseudo-enhancement effects in the image pre-processing stage. Second, a cubic spline interpolation method is utilized to accurately estimate curvatures for initial detection generation. Third, a new dimensionality reduction classifier, diffusion map and local linear embedding (DMLLE), is developed for classification and false positive (FP) reduction. Performance of the improved CAD system is evaluated and compared with our existing CAD system (without applying those techniques) using CT scans of 1186 patients. These scans are divided into a training set and a test set. The sensitivity of the improved CAD system increased 18% on training data at a rate of 5 FPs per patient and 15% on test data at a rate of 5 FPs per patient. Our results indicated that the improved CAD system achieved significantly better performance on medium-sized colonic adenomas with higher sensitivity and lower FP rate in CTC.

  10. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques involve large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach is proposed that guarantees closed-loop stability. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
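
    One of the SVD-based schemes discussed above, balanced truncation, can be sketched directly with SciPy's Lyapunov solver via the square-root algorithm. The random stable test system is illustrative, and a minimal realization (hence positive-definite Gramians) is assumed so that the Cholesky factorizations succeed.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    # Square-root balanced truncation of a random stable state-space model.
    rng = np.random.default_rng(9)
    n, r = 8, 3
    A = rng.standard_normal((n, n))
    A -= (np.max(np.linalg.eigvals(A).real) + 1) * np.eye(n)  # force stability
    B = rng.standard_normal((n, 1)); C = rng.standard_normal((1, n))

    Wc = solve_continuous_lyapunov(A, -B @ B.T)        # A Wc + Wc A' + B B' = 0
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)      # A' Wo + Wo A + C' C = 0

    Lc = cholesky(Wc, lower=True); Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                          # Hankel singular values s
    T = Lc @ Vt[:r].T / np.sqrt(s[:r])                 # balancing transformation
    Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lo.T

    Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T             # r-state reduced model
    print(s.round(3))                                  # small tail => safe to cut
    ```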

  11. Reduction of Helicopter Blade-Vortex Interaction Noise by Active Rotor Control Technology

    NASA Technical Reports Server (NTRS)

    Yu, Yung H.; Gmelin, Bernd; Splettstoesser, Wolf; Brooks, Thomas F.; Philippe, Jean J.; Prieur, Jean

    1997-01-01

    Helicopter blade-vortex interaction noise is one of the most severe noise sources and is very important both in community annoyance and military detection. Research over the decades has substantially improved basic physical understanding of the mechanisms generating rotor blade-vortex interaction noise and also of controlling techniques, particularly using active rotor control technology. This paper reviews active rotor control techniques currently available for rotor blade-vortex interaction noise reduction, including higher harmonic pitch control, individual blade control, and on-blade control technologies. Basic physical mechanisms of each active control technique are reviewed in terms of the noise reduction mechanism and the controlling aerodynamic or structural parameters of a blade. Active rotor control techniques using smart structures/materials are also discussed, including distributed smart actuators to induce local torsional or flapping deformations. Published by Elsevier Science Ltd.

  12. Development of a quantitative intracranial vascular features extraction tool on 3D MRA using semiautomated open-curve active contour vessel tracing.

    PubMed

    Chen, Li; Mossa-Basha, Mahmud; Balu, Niranjan; Canton, Gador; Sun, Jie; Pimentel, Kristi; Hatsukami, Thomas S; Hwang, Jenq-Neng; Yuan, Chun

    2018-06-01

    To develop a quantitative intracranial artery measurement technique to extract comprehensive artery features from time-of-flight MR angiography (MRA). By semiautomatically tracing arteries based on an open-curve active contour model in a graphical user interface, 12 basic morphometric features and 16 basic intensity features for each artery were identified. Arteries were then classified as one of 24 types using prediction from a probability model. Based on the anatomical structures, features were integrated within 34 vascular groups for regional features of vascular trees. Eight 3D MRA acquisitions with intracranial atherosclerosis were assessed to validate this technique. Arterial tracings were validated by an experienced neuroradiologist who checked agreement at bifurcation and stenosis locations. This technique achieved 94% sensitivity and 85% positive predictive values (PPV) for bifurcations, and 85% sensitivity and PPV for stenosis. Up to 1,456 features, such as length, volume, and averaged signal intensity for each artery, as well as vascular group in each of the MRA images, could be extracted to comprehensively reflect characteristics, distribution, and connectivity of arteries. Length for the M1 segment of the middle cerebral artery extracted by this technique was compared with reviewer-measured results, and the intraclass correlation coefficient was 0.97. A semiautomated quantitative method to trace, label, and measure intracranial arteries from 3D-MRA was developed and validated. This technique can be used to facilitate quantitative intracranial vascular research, such as studying cerebrovascular adaptation to aging and disease conditions. Magn Reson Med 79:3229-3238, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. [Vertical reduction mammaplasty for gigantomastia with massive fibroadenomatosis: a case report].

    PubMed

    Schmid, N; De Greef, C; Calteux, N; Duhem, C; Faverly, D

    2006-12-01

    Vertical reduction mammaplasty is one of the most debated 'short-scar' breast reduction techniques. Advantages and drawbacks of the technique are discussed; most authors do not accept it as the technique of choice for high glandular resection weights. In our case report we perform it for a resection weight of up to two kilograms with an areolar transposition distance of more than ten centimetres, showing that it is reasonable to use it for gigantomastia. The massive fibroadenomatosis was observed following immunosuppressive treatment for kidney transplantation. Cyclosporine intake, even sporadic, is at the origin of the growth of these multiple, bilateral and large fibroadenomas. Drug-induced cytokines stimulate their development.

  14. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    PubMed

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

    To assess the feasibility of lung cancer diagnosis using the fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM is a new medical imaging technique whose interest for diagnosis has yet to be established. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants, denoted as local quinary patterns (LQP). We show that scattering features yielded better recognition performance than classical features like LBP and their LQP variants for the FCFM image classification problems. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem and also performs well when used in conjunction with other features for other classical medical imaging classification problems. Copyright © 2014 Elsevier B.V. All rights reserved.
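
    The LBP baseline compared above can be sketched with scikit-image: compute uniform local binary patterns and summarize them as a normalized histogram per image. The image is synthetic, and the neighbourhood parameters P and R are assumptions.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    # Uniform LBP histogram as a per-image feature vector.
    rng = np.random.default_rng(10)
    img = (rng.random((128, 128)) * 255).astype(np.uint8)  # stand-in FCFM image

    P, R = 8, 1
    lbp = local_binary_pattern(img, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    print(hist.round(3))                     # (P+2)-D feature vector per image
    ```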

  15. Identification of appropriate patients for cardiometabolic risk management.

    PubMed

    Peters, Anne L

    2007-01-01

    Patients at increased risk for cardiovascular disease have a wide array of clinical features that should alert practitioners to the need for risk reduction. Some, but not all, of these features relate to insulin resistance. Multiple approaches exist for diagnosing and defining this risk, including the traditional Framingham risk assessment, various definitions of the metabolic syndrome, and assessment of risk factors not commonly included in the standard criteria. This article reviews the many clinical findings that should alert healthcare providers to the need for aggressive cardiovascular risk reduction.

  16. Combining active learning and semi-supervised learning techniques to extract protein interaction sentences.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2011-11-24

    Protein-protein interaction (PPI) extraction has been a focal point of much biomedical research and of database curation tools. Both active learning (AL) and semi-supervised SVMs have recently been applied to extract PPIs automatically. In this paper, we explore combining AL with semi-supervised learning (SSL) to improve the performance of the PPI task. We propose a novel PPI extraction technique called PPISpotter that combines deterministic-annealing-based SSL with an AL technique to extract protein-protein interactions. In addition, we extract a comprehensive set of features from MEDLINE records by natural language processing (NLP) techniques, which further improves the SVM classifiers. In our feature selection technique, syntactic, semantic, and lexical properties of text are incorporated into feature selection, which boosts the system performance significantly. By conducting experiments with three different PPI corpora, we show that PPISpotter is superior to the other techniques incorporated into semi-supervised SVMs, such as random sampling, clustering, and transductive SVMs, in terms of precision, recall, and F-measure. Our system is a novel, state-of-the-art technique for efficiently extracting protein-protein interaction pairs.
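
    The general AL-plus-SSL idea can be sketched generically. The following is a simplified illustration, not the PPISpotter system: uncertainty sampling sends low-margin sentences to an annotator (the oracle_label function is hypothetical), while high-confidence predictions are self-labeled; it assumes the two selected index sets are disjoint.

```python
# Generic active learning + self-training loop around a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

def al_ssl_loop(X_lab, y_lab, X_unlab, rounds=5, query=10):
    for _ in range(rounds):
        clf = LinearSVC().fit(X_lab, y_lab)
        margin = np.abs(clf.decision_function(X_unlab))
        order = np.argsort(margin)
        ask = order[:query]            # least certain -> human annotation
        keep = order[-query:]          # most certain -> self-labeled
        y_ask = oracle_label(X_unlab[ask])   # hypothetical human labels
        y_keep = clf.predict(X_unlab[keep])
        take = np.concatenate([ask, keep])
        X_lab = np.vstack([X_lab, X_unlab[take]])
        y_lab = np.concatenate([y_lab, y_ask, y_keep])
        X_unlab = np.delete(X_unlab, take, axis=0)
    return LinearSVC().fit(X_lab, y_lab)
```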

  17. Inter-proximal enamel reduction in contemporary orthodontics.

    PubMed

    Pindoria, J; Fleming, P S; Sharma, P K

    2016-12-16

    Inter-proximal enamel reduction has gained increasing prominence in recent years, being advocated to provide space for orthodontic alignment, to refine contact points and to potentially improve long-term stability. An array of techniques and products is available, ranging from hand-held abrasive strips to handpiece-mounted burs and discs. The indications for inter-proximal enamel reduction and the importance of formal space analysis, together with the various techniques and armamentarium which may be used to perform it safely in both the labial and buccal segments, are outlined.

  18. Score-level fusion of two-dimensional and three-dimensional palmprint for personal recognition systems

    NASA Astrophysics Data System (ADS)

    Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab

    2017-01-01

    Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints for personal recognition systems are merged, introducing a local image descriptor for 2-D palmprint-based recognition systems, named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images, obtained by applying BSIF descriptors of different sizes with length 12, are concatenated into a single large feature vector. A 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied to reconstruct illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from SQI images, Gabor wavelets are defined and used. Because dimensionality reduction methods have proven effective in biometric systems, a principal component analysis (PCA) + linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. Then, a comparison was made between the proposed algorithm and other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI+Gabor wavelets+PCA+LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
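
    A minimal sketch of the projection-and-matching stage follows, assuming scikit-learn and hypothetical variable names; plain cosine distance in the LDA subspace is used here as a simplification of the cosine Mahalanobis measure.

```python
# PCA + LDA projection of enrolled feature vectors, then cosine matching.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

pca = PCA(n_components=200)              # illustrative component count
lda = LinearDiscriminantAnalysis()
# X_train: (n_samples, n_features) B-BSIF vectors; y_train: subject ids
# Z_gallery = lda.fit_transform(pca.fit_transform(X_train), y_train)

def cosine_match(Z_gallery, ids, z_probe):
    """Return the identity whose template is closest in cosine distance."""
    sim = Z_gallery @ z_probe / (
        np.linalg.norm(Z_gallery, axis=1) * np.linalg.norm(z_probe) + 1e-12)
    return ids[int(np.argmax(sim))]
```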

  19. A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.

    PubMed

    Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed

    2017-01-01

    Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. A feature matrix, termed the EEG data matrix, was constructed from a time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis. Because the resultant EEG data matrix had high dimensionality, dimension reduction was performed with a rank-based feature selection method according to a receiver operating characteristic (ROC) criterion. The most significant features were thereby identified and further utilized during the training and testing of a classification model, a logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as STFT and EMD, the WT analysis showed the highest classification accuracy (accuracy = 87.5%, sensitivity = 95%, specificity = 80%). In conclusion, significant wavelet coefficients extracted from frontal and temporal pretreatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
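
    A compact sketch of the pipeline described follows, assuming the PyWavelets and scikit-learn packages; the wavelet choice, the log-energy summary per sub-band, and the feature count are illustrative assumptions, not the authors' exact feature definition.

```python
# Wavelet features per EEG channel, ROC-ranked selection, LR classifier.
import numpy as np
import pywt
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def wavelet_features(sig, wavelet="db4", level=5):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    # Summarize each sub-band (approximation + details) by its log-energy.
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def select_by_roc(X, y, k=10):
    # X: (n_subjects, n_features); y: binary responder labels (hypothetical)
    auc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    score = np.abs(auc - 0.5)           # distance from chance level
    return np.argsort(score)[::-1][:k]  # indices of the top-k features

# idx = select_by_roc(X, y)
# acc = cross_val_score(LogisticRegression(), X[:, idx], y, cv=10).mean()
```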

  20. High purity, low dislocation GaAs single crystals

    NASA Technical Reports Server (NTRS)

    Chen, R. T.; Holmes, D. E.; Kirkpatrick, C. G.

    1983-01-01

    Liquid encapsulated Czochralski crystal growth techniques for producing undoped, high resistivity, low dislocation material suitable for device applications are described. Technique development resulted in a reduction of dislocation densities in 3-inch GaAs crystals. Control over the melt stoichiometry was determined to be of critical importance for the reduction of twinning and polycrystallinity during growth.

  1. TESTING OF INDOOR RADON REDUCTION TECHNIQUES IN CENTRAL OHIO HOUSES: PHASE 2 (WINTER 1988-1989)

    EPA Science Inventory

    The report gives results of tests of developmental indoor radon reduction techniques in nine slab-on-grade and four crawl-space houses near Dayton, Ohio. The slab-on-grade tests indicated that, when there is a good layer of aggregate under the slab, the sub-slab ventilation (SSV) ...

  2. Acoustic analysis of aft noise reduction techniques measured on a subsonic tip speed 50.8 cm (twenty inch) diameter fan. [quiet engine program

    NASA Technical Reports Server (NTRS)

    Stimpert, D. L.; Clemons, A.

    1977-01-01

    Sound data obtained during tests of a 50.8 cm diameter, subsonic tip speed, low pressure ratio fan were analyzed. The test matrix was divided into two major investigations: (1) source noise reduction techniques; and (2) aft duct noise reduction with acoustic treatment. The source noise reduction techniques investigated include minimizing second harmonic noise by varying the vane/blade ratio, varying spacing, and lowering the Mach number through the vane row to lower fan broadband noise. Treatment in the aft duct was investigated, including flow noise effects, faceplate porosity, rotor OGV treatment, slant cell treatment, and splitter simulation with variable depth treatment on the outer wall and constant thickness treatment on the inner wall. Variable boundary conditions, such as variation in treatment panel thickness and orientation, and mixed porosity combined with variable thickness, were examined. Significant results are reported.

  3. Evaluation of protective gloves and working techniques for reducing hand-arm vibration exposure in the workplace.

    PubMed

    Milosevic, Matija; McConville, Kristiina M Valter

    2012-01-01

    Operation of handheld power tools results in exposure to hand-arm vibrations, which over time lead to numerous health complications. The objective of this study was to evaluate protective equipment and working techniques for the reduction of vibration exposure. Vibration transmissions were recorded during different work techniques: with one- and two-handed grip, while wearing protective gloves (standard, air and anti-vibration gloves) and while holding a foam-covered tool handle. The effect was examined by analyzing the reduction of transmitted vibrations at the wrist. The vibration transmission was recorded with a portable device using a triaxial accelerometer. The results suggest large and significant reductions of vibration with appropriate safety equipment. Reductions of 85.6% were achieved when anti-vibration gloves were used. Our results indicated that transmitted vibrations were affected by several factors and could be measured and significantly reduced.

  4. Solitaire salvage: a stent retriever-assisted catheter reduction technical report.

    PubMed

    Parry, Phillip Vaughan; Morales, Alejandro; Jankowitz, Brian Thomas

    2016-07-01

    The endovascular management of giant aneurysms often proves difficult with standard techniques. Obtaining distal access to allow catheter reduction is often key to approaching these aneurysms, but several anatomic challenges can make this task unsafe or infeasible. Obtaining distal anchor points and performing catheter reduction maneuvers using adjunctive devices is not a novel concept; however, using the Solitaire to do so may have some distinct advantages compared with previously described methods. Here we describe our novel Solitaire salvage technique, which allowed successful reduction of a looped catheter within an aneurysm in three cases. While this technique is expensive and therefore best performed after standard maneuvers have failed, in our experience it was effective, safe, and more efficient than other methods. Published by the BMJ Publishing Group Limited.

  5. Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad

    2017-04-01

    Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges, including feature redundancy, unbalanced data, and small sample sizes, have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving the predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of the prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were the optimal predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, the Synthetic Minority Over-sampling Technique (SMOTE) was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
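
    The reported best configuration maps naturally onto an imbalanced-learn pipeline. A minimal sketch follows, with illustrative component and tree counts; placing SMOTE inside the pipeline ensures oversampling happens within each cross-validation fold only.

```python
# PCA feature reduction + SMOTE rebalancing + Random Forest, cross-validated.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ("pca", PCA(n_components=10)),
    ("smote", SMOTE(random_state=0)),   # oversample training folds only
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])
# X: radiomic feature matrix, y: binary endpoint (e.g. recurrence) - hypothetical
# print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```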

  6. Marginal Fisher analysis and its variants for human gait recognition and content-based image retrieval.

    PubMed

    Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang

    2007-11-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
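
    For readers unfamiliar with MFA, a compact numpy sketch of the basic vector-based algorithm follows: within-class compactness is encoded by a same-class k1-nearest-neighbor graph, marginal separability by a between-class k2-nearest-neighbor graph, and the projection solves a generalized eigenproblem. Parameter values are illustrative, and the matrix-based and relevance-feedback extensions are omitted.

```python
# Basic marginal Fisher analysis (MFA) as a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def mfa(X, y, k1=5, k2=10, dim=2, reg=1e-6):
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    Ww = np.zeros((n, n))   # intrinsic (within-class) graph
    Wb = np.zeros((n, n))   # penalty (between-class marginal) graph
    for i in range(n):
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D[i, same])][1:k1 + 1]:  # skip self
            Ww[i, j] = Ww[j, i] = 1.0
        for j in diff[np.argsort(D[i, diff])][:k2]:
            Wb[i, j] = Wb[j, i] = 1.0
    Lw = np.diag(Ww.sum(1)) - Ww            # graph Laplacians
    Lb = np.diag(Wb.sum(1)) - Wb
    Sw = X.T @ Lw @ X + reg * np.eye(X.shape[1])
    Sb = X.T @ Lb @ X
    # Maximize marginal separability relative to within-class compactness.
    vals, vecs = eigh(Sb, Sw)
    return vecs[:, np.argsort(vals)[::-1][:dim]]  # projection matrix

# Z = X @ mfa(X, y)   # embedded coordinates
```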

  7. Deposition of thin Si and Ge films by ballistic hot electron reduction in a solution-dripping mode and its application to the growth of thin SiGe films

    NASA Astrophysics Data System (ADS)

    Suda, Ryutaro; Yagi, Mamiko; Kojima, Akira; Mentek, Romain; Mori, Nobuya; Shirakashi, Jun-ichi; Koshida, Nobuyoshi

    2015-04-01

    To enhance the usefulness of ballistic hot electron injection into solutions for depositing thin group-IV films, a dripping scheme is proposed. A very small amount of SiCl4 or GeCl4 solution was dripped onto the surface of a nanocrystalline Si (nc-Si) electron emitter, and the emitter was then driven without using any counter electrodes. It is shown that thin Si and Ge films are deposited onto the emitting surface. Spectroscopic surface and compositional analyses showed no extrinsic carbon contamination in the deposited thin films, in contrast to the results of a previous study using the dipping scheme. The availability of this technique for depositing thin SiGe films is also demonstrated using a mixed SiCl4 + GeCl4 solution. Ballistic hot electrons injected into solutions with appropriate kinetic energies promote preferential reduction of target ions with no by-products, leading to nuclei formation for thin film growth. Specific advantageous features of this clean, room-temperature, and power-effective process are discussed in comparison with conventional dry and wet processes.

  8. 3,4-Ethylenedioxythiophene functionalized graphene with palladium nanoparticles for enhanced electrocatalytic oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Choe, Ju Eun; Ahmed, Mohammad Shamsuddin; Jeon, Seungwon

    2015-05-01

    Poly(3,4-ethylenedioxythiophene) functionalized graphene with palladium nanoparticles (denoted as Pd/PEDOT/rGO) has been synthesized for the electrochemical oxygen reduction reaction (ORR) in alkaline solution. The structural features of the catalyst are characterized by scanning electron microscopy, transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy and X-ray photoelectron spectroscopy. The TEM images show well-dispersed PdNPs on the PEDOT/rGO film. The ORR activity of Pd/PEDOT/rGO has been investigated via cyclic voltammetry (CV), rotating disk electrode (RDE) and rotating ring disk electrode (RRDE) techniques in 0.1 M KOH aqueous solution. Comparative CV analysis suggests a general mechanism of intermolecular charge transfer between the graphene sheet and the PdNPs via PEDOT, which leads to better PdNP dispersion and consequently superior ORR kinetics. The results from the ORR measurements show that Pd/PEDOT/rGO has remarkable electrocatalytic activity and stability compared to Pd/rGO and state-of-the-art Pt/C. The Koutecky-Levich and Tafel analyses suggest that the main ORR pathway on Pd/PEDOT/rGO is a direct four-electron transfer process with a faster transfer kinetic rate.
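
    For reference, the Koutecky-Levich analysis mentioned above rests on the standard textbook relation between measured current density and electrode rotation rate (generic notation assumed here, not reproduced from the paper):

```latex
% Koutecky-Levich relation used in RDE analysis: the measured current
% density j separates into kinetic and diffusion-limited contributions.
\[
  \frac{1}{j} = \frac{1}{j_k} + \frac{1}{B\,\omega^{1/2}},
  \qquad
  B = 0.62\, n F C_0 D_0^{2/3} \nu^{-1/6},
\]
% where n is the number of electrons transferred per O2 molecule, F the
% Faraday constant, C_0 and D_0 the bulk concentration and diffusivity
% of O2, and \nu the kinematic viscosity. A plot of 1/j versus
% \omega^{-1/2} yields n from the slope (n ~ 4 for the direct pathway).
```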

  9. Isolation and Characterization of Formate/Ni(cyclam)^{2+} Complexes with Cryogenic Ion Vibrational Predissociation

    NASA Astrophysics Data System (ADS)

    Wolk, Arron B.; Fournier, Joseph A.; Wolke, Conrad T.; Johnson, Mark A.

    2013-06-01

    Transition metal-based organometallic catalysts are a promising means of converting CO_{2} to transportable fuels. Ni(cyclam)^{2+} (cyclam = 1,4,8,11-tetraazacyclotetradecane), a Ni^{II} complex ligated by four nitrogen centers, has shown promise as a catalyst selective for CO_{2} reduction in aqueous solutions. The cyclam ligand has four NH hydrogen bond donors that can adopt five conformations, each offering distinct binding motifs for coordination of CO_{2} close to the metal center. To probe the ligand conformation and the role of hydrogen bonding in adduct binding, we extract Ni(cyclam)^{2+} complexes with the formate anion and some of its analogs from solution using electrospray ionization, and characterize their structures using cryogenic ion vibrational predissociation spectroscopy. Using the signature vibrational features of the embedded carboxylate anion and the NH groups as reporters, we compare the binding motifs of oxalate, benzoate, and formate anions to the Ni(cyclam)^{2+} framework. Finally, we comment on possible routes to generate the singly charged Ni(cyclam)^{+} complex, a key intermediate that has been invoked in the catalytic CO_{2} reduction cycle, but has never been isolated through ion processing techniques.

  10. Noise Reduction Based on an Fe-Rh Interlayer in Exchange-Coupled Heat-Assisted Recording Media

    NASA Astrophysics Data System (ADS)

    Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter

    2017-11-01

    High storage density and high data rate are two of the most desired properties of modern hard disk drives. Heat-assisted magnetic recording (HAMR) is believed to achieve both. Recording media consisting of exchange-coupled grains with a high and a low TC part were shown to have low dc noise, but increased ac noise, compared to hard magnetic single-phase grains like FePt. We extensively investigate the influence of an Fe-Rh interlayer on the magnetic noise in exchange-coupled grains. We find an optimal grain design that reduces the jitter in the down-track direction by up to 30% and in the off-track direction by up to 50%, depending on the head velocity, compared to the same structures without FeRh. Furthermore, the mechanisms causing this jitter reduction are demonstrated. Additionally, we show that, for short heat pulses and low write temperatures, the switching-time distribution of the analyzed grain structure is reduced by a factor of 4 compared to the same structure without an Fe-Rh layer. This feature could be of interest for HAMR with a pulsed laser spot and could encourage further discussion of this HAMR technique.

  11. Automatic Target Recognition: Statistical Feature Selection of Non-Gaussian Distributed Target Classes

    DTIC Science & Technology

    2011-06-01

    implementing, and evaluating many feature selection algorithms. Mucciardi and Gose compared seven different techniques for choosing subsets of pattern... [1] A. Mucciardi and E. Gose, "A comparison of seven techniques for...

  12. Insightful monitoring of natural flood risk management features using a low-cost and participatory approach

    NASA Astrophysics Data System (ADS)

    Starkey, Eleanor; Barnes, Mhari; Quinn, Paul; Large, Andy

    2016-04-01

    Pressures associated with flooding and climate change have increased significantly over recent years. Natural Flood Risk Management (NFRM) is now seen as a more appropriate and favourable approach in some locations. At the same time, catchment managers are encouraged to adopt a more integrated, evidence-based and bottom-up approach, which includes engaging with local communities. Although NFRM features are being installed more readily, there is still limited evidence of their ability to reduce flood risk and offer multiple benefits. In particular, local communities and land owners are still uncertain about what the features entail and how they will perform, which is a major barrier to widespread uptake. Traditional hydrometric monitoring techniques are well established, but they still struggle to capture NFRM performance spatially and temporally in a visual and meaningful way for those directly affected on the ground. Two UK-based case studies are presented here, in which unique NFRM features have been carefully designed and installed in rural headwater catchments: a 1 km2 sub-catchment of the Haltwhistle Burn (northern England) and a 2 km2 sub-catchment of Eddleston Water (southern Scotland). Both pilot sites are subject to prolonged flooding in winter and flash flooding in summer, which exacerbates sediment, debris and water quality issues downstream. Examples of NFRM features include ponds, woody debris and a log feature inspired by the children's game 'Kerplunk'. They have been tested and monitored over the 2015-2016 winter storms using low-cost techniques by both researchers and members of the community ('citizen scientists'). Results show that monitoring techniques such as consumer-specification time-lapse cameras, photographs, videos and 'kite-cams' are suitable for long-term, low-cost monitoring of a variety of NFRM features. These techniques have been compared against traditional hydrometric monitoring equipment, which is expensive, requires specialist skills, and produces outputs that are complicated to the untrained eye. The alternative methods tested are visually more meaningful, can be interpreted by all stakeholders, and can easily be used by citizen scientists, land owners or flood groups. Such techniques therefore offer a before, during and after NFRM monitoring solution that can be implemented more realistically and readily, and that supports engagement and the subsequent uptake and maintenance of NFRM features at a local level. Although the monitoring techniques presented are relatively simple, they are regarded as essential given that many schemes are not monitored at all.

  13. Multi-quadrant biopsy technique improves diagnostic ability in large heterogeneous renal masses. Abel EJ, Heckman JE, Hinshaw L, Best S, Lubner M, Jarrard DF, Downs TM, Nakada SY, Lee FT Jr, Huang W, Ziemlewicz T. J Urol. 2015 Oct;194(4):886-91. [Epub 2015 Mar 30]. doi: 10.1016/j.juro.2015.03.106.

    PubMed

    Jay, Raman; Heckman, J E; Hinshaw, L; Best, S; Lubner, M; Jarrard, D F; Downs, T M; Nakada, S Y; Lee, F T; Huang, W; Ziemlewicz, T

    2017-03-01

    Percutaneous biopsy obtained from a single location is prone to sampling error in large heterogeneous renal masses, leading to nondiagnostic results or failure to detect poor prognostic features. We evaluated the accuracy of percutaneous biopsy for large renal masses using a modified multi-quadrant technique vs. a standard biopsy technique. Clinical and pathological data for all patients with cT2 or greater renal masses who underwent percutaneous biopsy from 2009 to 2014 were reviewed. The multi-quadrant technique was defined as multiple core biopsies from at least 4 separate solid enhancing areas in the tumor. The incidence of nondiagnostic findings, sarcomatoid features and procedural complications was recorded, and concordance between biopsy specimens and nephrectomy pathology was compared. A total of 122 biopsies were performed for 117 tumors in 116 patients (46 using the standard biopsy technique and 76 using the multi-quadrant technique). Median tumor size was 10 cm (IQR: 8-12). Biopsy was nondiagnostic in 5 of 46 (10.9%) standard and 0 of 76 (0%) multi-quadrant biopsies (P = 0.007). Renal cell carcinoma was identified in 96 of 117 (82.0%) tumors and non-renal cell carcinoma tumors were identified in 21 (18.0%). One complication occurred using the standard biopsy technique and no complications were reported using the multi-quadrant technique. Sarcomatoid features were present in 23 of 96 (23.9%) large renal cell carcinomas studied. Sensitivity for identifying sarcomatoid features was higher using the multi-quadrant technique compared to the standard biopsy technique, at 13 of 15 (86.7%) vs. 2 of 8 (25.0%) (P = 0.0062). The multi-quadrant percutaneous biopsy technique increases the ability to identify aggressive pathological features in large renal tumors and decreases nondiagnostic biopsy rates. Copyright © 2017. Published by Elsevier Inc.

  14. Longitudinal MR cortical thinning of individuals and its correlation with PET metabolic reduction: a measurement consistency and correctness studies

    NASA Astrophysics Data System (ADS)

    Lin, Zhongmin S.; Avinash, Gopal; McMillan, Kathryn; Yan, Litao; Minoshima, Satoshi

    2014-03-01

    Cortical thinning and metabolic reduction are possible imaging biomarkers for Alzheimer's disease (AD) diagnosis and monitoring. Many techniques have been developed for cortical measurement and are widely used in clinical statistical studies. However, the measurement consistency for individuals, an essential requirement for a clinically useful technique, requires further investigation. Here we leverage our previously developed BSIM technique [1] to measure cortical thickness and thinning, and use it with longitudinal MRI from ADNI to investigate measurement consistency and spatial resolution. Ten normal, 10 MCI, and 10 AD subjects in their 70s were selected for the study. Consistent cortical thinning patterns were observed in all baseline and follow-up images. Rapid cortical thinning was shown in some MCI and AD cases. To evaluate the correctness of the cortical measurement, we compared longitudinal cortical thinning with clinical diagnosis and with longitudinal PET metabolic reduction measured using the 3D-SSP technique [2] for the same person. Longitudinal MR cortical thinning and the corresponding PET metabolic reduction showed a high level of pattern similarity, revealing correlations worthy of further study. Severe cortical thinning that might be linked to disease conversion from MCI to AD was observed in two cases. In summary, our results suggest that consistent cortical measurements using our technique may provide a means for clinical diagnosis and monitoring at the individual patient level, and that MR cortical thinning measurement can complement PET metabolic reduction measurement.

  15. Reduction mammoplasty operative techniques for improved outcomes in the treatment of gigantomastia.

    PubMed

    Degeorge, Brent R; Colen, David L; Mericli, Alexander F; Drake, David B

    2013-01-01

    Gigantomastia, or excessive breast hypertrophy, which is broadly defined as macromastia requiring a surgical reduction of more than 1500 g of breast tissue per breast, poses a unique problem to the reconstructive surgeon. Various procedures have been described for reduction mammoplasty with specific skin incisions, patterns of breast parenchymal resection, and blood supply to the nipple-areolar complex; however, not all of these techniques can be directly applied in the setting of gigantomastia. We outline a simplified method for preoperative evaluation and operative technique, which has been optimized for the management of gigantomastia. A retrospective chart review of patients who have undergone reduction mammoplasty from 2006 to 2011 by a single surgeon at the University of Virginia was performed. Patients were subdivided based on weight of breast tissue resection into 2 groups: macromastia (<1500 g resection per breast) and gigantomastia (>1500 g resection per breast). Endpoints including patient demographics, operative techniques, and complication rates were recorded. The mean resection weights in the macromastia and gigantomastia groups, respectively, were 681 g ± 283 g and 2554 g ± 421 g. There were no differences in major complications between the 2 groups. The rate of free nipple graft utilization was not significantly different between the 2 groups. Our surgical approach to gigantomastia has advantages when applied to extremely large-volume breast reduction and provides both esthetic and reproducible results. The preoperative assessment and operative techniques described herein have been adapted to the management of gigantomastia to reduce the rates of surgical complications.

  16. Model reduction methods for control design

    NASA Technical Reports Server (NTRS)

    Dunipace, K. R.

    1988-01-01

    Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.

  17. Dust ion acoustic freak waves in a plasma with two temperature electrons featuring Tsallis distribution

    NASA Astrophysics Data System (ADS)

    Chahal, Balwinder Singh; Singh, Manpreet; Shalini; Saini, N. S.

    2018-02-01

    We present an investigation of nonlinear dust ion acoustic wave modulation in a plasma composed of charged dust grains and two-temperature (cold and hot) nonextensive electrons and ions. For this purpose, the multiscale reductive perturbation technique is used to obtain a nonlinear Schrödinger equation. The critical wave number, which indicates where the modulational instability sets in, has been determined precisely for various regimes. The influence of the nonextensivity of the plasma background on the growth rate of the modulational instability is discussed. Modulated wavepackets may exist in the form of either bright- or dark-type envelope solitons. The formation of rogue waves from bright envelope solitons is also discussed. The investigation indicates that the structural characteristics of these envelope excitations (width, amplitude) are significantly affected by nonextensivity, dust concentration, the cold electron-ion density ratio and the temperature ratio.
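
    For reference, the reductive perturbation procedure mentioned above leads to an equation of the standard form (generic notation assumed here):

```latex
% Generic nonlinear Schrodinger (NLS) equation obtained from reductive
% perturbation theory, with the usual modulational-instability criterion.
\[
  i\,\frac{\partial \psi}{\partial \tau}
  + P\,\frac{\partial^2 \psi}{\partial \xi^2}
  + Q\,|\psi|^2 \psi = 0 ,
\]
% where \psi is the slowly varying envelope of the carrier wave and the
% dispersion (P) and nonlinearity (Q) coefficients depend on the plasma
% parameters. The wavepacket is modulationally unstable when PQ > 0
% (bright envelope solitons, rogue waves) and stable when PQ < 0 (dark
% envelope solitons); the sign change defines the critical wave number.
```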

  18. Semantic wireless localization of WiFi terminals in smart buildings

    NASA Astrophysics Data System (ADS)

    Ahmadi, H.; Polo, A.; Moriyama, T.; Salucci, M.; Viani, F.

    2016-06-01

    The wireless localization of mobile terminals in indoor scenarios by means of a semantic interpretation of the environment is addressed in this work. A training-less approach based on the real-time calibration of a simple path loss model is proposed, which combines (i) the received signal strength information measured by the wireless terminal and (ii) the topological features of the localization domain. A customized evolutionary optimization technique has been designed to estimate the optimal target position that fits the complex wireless indoor propagation as well as the semantic target-environment relation. The proposed approach is experimentally validated in a real building area where the available WiFi network is opportunistically exploited for data collection. The presented results point out a reduction of the localization error obtained by introducing a very simple semantic interpretation of the considered scenario.
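
    The calibration step can be illustrated with a small sketch: fit a log-distance path loss model and the target position jointly to the observed RSS values. The access point coordinates and RSS readings below are made up, and scipy's generic Nelder-Mead minimizer stands in for the authors' customized evolutionary technique.

```python
# Joint fit of target position and path loss parameters to measured RSS.
import numpy as np
from scipy.optimize import minimize

aps = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0], [0.0, 15.0]])  # AP coords (m)
rss = np.array([-48.0, -67.0, -60.0, -55.0])                          # measured RSS (dBm)

def model(params):
    x, y, p0, n = params           # target position + path loss parameters
    d = np.linalg.norm(aps - [x, y], axis=1) + 1e-3
    return p0 - 10.0 * n * np.log10(d)   # log-distance path loss model

def cost(params):
    return np.sum((model(params) - rss) ** 2)

res = minimize(cost, x0=[10.0, 5.0, -40.0, 2.5], method="Nelder-Mead")
print("estimated position:", res.x[:2])
```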

  19. Optical Correlation

    NASA Technical Reports Server (NTRS)

    Cotariu, Steven S.

    1991-01-01

    Pattern recognition may supplement or replace certain navigational aids on spacecraft in docking or landing activities. The need to correctly identify terrain features remains critical in preparation of autonomous planetary landing. One technique that may solve this problem is optical correlation. Correlation has been successfully demonstrated under ideal conditions; however, noise significantly affects the ability of the correlator to accurately identify input signals. Optical correlation in the presence of noise must be successfully demonstrated before this technology can be incorporated into system design. An optical correlator is designed and constructed using a modified 2f configuration. Liquid crystal televisions (LCTV) are used as the spatial light modulators (SLM) for both the input and filter devices. The filter LCTV is characterized and an operating curve is developed. Determination of this operating curve is critical for reduction of input noise. Correlation of live input with a programmable filter is demonstrated.

  20. Numerical simulations of wave propagation in long bars with application to Kolsky bar testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo

    2014-11-01

    Material testing using the Kolsky bar, or split Hopkinson bar, technique has proven instrumental to conduct measurements of material behavior at strain rates in the order of 10^3 s^-1. Test design and data reduction, however, remain empirical endeavors based on the experimentalist's experience. Issues such as wave propagation across discontinuities, the effect of the deformation of the bar surfaces in contact with the specimen, the effect of geometric features in tensile specimens (dog-bone shape), wave dispersion in the bars and other particulars are generally treated using simplified models. The work presented here was conducted in Q3 and Q4 of FY14. The objective was to demonstrate the feasibility of numerical simulations of Kolsky bar tests, which was done successfully.

  1. Intrinsic modulation of pulse-coupled integrate-and-fire neurons

    NASA Astrophysics Data System (ADS)

    Coombes, S.; Lord, G. J.

    1997-11-01

    Intrinsic neuromodulation is observed in sensory and neuromuscular circuits and in biological central pattern generators. We model a simple neuronal circuit with a system of two pulse-coupled integrate-and-fire neurons and explore the parameter regimes for periodic firing behavior. The inclusion of biologically realistic features shows that the speed and onset of neuronal response plays a crucial role in determining the firing phase for periodic rhythms. We explore the neurophysiological function of distributed delays arising from both the synaptic transmission process and dendritic structure as well as discrete delays associated with axonal communication delays. Bifurcation and stability diagrams are constructed with a mixture of simple analysis, numerical continuation and the Kuramoto phase-reduction technique. Moreover, we show that, for asynchronous behavior, the strength of electrical synapses can control the firing rate of the system.
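
    A small numerical sketch of the kind of model analyzed above, two pulse-coupled leaky integrate-and-fire neurons with a discrete axonal delay, is given below; all parameter values are illustrative, and the distributed synaptic and dendritic delays are omitted.

```python
# Two pulse-coupled leaky integrate-and-fire neurons with a delayed kick.
import numpy as np

T, dt = 200.0, 0.01                 # total time (ms), time step (ms)
n = int(T / dt)
tau, I, v_th, v_reset = 10.0, 1.2, 1.0, 0.0   # membrane constants, drive
g, delay = 0.1, 2.0                 # coupling strength, axonal delay (ms)
d_steps = int(delay / dt)

v = np.zeros((2, n))
v[1, 0] = 0.5                       # desynchronized initial condition
spikes = [[], []]                   # spike times, stored as step indices
for t in range(1, n):
    for i in (0, 1):
        j = 1 - i
        # Excitatory pulse if the partner fired exactly `delay` ago.
        kick = g if (t - d_steps) in spikes[j] else 0.0
        v[i, t] = v[i, t - 1] + dt * (-v[i, t - 1] + I) / tau + kick
        if v[i, t] >= v_th:         # threshold crossing: fire and reset
            v[i, t] = v_reset
            spikes[i].append(t)

# The relative firing phase indicates synchronous vs. antiphase rhythms.
print([round(s * dt, 2) for s in spikes[0][-3:]],
      [round(s * dt, 2) for s in spikes[1][-3:]])
```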

  2. Optical correlation

    NASA Astrophysics Data System (ADS)

    Cotariu, Steven S.

    1991-12-01

    Pattern recognition may supplement or replace certain navigational aids on spacecraft in docking or landing activities. The need to correctly identify terrain features remains critical in preparation of autonomous planetary landing. One technique that may solve this problem is optical correlation. Correlation has been successfully demonstrated under ideal conditions; however, noise significantly affects the ability of the correlator to accurately identify input signals. Optical correlation in the presence of noise must be successfully demonstrated before this technology can be incorporated into system design. An optical correlator is designed and constructed using a modified 2f configuration. Liquid crystal televisions (LCTV) are used as the spatial light modulators (SLM) for both the input and filter devices. The filter LCTV is characterized and an operating curve is developed. Determination of this operating curve is critical for reduction of input noise. Correlation of live input with a programmable filter is demonstrated.

  3. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English.

    PubMed

    Banzina, Elina; Dilley, Laura C; Hewitt, Lynne E

    2016-08-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to a transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels revealed less interference. The results have implications for understanding the role of SS and UU syllables for word recognition and English pronunciation instruction.

  4. Apparatus and method for the spectrochemical analysis of liquids using the laser spark

    DOEpatents

    Cremers, David A.; Radziemski, Leon J.; Loree, Thomas R.

    1990-01-01

    A method and apparatus for the qualitative and quantitative spectroscopic investigation of elements present in a liquid sample using the laser spark. A series of temporally closely spaced spark pairs is induced in the liquid sample utilizing pulsed electromagnetic radiation from a pair of lasers. The light pulses are not significantly absorbed by the sample so that the sparks occur inside of the liquid. The emitted light from the breakdown events is spectrally and temporally resolved, and the time period between the two laser pulses in each spark pair is adjusted to maximize the signal-to-noise ratio of the emitted signals. In comparison with the single pulse technique, a substantial reduction in the limits of detectability for many elements has been demonstrated. Narrowing of spectral features results in improved discrimination against interfering species.

  5. Apparatus and method for the spectrochemical analysis of liquids using the laser spark

    DOEpatents

    Cremers, D.A.; Radziemski, L.J.; Loree, T.R.

    1984-05-01

    A method and apparatus are disclosed for the qualitative and quantitative spectroscopic investigation of elements present in a liquid sample using the laser spark. A series of temporally closely spaced spark pairs is induced in the liquid sample utilizing pulsed electromagnetic radiation from a pair of lasers. The light pulses are not significantly absorbed by the sample so that the sparks occur inside of the liquid. The emitted light from the breakdown events is spectrally and temporally resolved, and the time period between the two laser pulses in each spark pair is adjusted to maximize the signal-to-noise ratio of the emitted signals. In comparison with the single pulse technique, a substantial reduction in the limits of detectability for many elements has been demonstrated. Narrowing of spectral features results in improved discrimination against interfering species.

  6. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is utilized to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold functions, the improved function, which is first-order differentiable and has a smooth transition region between noise and signal, preserves more edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm effectively smooths the spectral curve. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
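
    A simplified sketch of the two-domain idea follows, assuming the PyWavelets and SciPy packages: per-band 2-D wavelet shrinkage with a BayesShrink-style threshold (plain soft thresholding stands in for the paper's improved threshold function), followed by cubic Savitzky-Golay smoothing along the spectral axis. The wavelet, level and window width are illustrative.

```python
# Spatial wavelet shrinkage per band, then spectral Savitzky-Golay filtering.
import numpy as np
import pywt
from scipy.signal import savgol_filter

def denoise_band(band, wavelet="sym4", level=2):
    coeffs = pywt.wavedec2(band, wavelet, level=level)
    # Noise estimate from the finest diagonal sub-band (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        shrunk = []
        for d in detail:
            sx = np.sqrt(max(d.var() - sigma**2, 1e-12))
            thr = sigma**2 / sx                    # BayesShrink threshold
            shrunk.append(pywt.threshold(d, thr, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)

def denoise_cube(cube):
    # cube: (bands, rows, cols) hyperspectral array (hypothetical input)
    spatial = np.stack([denoise_band(b) for b in cube])
    # Cubic Savitzky-Golay filter along the spectral dimension.
    return savgol_filter(spatial, window_length=9, polyorder=3, axis=0)
```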

  7. Anterior transarticular C1-C2 fixation with contralateral screw insertion: a report of two cases and technical note.

    PubMed

    Lvov, Ivan; Grin, Andrey; Kaykov, Aleksandr; Smirnov, Vladimir; Krylov, Vladimir

    2017-08-08

    Anterior transarticular fixation of the C1-C2 vertebrae is a well-known technique that involves screw insertion through the body of the C2 vertebra into the lateral masses of the atlas through an anterior transcervical approach. Meanwhile, contralateral screw insertion has been previously described only in anatomical studies. We describe two case reports of the clinical application of this new technique. In Case 1, the patient was diagnosed with an unstable C1 fracture. The clinical features of the case did not allow for any type of posterior atlantoaxial fusion, Halo immobilization, or routine anterior fixation using the Reindl and Koller techniques. The possible manner of screw insertion into the anterior third of the right lateral mass was via a contralateral trajectory, which was performed in this case. Case 2 involved a patient with neglected posteriorly dislocated dens fracture who could not lie in the prone position due to concomitant cardiac pathology. Reduction of atlantoaxial dislocation was insufficient, even after scar tissue resection at the fracture, while transdental fusion was not possible. Considering the success of the previous case, atlantoaxial fixation was performed through the small approach, using the Reindl technique and contralateral screw insertion. These two cases demonstrate the potential of anterior transarticular fixation of C1-C2 vertebrae in cases where posterior atlantoaxial fusion is not achievable. This type of fixation can be performed through a single approach if one screw is inserted using the Reindl technique and another is inserted via a contralateral trajectory.

  8. Resonant inelastic X-ray scattering on synthetic nickel compounds and Ni-Fe hydrogenase protein

    NASA Astrophysics Data System (ADS)

    Sanganas, Oliver; Löscher, Simone; Pfirrmann, Stefan; Marinos, Nicolas; Glatzel, Pieter; Weng, Tsu-Chien; Limberg, Christian; Driess, Matthias; Dau, Holger; Haumann, Michael

    2009-11-01

    Ni-Fe hydrogenases are proteins catalyzing the oxidative cleavage of dihydrogen (H2) and proton reduction to H2 at high turnover rates. Their active site is a heterobimetallic center comprising one Ni and one Fe atom. To understand the function of the site, well-resolved structural and electronic information is required. Such information is expected to become accessible through high-resolution X-ray absorption and emission techniques, which are rapidly developing at third-generation synchrotron radiation sources. We studied a number of synthetic Ni compounds, which mimic relevant features of the Ni site in hydrogenases, and the Ni site in the soluble, NAD-reducing hydrogenase (SH) from the bacterium Ralstonia eutropha by resonant inelastic X-ray scattering (RIXS) using a Rowland-type spectrometer at the ESRF. The SH is particularly interesting because its H2-cleavage reaction is highly resistant to inhibition by O2. Kα-fluorescence-detected RIXS planes in the 1s→3d region of the X-ray absorption spectrum were recorded on the protein, which allow L3-edge-type spectra to be extracted. Spectral features of the protein are compared to those of the model compounds.

  9. Cross-Stream PIV Measurements of Jets With Internal Lobed Mixers

    NASA Technical Reports Server (NTRS)

    Bridges, James; Wernet, Mark P.

    2004-01-01

    With emphasis being placed on enhanced mixing of jet plumes for noise reduction and on predictions of jet noise based upon turbulent kinetic energy, unsteady measurements of jet plumes are a very important part of jet noise studies. Given that hot flows are of most practical interest, optical techniques such as Particle Image Velocimetry (PIV) are applicable. When the flow has strong azimuthal features, such as those generated by chevrons or lobed mixers, traditional PIV, which aligns the measurement plane parallel to the dominant flow direction, is very inefficient, requiring many planes of data to be acquired and stacked up to produce the desired flow cross-sections. This paper presents PIV data acquired in a plane normal to the jet axis, directly measuring the cross-stream gradients and features of an internally mixed nozzle operating at aircraft engine flow conditions. These nozzle systems included variations in lobed mixer penetration, lobe count, lobe scalloping, and nozzle length. Several cases validating the accuracy of the PIV data are examined, along with examples of its use in answering questions about the jet noise generation processes in these nozzles. Of most interest is the relationship of low-frequency aft-directed noise to turbulence kinetic energy and mean velocity.

  10. Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Tang, Xiaoying

    2017-03-01

    Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as for Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with a whole-brain structural segmentation approach, which greatly reduces the dimension of the features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole-brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extracted their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnostic accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded good classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and an area of 0.9891 under the receiver operating characteristic curve.
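
    The classifier described reduces to a very short scikit-learn pipeline; the sketch below uses the three principal components mentioned in the text, with hypothetical input arrays.

```python
# Regional brain volumes -> PCA (3 components) -> linear SVM, leave-one-out.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# X: (n_subjects, 54) structural volumes, y: 1 = AD, 0 = control (hypothetical)
clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="linear"))
# acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
```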

  11. Robust and reliable banknote authentification and print flaw detection with opto-acoustical sensor fusion methods

    NASA Astrophysics Data System (ADS)

    Lohweg, Volker; Schaede, Johannes; Türke, Thomas

    2006-02-01

    The authenticity checking and inspection of bank notes is a highly labour-intensive process in which traditionally every note on every sheet is inspected manually. However, with the advent of more and more sophisticated security features, both visible and invisible, and the requirement for cost reduction in the printing process, it is clear that automation is required. As more print techniques and new security features are established, total quality in security, authenticity and bank note printing must be assured, and this necessitates a broader sensorial concept in general. We propose a concept for both authenticity checking and inspection methods for pattern recognition and classification of securities and banknotes, based on sensor fusion and fuzzy interpretation of data measures. The approach combines different methods of authenticity analysis and print flaw detection, and can be used in vending or sorting machines as well as in printing machines. Usually only the existence or appearance of colours and their textures is checked by cameras. Our method combines visible camera images with IR-spectral sensitive sensors and with acoustical and other measurements, such as the temperature and pressure of printing machines.

  12. Glaucoma risk index: automated glaucoma detection from color fundus images.

    PubMed

    Bock, Rüdiger; Meier, Jörg; Nyúl, László G; Hornegger, Joachim; Michelson, Georg

    2010-06-01

    Glaucoma, a neurodegeneration of the optic nerve, is one of the most common causes of blindness. Because revitalization of the degenerated nerve fibers of the optic nerve is impossible, early detection of the disease is essential. This can be supported by robust and automated mass screening. We propose a novel automated glaucoma detection system that operates on inexpensive-to-acquire and widely used digital color fundus images. After glaucoma-specific preprocessing, different generic feature types are compressed by an appearance-based dimension reduction technique. Subsequently, a probabilistic two-stage classification scheme combines these feature types to extract the novel Glaucoma Risk Index (GRI), which shows reasonable glaucoma detection performance. On a sample set of 575 fundus images, a classification accuracy of 80% has been achieved in a 5-fold cross-validation setup. The GRI attains a competitive area under the ROC curve (AUC) of 88%, compared to the established topography-based glaucoma probability score of scanning laser tomography with an AUC of 87%. The proposed color-fundus-image-based GRI achieves a competitive and reliable detection performance on a low-priced modality by the statistical analysis of entire images of the optic nerve head. Copyright (c) 2010 Elsevier B.V. All rights reserved.
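
    A schematic sketch of a two-stage scheme of this kind follows, assuming scikit-learn and with feature extraction done elsewhere: per-feature-type PCA compression, then a probabilistic classifier whose positive-class probability serves as the scalar risk index.

```python
# Per-feature-type PCA compression + probabilistic second-stage classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def fit_risk_index(feature_blocks, y, n_comp=20):
    # feature_blocks: list of (n_images, n_i) arrays, one per feature type
    pcas = [PCA(n_components=n_comp).fit(B) for B in feature_blocks]
    Z = np.hstack([p.transform(B) for p, B in zip(pcas, feature_blocks)])
    clf = LogisticRegression(max_iter=1000).fit(Z, y)
    return pcas, clf

def risk_index(pcas, clf, feature_blocks):
    z = np.hstack([p.transform(B) for p, B in zip(pcas, feature_blocks)])
    return clf.predict_proba(z)[:, 1]    # scalar risk per image, in [0, 1]
```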

  13. Predictive models reduce talent development costs in female gymnastics.

    PubMed

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and at the same time reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and a multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron classified as many as 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based on the different statistical analyses results in a 33.3% cost reduction, because the pool of selected athletes can be reduced to 92 gymnasts instead of the 138 selected by the coaches. This cost reduction allows the limited resources to be invested fully in the high-potential athletes.
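
    The linear-versus-non-linear comparison can be reproduced schematically with scikit-learn; the sketch below uses synthetic stand-in data (no gymnast data are included), and an MLP stands in for the exact perceptron and Kohonen-map implementations used in the study.

```python
# Compare a linear and a non-linear dropout predictor by cross-validation.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 243 gymnasts' test-battery scores and outcomes.
X, y = make_classification(n_samples=243, n_features=12, n_informative=6,
                           random_state=0)

for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("MLP", MLPClassifier(hidden_layer_sizes=(20,),
                                          max_iter=2000, random_state=0))]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.1%} correctly classified")
```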

  14. A New Dusts Sensor for Cultural Heritage Applications Based on Image Processing

    PubMed Central

    Proietti, Andrea; Leccese, Fabio; Caciotta, Maurizio; Morresi, Fabio; Santamaria, Ulderico; Malomo, Carmela

    2014-01-01

    In this paper, we propose a new sensor for the detection and analysis of dusts (powders and fibers) in indoor environments, especially designed for applications in the field of Cultural Heritage or in other contexts where the presence of dust requires special care (surgery, clean rooms, etc.). The presented system relies on image processing techniques (enhancement, noise reduction, segmentation, metrics analysis) and yields both qualitative and quantitative information on the accumulation of dust. This information identifies the geometric and topological features of the elements of the deposit, which curators can use to design suitable prevention and maintenance actions for objects and environments. The sensor consists of simple and relatively cheap tools, based on a high-resolution image acquisition system, preprocessing software to improve the captured image, and an analysis algorithm for feature extraction and classification of the elements of the dust deposit. We carried out tests in order to validate the system operation. These tests were performed within the Sistine Chapel in the Vatican Museums, showing the good performance of the proposed sensor in terms of execution time and classification accuracy. PMID:24901977
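
    A small sketch of the kind of image processing chain described, assuming scikit-image and illustrative parameters: enhance and denoise, segment, then measure geometric descriptors of each deposit element.

```python
# Enhance, segment and measure dust deposit elements in a grayscale image.
import numpy as np
from skimage import filters, measure, morphology

def analyse_dust(gray):
    # gray: 2-D float image of the sampled surface (hypothetical input)
    smooth = filters.gaussian(gray, sigma=1.0)          # noise reduction
    mask = smooth > filters.threshold_otsu(smooth)      # segmentation
    mask = morphology.remove_small_objects(mask, min_size=5)
    labels = measure.label(mask)
    props = measure.regionprops(labels)
    # Geometric descriptors: area and eccentricity roughly separate
    # circular powder grains from elongated fibers.
    return [(p.area, p.eccentricity) for p in props]
```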

  15. An efficient data mining framework for the characterization of symptomatic and asymptomatic carotid plaque using bidimensional empirical mode decomposition technique.

    PubMed

    Molinari, Filippo; Raghavendra, U; Gudigar, Anjan; Meiburger, Kristen M; Rajendra Acharya, U

    2018-02-23

    Atherosclerosis is a type of cardiovascular disease which may cause stroke. It is due to the deposition of fatty plaque in the artery walls, which gradually reduces their elasticity and hence restricts the blood flow to the heart. Hence, early prediction of carotid plaque deposition is important, as it can save lives. This paper proposes a novel data mining framework for the assessment of atherosclerosis in its early stage using ultrasound images. In this work, we use 1353 symptomatic and 420 asymptomatic carotid plaque ultrasound images. Our proposed method classifies the symptomatic and asymptomatic carotid plaques using bidimensional empirical mode decomposition (BEMD) and entropy features. The unbalanced data samples are compensated using adaptive synthetic sampling (ADASYN), and the developed method yielded a promising accuracy of 91.43%, sensitivity of 97.26%, and specificity of 83.22% using fourteen features. Hence, the proposed method can be used as an assisting tool during the regular screening of carotid arteries in hospitals. Graphical abstract: outline of our efficient data mining framework for the characterization of symptomatic and asymptomatic carotid plaques.
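
    The rebalancing step maps directly onto the imbalanced-learn ADASYN implementation. A minimal sketch follows, with the entropy-feature matrix and labels as hypothetical inputs; keeping ADASYN inside the pipeline confines oversampling to the training folds.

```python
# ADASYN oversampling of the minority class inside a cross-validated pipeline.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import ADASYN
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ("adasyn", ADASYN(random_state=0)),  # synthesize minority-class samples
    ("svm", SVC(kernel="rbf")),
])
# X: 14 BEMD/entropy features per plaque image; y: symptomatic (1) vs
# asymptomatic (0), 1353 vs 420 samples in the study (hypothetical arrays)
# print(cross_val_score(pipe, X, y, cv=10).mean())
```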

  16. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    DOE PAGES

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; ...

    2016-06-15

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during experiment is crucial (for instance time-resolved non-reversible experiments).

  17. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    PubMed Central

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during experiment is crucial (for instance time-resolved non-reversible experiments). PMID:27302087
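
    A rough sketch of the plotting idea in the two records above, under the assumption that "filtered intensity" means the residual after suppressing sharp Laue spots with a median filter (the records do not specify the exact filter); the detector stack and scan grid are synthetic:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import median_filter

# Hypothetical stack of Laue detector images from an ny-by-nx raster scan.
ny, nx, h, w = 10, 12, 64, 64
patterns = np.random.default_rng(0).poisson(5.0, size=(ny * nx, h, w)).astype(float)

# Map 1: average recorded intensity of each pattern.
avg_intensity = patterns.mean(axis=(1, 2))
# Map 2: average residual after median filtering; the filter removes sharp spots,
# so the residual emphasizes a different set of microstructural features.
avg_filtered = np.array([np.abs(p - median_filter(p, size=5)).mean() for p in patterns])

fig, axes = plt.subplots(1, 2)
for ax, data, title in zip(axes, (avg_intensity, avg_filtered),
                           ("average recorded intensity", "average filtered intensity")):
    ax.imshow(data.reshape(ny, nx))   # one pixel per scan position
    ax.set_title(title)
plt.show()
```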

  18. Comparing the performance of two CBIRS indexing schemes

    NASA Astrophysics Data System (ADS)

    Mueller, Wolfgang; Robbert, Guenter; Henrich, Andreas

    2003-01-01

    Content based image retrieval (CBIR) as it is known today has to deal with a number of challenges. Quickly summarized, the main challenges are, first, to bridge the semantic gap between high-level concepts and low-level features using feedback, and second, to provide performance under adverse conditions. High-dimensional spaces, as well as a demanding machine learning task, make the right way of indexing an important issue. When indexing multimedia data, most groups opt for extraction of high-dimensional feature vectors from the data, followed by dimensionality reduction such as PCA (Principal Components Analysis) or LSI (Latent Semantic Indexing). The resulting vectors are indexed using spatial indexing structures such as kd-trees or R-trees. Other projects, such as MARS and Viper, propose the adaptation of text indexing techniques, notably the inverted file. Here, the Viper system is the most direct adaptation of text retrieval techniques to quantized vectors. However, while the Viper query engine provides decent performance together with impressive user-feedback behavior, the possibility of easy integration of long-term learning algorithms, and support for potentially infinite feature vectors, there has been no comparison of vector-based methods and inverted-file-based methods under similar conditions. In this publication, we compare a CBIR query engine that uses inverted files (Bothrops, a rewrite of the Viper query engine based on a relational database) and a CBIR query engine based on LSD (Local Split Decision) trees for spatial indexing, using the same feature sets. The Benchathlon initiative works on providing a set of images and ground truth for simulating image queries by example and corresponding user feedback. When performing the Benchathlon benchmark on a CBIR system (the System Under Test, SUT), a benchmarking harness connects over the internet to the SUT, performing a number of queries using an agreed-upon protocol, the multimedia retrieval markup language (MRML). Using this benchmark one can measure the quality of retrieval, as well as the overall (speed) performance of the benchmarked system. Our benchmarks will draw on the Benchathlon's work for documenting the retrieval performance of both inverted-file-based and LSD-tree-based techniques. In addition to these results, we will present statistics that can be obtained only inside the system under test, including the number of complex mathematical operations as well as the amount of data that has to be read from disk during operation of a query.
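
    To make the contrast concrete, here is a toy, illustrative inverted file over quantized feature vectors — not the Viper or Bothrops implementation. Each image contributes weighted postings, and a query only touches the postings lists of its own nonzero features, which is the efficiency argument for this index family:

```python
from collections import defaultdict

# feature_id -> postings list of (image_id, weight) pairs
index = defaultdict(list)

def add_image(image_id, features):
    """Register an image given as a sparse {feature_id: weight} dict."""
    for fid, w in features.items():
        index[fid].append((image_id, w))

def query(features, top_k=5):
    """Score images by accumulating dot-product similarity over shared features."""
    scores = defaultdict(float)
    for fid, qw in features.items():          # only the query's own postings lists
        for image_id, w in index[fid]:
            scores[image_id] += qw * w
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

add_image("img1", {3: 0.5, 17: 1.0})
add_image("img2", {3: 0.2, 42: 0.8})
print(query({3: 1.0, 42: 1.0}))               # img2 shares two features with the query
```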

  19. Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies.

    PubMed

    Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed

    2018-02-06

    Prostate cancer is the second leading cause of cancer deaths among men. Early detection can effectively reduce the mortality rate caused by prostate cancer. The high resolution and multiresolution nature of prostate MRIs requires proper diagnostic systems and tools. In the past, researchers developed computer aided diagnosis (CAD) systems that help the radiologist detect abnormalities. In this research paper, we have employed machine learning techniques, namely a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF), and Gaussian), and decision trees, for detecting prostate cancer. Moreover, different feature extraction strategies are proposed to improve the detection performance. The feature extraction strategies are based on texture, morphological, scale invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. The performance was evaluated based on single features as well as combinations of features using machine learning classification techniques. Cross validation (jack-knife k-fold) was performed, and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and false positive rate (FPR). Based on single feature extraction strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999. Using combinations of feature extraction strategies, the SVM Gaussian kernel with texture + morphological and EFDs + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.
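
    A minimal scikit-learn sketch of the kernel comparison on placeholder features (in scikit-learn the "rbf" kernel is the Gaussian kernel, so the paper's RBF/Gaussian distinction collapses to a single option here):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder matrix standing in for texture/morphological/SIFT/EFD features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)

for kernel in ("poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))  # scale before SVM
    acc = cross_val_score(clf, X, y, cv=10).mean()             # 10-fold CV accuracy
    print(kernel, round(acc, 3))
```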

  20. Online dimensionality reduction using competitive learning and Radial Basis Function network.

    PubMed

    Tomenko, Vladimir

    2011-06-01

    A general-purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing of nonlinearly embedded manifolds, and handling large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR is comprised of two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space, and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of a specific loss function and synthesis of the training algorithm for the Radial Basis Function network result in global preservation of data structures and online processing of new patterns. RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), a neural network for Sammon's projection (SAMANN), and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing a wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis

    PubMed Central

    2013-01-01

    Background Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. Results We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of the results on initial random weights and improves the prediction performance. Conclusions When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. Besides, PCA-EELM runs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs, with excellent performance and less run time. PMID:23815620
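
    An extreme learning machine is simple enough to sketch from scratch: a fixed random hidden layer followed by a closed-form least-squares readout. Below is a hedged numpy outline of the PCA-plus-majority-vote idea, with random data standing in for the encoded sequence features (the encoding schemes and DIP data themselves are not reproduced):

```python
import numpy as np
from sklearn.decomposition import PCA

def train_elm(X, y, n_hidden, rng):
    """One extreme learning machine: random hidden layer + least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    H = np.tanh(X @ W)                       # fixed random nonlinear projection
    beta = np.linalg.pinv(H) @ y             # closed-form output weights
    return W, beta

def predict_elm(X, W, beta):
    return np.tanh(X @ W) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))              # stand-in sequence feature vectors
y = rng.integers(0, 2, size=500).astype(float)

X_red = PCA(n_components=20).fit_transform(X)          # dimension reduction step
models = [train_elm(X_red, y, n_hidden=64, rng=rng) for _ in range(9)]
votes = np.mean([predict_elm(X_red, W, b) > 0.5 for W, b in models], axis=0)
y_pred = (votes > 0.5).astype(int)                     # majority vote of the ensemble
print("training accuracy:", (y_pred == y).mean())
```

    Averaging the votes of several ELMs is what removes the dependence on any single random hidden layer, which is the point the abstract makes about ensembling.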

  2. Type II Supernova Spectral Diversity. I. Observations, Sample Characterization, and Spectral Line Evolution

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Claudia P.; Anderson, Joseph P.; Hamuy, Mario; Morrell, Nidia; González-Gaitan, Santiago; Stritzinger, Maximilian D.; Phillips, Mark M.; Galbany, Lluis; Folatelli, Gastón; Dessart, Luc; Contreras, Carlos; Della Valle, Massimo; Freedman, Wendy L.; Hsiao, Eric Y.; Krisciunas, Kevin; Madore, Barry F.; Maza, José; Suntzeff, Nicholas B.; Prieto, Jose Luis; González, Luis; Cappellaro, Enrico; Navarrete, Mauricio; Pizzella, Alessandro; Ruiz, Maria T.; Smith, R. Chris; Turatto, Massimo

    2017-11-01

    We present 888 visual-wavelength spectra of 122 nearby type II supernovae (SNe II) obtained between 1986 and 2009, and ranging between 3 and 363 days post-explosion. In this first paper, we outline our observations and data reduction techniques, together with a characterization based on the spectral diversity of SNe II. A statistical analysis of the spectral matching technique is discussed as an alternative to nondetection constraints for estimating SN explosion epochs. The time evolution of spectral lines is presented and analyzed in terms of how this differs for SNe of different photometric, spectral, and environmental properties: velocities, pseudo-equivalent widths, decline rates, magnitudes, time durations, and environment metallicity. Our sample displays a large range in ejecta expansion velocities, from ~9600 to ~1500 km s⁻¹ at 50 days post-explosion, with a median Hα value of 7300 km s⁻¹. This is most likely explained through differing explosion energies. Significant diversity is also observed in the absolute strength of spectral lines, characterized through their pseudo-equivalent widths. This implies significant diversity in both temperature evolution (linked to progenitor radius) and progenitor metallicity between different SNe II. Around 60% of our sample shows an extra absorption component on the blue side of the Hα P-Cygni profile ("Cachito" feature) between 7 and 120 days since explosion. Studying the nature of Cachito, we conclude that these features at early times (before ~35 days) are associated with Si II λ6355, while past the middle of the plateau phase they are related to high velocity (HV) features of hydrogen lines. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile; and the Gemini Observatory, Cerro Pachon, Chile (Gemini Program GS-2008B-Q-56). Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (ESO Programs 076.A-0156, 078.D-0048, 080.A-0516, and 082.A-0526).

  3. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally define the visual content present in an image, characterized by, e.g., texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
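
    A minimal sketch of LBP-based retrieval with scikit-image, using a uniform-LBP histogram as the global feature vector and plain Euclidean distance for ranking; the paper's chosen LBP variant and its clustering-based index optimization are not reproduced here:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP histogram as a global feature vector for one grayscale image."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Build a toy database and retrieve the images nearest to a query.
rng = np.random.default_rng(0)
database = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(100)]
features = np.array([lbp_histogram(img) for img in database])

q = lbp_histogram(database[0])                  # query with the first image
dist = np.linalg.norm(features - q, axis=1)     # Euclidean distance in feature space
print("closest images:", np.argsort(dist)[:5])
```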

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.; Honorio, J.

    We propose a simple, well grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, a high noise level, high subject variability, imperfect registration, and subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block design data sets that capture brain function under distinct monetary rewards for cocaine addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
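
    The classification side of this design is easy to illustrate: one thresholded decision per feature, combined by majority vote. The sketch below uses generic one-feature decision stumps on synthetic data; the record's threshold-split-region feature selection itself is not reproduced:

```python
import numpy as np

def fit_stump(x, y):
    """Choose the cut and polarity on one feature that maximize training accuracy."""
    best_acc, best_t, best_pol = 0.0, 0.0, 1
    for t in np.unique(x):
        for pol in (1, 0):                 # pol=1: predict 1 above t; pol=0: below
            pred = (x > t).astype(int) if pol else (x <= t).astype(int)
            acc = (pred == y).mean()
            if acc > best_acc:
                best_acc, best_t, best_pol = acc, t, pol
    return best_t, best_pol

def predict_stump(x, t, pol):
    return (x > t).astype(int) if pol else (x <= t).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))               # 20 subjects, one feature per condition
y = rng.integers(0, 2, size=20)

stumps = [fit_stump(X[:, j], y) for j in range(X.shape[1])]
votes = np.mean([predict_stump(X[:, j], *s) for j, s in enumerate(stumps)], axis=0)
print("majority-vote predictions:", (votes > 0.5).astype(int))
```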

  5. Quantitative analysis and feature recognition in 3-D microstructural data sets

    NASA Astrophysics Data System (ADS)

    Lewis, A. C.; Suh, C.; Stukowski, M.; Geltmacher, A. B.; Spanos, G.; Rajan, K.

    2006-12-01

    A three-dimensional (3-D) reconstruction of an austenitic stainless-steel microstructure was used as input for an image-based finite-element model to simulate the anisotropic elastic mechanical response of the microstructure. The quantitative data-mining and data-warehousing techniques used to correlate regions of high stress with critical microstructural features are discussed. Initial analysis of elastic stresses near grain boundaries due to mechanical loading revealed low overall correlation with their location in the microstructure. However, the use of data-mining and feature-tracking techniques to identify high-stress outliers revealed that many of these high-stress points are generated near grain boundaries and grain edges (triple junctions). These techniques also allowed for the differentiation between high stresses due to boundary conditions of the finite volume reconstructed, and those due to 3-D microstructural features.

  6. The GenTechnique Project: Developing an Open Environment for Learning Molecular Genetics.

    ERIC Educational Resources Information Center

    Calza, R. E.; Meade, J. T.

    1998-01-01

    The GenTechnique project at Washington State University uses a networked learning environment for molecular genetics learning. The project is developing courseware featuring animation, hyper-link controls, and interactive self-assessment exercises focusing on fundamental concepts. The first pilot course featured a Web-based module on DNA…

  7. Chromium:forsterite laser frequency comb stabilization and development of portable frequency references inside a hollow optical fiber

    NASA Astrophysics Data System (ADS)

    Thapa, Rajesh

    We have made significant progress in the development of a portable frequency standard inside hollow optical fibers. Such standards will improve the portable optical frequency references available to the telecommunications industry. Our approach relies on the development of a stabilized Cr:forsterite laser to generate a frequency comb in the near-IR region. This laser is self-referenced and locked to a CW laser which in turn is stabilized to a sub-Doppler feature of a molecular transition. The molecular transition is realized using a hollow core fiber filled with acetylene gas. We finally measured the absolute frequency of these molecular transitions to characterize the references. In this thesis, the major ideas, techniques and experimental results for the development and absolute frequency measurement of the portable frequency references are presented. A prism-based Cr:forsterite frequency comb is stabilized. We have effectively used prism modulation along with power modulation inside the cavity in order to actively stabilize the frequency comb. We have also studied the carrier-envelope-offset frequency (f0) dynamics of the laser and its effect on laser stabilization. A reduction of the f0 linewidth from ~2 MHz to ~20 kHz has also been observed. Both our in-loop and out-of-loop measurements of the comb stability showed that the comb is stable to within a part in 10¹¹ at 1-s gate time and is currently limited by our reference signal. In order to develop this portable frequency standard, saturated absorption spectroscopy was performed on the acetylene ν1 + ν3 band near 1532 nm inside different kinds of hollow optical fibers. The observed linewidths are a factor of 2 narrower in the 20 μm fiber as compared to the 10 μm fiber, and vary from 20-40 MHz depending on pressure and power. The 70 μm kagome fiber shows a further reduction in linewidth to less than 10 MHz. In order to seal the gas inside the hollow optical fiber, we have also developed a technique for splicing the hollow fiber to a solid fiber in a standard commercial arc splicer, rather than the more expensive filament splicer, and achieved comparable splice loss. We locked a CW laser to the saturated absorption feature using a frequency modulation technique and then compared it to an optical frequency comb. The stabilized frequency comb, providing a dense grid of reference frequencies in the near-infrared region, is used to characterize and measure the absolute frequency reference based on these hollow optical fibers.

  8. Assessing various Infrared (IR) microscopic imaging techniques for post-mortem interval evaluation of human skeletal remains.

    PubMed

    Woess, Claudia; Unterberger, Seraphin Hubert; Roider, Clemens; Ritsch-Marte, Monika; Pemberger, Nadin; Cemper-Kiesslich, Jan; Hatzer-Grubwieser, Petra; Parson, Walther; Pallua, Johannes Dominikus

    2017-01-01

    Due to the influence of many environmental processes, a precise determination of the post-mortem interval (PMI) of skeletal remains is known to be very complicated. Although methods for the investigation of the PMI exist, there still remains much room for improvement. In this study the applicability of infrared (IR) microscopic imaging techniques such as reflection-, ATR- and Raman-microscopic imaging for the estimation of the PMI of human skeletal remains was tested. PMI-specific features were identified and visualized by overlaying IR imaging data with morphological tissue structures obtained using light microscopy to differentiate between forensic and archaeological bone samples. ATR and reflection spectra revealed that a more prominent peak at 1042 cm⁻¹ (an indicator of bone mineralization) was observable in archaeological bone material when compared with forensic samples. Moreover, in the case of the archaeological bone material, a reduction in the levels of phospholipids, proteins, nucleic acid sugars, complex carbohydrates as well as amorphous or fully hydrated sugars was detectable between 3000 cm⁻¹ and 2800 cm⁻¹. Raman spectra illustrated a similar picture, with less ν₂PO₄³⁻ at 450 cm⁻¹ and ν₄PO₄³⁻ shifted from 590 cm⁻¹ to 584 cm⁻¹, amide III at 1272 cm⁻¹, and protein CH₂ deformation at 1446 cm⁻¹ in archaeological bone samples. A semi-quantitative determination of the distributions of various biomolecules by chemi-maps of reflection and ATR methods revealed that there were fewer carbohydrates and complex carbohydrates as well as amorphous or fully hydrated sugars in archaeological samples compared with forensic bone samples. Raman microscopic imaging data showed a reduction in B-type carbonate and protein α-helices after a PMI of 3 years. The calculated mineral content ratio and the organic to mineral ratio showed that the mineral content ratio increases, while the organic to mineral ratio decreases with time. Cluster analyses of data from Raman microscopic imaging reconstructed histo-anatomical features in comparison to the light microscopic image and finally, by application of principal component analysis (PCA), it was possible to see a clear distinction between forensic and archaeological bone samples. Hence, the spectral characterization of inorganic and organic compounds by the aforementioned techniques, followed by analyses such as multivariate imaging analysis (MIA) and principal component analysis (PCA), appears to be suitable for the post-mortem interval (PMI) estimation of human skeletal remains.

  9. Assessing various Infrared (IR) microscopic imaging techniques for post-mortem interval evaluation of human skeletal remains

    PubMed Central

    Roider, Clemens; Ritsch-Marte, Monika; Pemberger, Nadin; Cemper-Kiesslich, Jan; Hatzer-Grubwieser, Petra; Parson, Walther; Pallua, Johannes Dominikus

    2017-01-01

    Due to the influence of many environmental processes, a precise determination of the post-mortem interval (PMI) of skeletal remains is known to be very complicated. Although methods for the investigation of the PMI exist, there still remains much room for improvement. In this study the applicability of infrared (IR) microscopic imaging techniques such as reflection-, ATR- and Raman-microscopic imaging for the estimation of the PMI of human skeletal remains was tested. PMI-specific features were identified and visualized by overlaying IR imaging data with morphological tissue structures obtained using light microscopy to differentiate between forensic and archaeological bone samples. ATR and reflection spectra revealed that a more prominent peak at 1042 cm⁻¹ (an indicator of bone mineralization) was observable in archaeological bone material when compared with forensic samples. Moreover, in the case of the archaeological bone material, a reduction in the levels of phospholipids, proteins, nucleic acid sugars, complex carbohydrates as well as amorphous or fully hydrated sugars was detectable between 3000 cm⁻¹ and 2800 cm⁻¹. Raman spectra illustrated a similar picture, with less ν₂PO₄³⁻ at 450 cm⁻¹ and ν₄PO₄³⁻ shifted from 590 cm⁻¹ to 584 cm⁻¹, amide III at 1272 cm⁻¹, and protein CH₂ deformation at 1446 cm⁻¹ in archaeological bone samples. A semi-quantitative determination of the distributions of various biomolecules by chemi-maps of reflection and ATR methods revealed that there were fewer carbohydrates and complex carbohydrates as well as amorphous or fully hydrated sugars in archaeological samples compared with forensic bone samples. Raman microscopic imaging data showed a reduction in B-type carbonate and protein α-helices after a PMI of 3 years. The calculated mineral content ratio and the organic to mineral ratio showed that the mineral content ratio increases, while the organic to mineral ratio decreases with time. Cluster analyses of data from Raman microscopic imaging reconstructed histo-anatomical features in comparison to the light microscopic image and finally, by application of principal component analysis (PCA), it was possible to see a clear distinction between forensic and archaeological bone samples. Hence, the spectral characterization of inorganic and organic compounds by the aforementioned techniques, followed by analyses such as multivariate imaging analysis (MIA) and principal component analysis (PCA), appears to be suitable for the post-mortem interval (PMI) estimation of human skeletal remains. PMID:28334006

  10. A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.

    PubMed

    Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye

    2017-11-01

    Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy decreased somewhat one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
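
    A hedged scikit-learn sketch of the regression stage: a generic mutual-information selector stands in for the paper's per-subject genetic algorithm, and random placeholders stand in for the 14 ECG/PPG-derived features:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Stand-ins for the 14 beat-to-beat features and systolic BP targets (mmHg).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 14))
sbp = 120 + 10 * X[:, 0] - 5 * X[:, 3] + rng.normal(0, 3, 500)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_regression, k=6),  # generic stand-in for GA selection
    SVR(),                                     # support vector regression BP model
)
print("cross-validated R^2:", cross_val_score(model, X, sbp, cv=5).mean())
```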

  11. Prediction of siRNA potency using sparse logistic regression.

    PubMed

    Hu, Wei; Hu, John

    2014-06-01

    RNA interference (RNAi) can modulate gene expression at post-transcriptional as well as transcriptional levels. Short interfering RNA (siRNA) serves as a trigger for the RNAi gene inhibition mechanism, and is therefore a crucial intermediate step in RNAi. There have been extensive studies to identify the sequence characteristics of potent siRNAs. One such study built a linear model using LASSO (Least Absolute Shrinkage and Selection Operator) to measure the contribution of each siRNA sequence feature. This model is simple and interpretable, but it requires a large number of nonzero weights. We have introduced a novel technique, sparse logistic regression, to build a linear model using single-position specific nucleotide compositions which has the same prediction accuracy as the linear model based on LASSO. The weights in our new model share the same general trend as those in the previous model, but have only 25 nonzero weights out of a total of 84, a 54% reduction compared to the previous model. Contrary to the linear model based on LASSO, our model suggests that only a few positions are influential on the efficacy of the siRNA, namely the 5' and 3' ends and the seed region of siRNA sequences. We also employed sparse logistic regression to build a linear model using dual-position specific nucleotide compositions, a task LASSO is not able to accomplish well due to its high dimensional nature. Our results demonstrate the superiority of sparse logistic regression as a technique for both feature selection and regression over LASSO in the context of siRNA design.
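
    As a rough illustration of how an L1-penalized (sparse) logistic model zeroes out most feature weights, here is a scikit-learn sketch on random stand-ins for the 84 position-specific nucleotide indicators; the actual study used measured siRNA potency data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-ins for single-position nucleotide indicators (e.g., 21 positions x 4 bases)
# with binary potency labels.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 84)).astype(float)
y = rng.integers(0, 2, size=1000)

# The L1 penalty drives most position weights to exactly zero, so the surviving
# nonzero weights indicate which sequence positions influence potency.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("nonzero weights:", np.count_nonzero(clf.coef_), "of", clf.coef_.size)
```

    Shrinking C strengthens the penalty and prunes more weights, which is the knob that trades prediction accuracy against sparsity.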

  12. Performance comparison of phenomenology-based features to generic features for false alarm reduction in UWB SAR imagery

    NASA Astrophysics Data System (ADS)

    Marble, Jay A.; Gorman, John D.

    1999-08-01

    A feature-based approach is taken to reduce the occurrence of false alarms in foliage penetrating, ultra-wideband, synthetic aperture radar data. A set of 'generic' features is defined based on target size, shape, and pixel intensity. A second set of features is defined that contains the generic features combined with features based on scattering phenomenology. Each set is combined using a quadratic polynomial discriminant (QPD), and performance is characterized by generating a receiver operating characteristic (ROC) curve. Results show that the feature set containing phenomenological features improves performance against both broadside and end-on targets; the improvement against end-on targets is especially pronounced.

  13. Arabic OCR: toward a complete system

    NASA Astrophysics Data System (ADS)

    El-Bialy, Ahmed M.; Kandil, Ahmed H.; Hashish, Mohamed; Yamany, Sameh M.

    1999-12-01

    Latin and Chinese OCR systems have been studied extensively in the literature, yet little work has been done on Arabic character recognition, owing to the technical challenges posed by Arabic text. Due to its cursive nature, powerful and stable text segmentation is needed. Also, features capturing the characteristics of the rich Arabic character representation are needed to build an Arabic OCR. In this paper a novel segmentation technique which is font and size independent is introduced. This technique can segment a cursive written text line even if the line suffers from slight skew. The technique is not sensitive to the location of the centerline of the text line and can segment different font sizes and types (for different character sets) occurring on the same line. Feature extraction is considered one of the most important phases of a text reading system. Ideally, the features extracted from a character image should capture the essential characteristics of the character, independent of font type and size. In such an ideal case, the classifier stores a single prototype per character. However, it is practically challenging to find such an ideal set of features. In this paper, a set of features that reflects the topological aspects of Arabic characters is proposed. These proposed features, integrated with a topological matching technique, yield an Arabic text reading system that is semi-omnifont.

  14. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques which perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  15. 3800 Years of Quantitative Precipitation Reconstruction from the Northwest Yucatan Peninsula

    PubMed Central

    Carrillo-Bastos, Alicia; Islebe, Gerald A.; Torrescano-Valle, Nuria

    2013-01-01

    Precipitation over the last 3800 years has been reconstructed using modern pollen calibration and precipitation data. A transfer function was then built via the linear method of partial least squares. By calculating precipitation anomalies, it is estimated that precipitation deficits were greater than surpluses, reaching 21% and <9%, respectively. The period from 50 BC to 800 AD was the driest of the record. The drought related to the abandonment of the Maya Preclassic period featured a 21% reduction in precipitation, while the drought of the Maya collapse (800 to 860 AD) featured a reduction of 18%. The Medieval Climatic Anomaly was a period of positive phases (3.8-7.6%). The Little Ice Age was a period of climatic variability, with reductions in precipitation but without deficits. PMID:24391940
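
    A compact sketch of a pollen-precipitation transfer function with scikit-learn's partial least squares regression, on entirely synthetic calibration data (taxon percentages, site precipitation, and fossil core samples are all hypothetical):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical calibration set: modern pollen assemblages paired with observed
# annual precipitation at the same sites.
rng = np.random.default_rng(0)
pollen_modern = rng.random((50, 12))                  # 50 sites, 12 pollen taxa
precip_modern = 800 + 400 * pollen_modern[:, 0] + rng.normal(0, 50, 50)

pls = PLSRegression(n_components=3).fit(pollen_modern, precip_modern)

# Apply the transfer function down-core: fossil assemblages -> past precipitation,
# then express the reconstruction as anomalies relative to the modern mean.
pollen_fossil = rng.random((100, 12))                 # 100 core samples
precip_reconstructed = pls.predict(pollen_fossil).ravel()
anomalies = 100 * (precip_reconstructed - precip_modern.mean()) / precip_modern.mean()
print("largest deficit: %.1f%%" % anomalies.min())
```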

  16. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing these noises. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as the optimal ones. The dimension reduction of the features and the feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
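
    A minimal sketch of the LDA-reduction-plus-SVM stage on stand-in features (random placeholders for the EEG band-power and camera motion features; with two classes, LDA projects onto a single discriminant axis):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Stand-ins for concatenated EEG frequency-domain features and camera motion
# features, labelled 1 during head movement and 0 otherwise.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(300, 40))
cam = rng.normal(size=(300, 8))
X = np.hstack([eeg, cam])                      # combined feature vector
y = rng.integers(0, 2, size=300)

model = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
print("error rate:", 1 - cross_val_score(model, X, y, cv=10).mean())
```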

  17. Three-dimensional textural features of conventional MRI improve diagnostic classification of childhood brain tumours.

    PubMed

    Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitis, Theodoros N

    2015-09-01

    The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1- and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performance compared to those trained with conventional 2D features. For instance, a neural network classifier showed a 12% improvement in area under the receiver operator characteristic curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, as per McNemar's tests. This study shows that 3D textural features extracted from conventional T1- and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.
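
    A sketch of the validation logic: leave-one-out predictions from classifiers trained on 2D and 3D feature sets, with the paired disagreements tested by McNemar's test via statsmodels. The feature matrices are random placeholders for the study's texture features:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict, LeaveOneOut
from statsmodels.stats.contingency_tables import mcnemar

# Stand-ins for 2-D and 3-D texture-feature matrices over the same 48 patients.
rng = np.random.default_rng(0)
X2d, X3d = rng.normal(size=(48, 20)), rng.normal(size=(48, 20))
y = rng.integers(0, 3, size=48)                      # three tumour types

loo = LeaveOneOut()
pred2d = cross_val_predict(MLPClassifier(max_iter=2000, random_state=0), X2d, y, cv=loo)
pred3d = cross_val_predict(MLPClassifier(max_iter=2000, random_state=0), X3d, y, cv=loo)

# 2x2 paired table of correct/incorrect decisions; McNemar's test uses the
# off-diagonal disagreements between the two classifiers.
table = np.array([
    [np.sum((pred2d == y) & (pred3d == y)), np.sum((pred2d == y) & (pred3d != y))],
    [np.sum((pred2d != y) & (pred3d == y)), np.sum((pred2d != y) & (pred3d != y))],
])
print(mcnemar(table, exact=True))
```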

  18. Meta-Analysis of Free-Response Studies, 1992-2008: Assessing the Noise Reduction Model in Parapsychology

    ERIC Educational Resources Information Center

    Storm, Lance; Tressoldi, Patrizio E.; Di Risio, Lorenzo

    2010-01-01

    We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise…

  19. Inquiry-Based Stress Reduction Meditation Technique for Teacher Burnout: A Qualitative Study

    ERIC Educational Resources Information Center

    Schnaider-Levi, Lia; Mitnik, Inbal; Zafrani, Keren; Goldman, Zehavit; Lev-Ari, Shahar

    2017-01-01

    An inquiry-based intervention has been found to have a positive effect on burnout and mental well-being parameters among teachers. The aim of the current study was to qualitatively evaluate the effect of the inquiry-based stress reduction (IBSR) meditation technique on the participants. Semi-structured interviews were conducted before and after…

  20. Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Jane, Archana P.; Pund, Mukesh A.

    2012-03-01

    The growing need for handwritten Marathi character recognition in Indian offices such as passport and railway offices has made it a vital area of research. Similarly shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods avoid unnecessary patterns in an image, whereas the adaptive smoothing technique forms smooth character shapes. Combining both approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in handwritten character recognition. This approach of combining adaptive smoothing and feature extraction gives better results (approximately 75-100%) and the expected outcomes.
