Sample records for dimensional feature space

  1. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    PubMed

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we propose a Similarity-Dissimilarity plot which can project a high dimensional space to a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes will also be visible on the plot, and these can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight the important characteristics of the proposed plot, and some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
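The plot's two axes can be approximated with a simple construction: for each sample, measure the distance to its nearest neighbour of the same class (similarity axis) and to its nearest neighbour of any other class (dissimilarity axis). The sketch below is a hedged reading of that idea in plain NumPy; the paper's exact definition of the axes may differ, and the Gaussian toy data stand in for real biomedical features.

```python
import numpy as np

def similarity_dissimilarity(X, y):
    """Per-sample distance to the nearest same-class neighbour (similarity)
    and the nearest other-class neighbour (dissimilarity).
    Hypothetical reading of the plot's axes, not the paper's exact recipe."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                 # ignore self-distance
    same = y[:, None] == y[None, :]
    sim = np.where(same, D, np.inf).min(axis=1)
    dis = np.where(~same, D, np.inf).min(axis=1)
    return sim, dis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(3, 1, (30, 10))])
y = np.array([0] * 30 + [1] * 30)
sim, dis = similarity_dissimilarity(X, y)
# Fraction of points closer to their own class than to the other class:
print((sim < dis).mean())
```

Points whose similarity is below their dissimilarity fall on the "separable" side of such a plot, so the printed fraction gives a rough preview of achievable classification accuracy.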

  2. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm that extends LLE by exploiting fault class label information. The fault diagnosis approach first extracts the intrinsic manifold features from high-dimensional feature vectors, which are obtained from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly in the reduced feature space. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach markedly improves the classification performance of fault pattern recognition and outperforms traditional approaches. PMID:26153771
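S-LLE itself is not available in common libraries, but the unsupervised LLE step it extends is in scikit-learn. The sketch below assumes synthetic data lying on a one-dimensional curve padded with noise dimensions, as a stand-in for the vibration-derived feature vectors:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic stand-in for high-dimensional vibration features: a 1-D curve
# embedded in 8 dimensions (3 signal coordinates + 5 near-zero noise columns).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 3 * np.pi, 300))
X = np.column_stack([np.sin(t), np.cos(t), 0.3 * t])
X = np.hstack([X, 0.01 * rng.normal(size=(300, 5))])

# Plain (unsupervised) LLE; the paper's S-LLE additionally uses class labels.
emb = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)
print(emb.shape)
```

A classifier would then be trained on `emb` rather than on the original high-dimensional vectors, which is the "easily and rapidly" step the abstract describes.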

  3. Binary classification of items of interest in a repeatable process

    DOEpatents

    Abell, Jeffrey A; Spicer, John Patrick; Wincek, Michael Anthony; Wang, Hui; Chakraborty, Debejyo

    2015-01-06

    A system includes host and learning machines. Each machine has a processor in electrical communication with at least one sensor. Instructions for predicting a binary quality status of an item of interest during a repeatable process are recorded in memory. The binary quality status includes passing and failing binary classes. The learning machine receives signals from the at least one sensor and identifies candidate features. Features are extracted from the candidate features, each more predictive of the binary quality status. The extracted features are mapped to a dimensional space having a number of dimensions proportional to the number of extracted features. The dimensional space includes most of the passing class and excludes at least 90 percent of the failing class. Received signals are compared to the boundaries of the recorded dimensional space to predict, in real time, the binary quality status of a subsequent item of interest.

  4. Phase space interrogation of the empirical response modes for seismically excited structures

    NASA Astrophysics Data System (ADS)

    Paul, Bibhas; George, Riya C.; Mishra, Sudib K.

    2017-07-01

    Conventional Phase Space Interrogation (PSI) for structural damage assessment relies on exciting the structure with a low dimensional chaotic waveform, significantly limiting its applicability to large structures. The PSI technique is here extended to structures subjected to seismic excitation. The high dimensionality of the phase space for seismic responses is overcome by Empirical Mode Decomposition (EMD), which decomposes the responses into a number of intrinsic low dimensional oscillatory modes, referred to as Intrinsic Mode Functions (IMFs). Along with their low dimensionality, a few IMFs retain sufficient information about the system dynamics to reflect damage-induced changes. The conflicting requirements of low dimensionality and sufficient dynamic information are balanced by an optimal choice of IMF(s), shown to be the third/fourth IMFs. The optimal IMF(s) are employed for reconstruction of the phase space attractor following Takens' embedding theorem. The widely used Changes in Phase Space Topology (CPST) feature is then applied to these phase portraits to derive the damage sensitive feature, referred to as the CPST of the IMFs (CPST-IMF). The legitimacy of CPST-IMF as a damage sensitive feature is established by assessing its variation under a number of damage scenarios benchmarked in the IASC-ASCE building. The damage localization capability, remarkable tolerance to noise contamination and robustness under different seismic excitations of the feature are demonstrated.
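The attractor-reconstruction step the abstract relies on, Takens' time-delay embedding, is easy to sketch. The sine series below is a stand-in for one low-dimensional IMF, and the delay `tau` and embedding dimension are illustrative choices, not values from the paper:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens' time-delay embedding: each row of the returned array is
    (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)                      # stand-in for an IMF, not a seismic record
A = delay_embed(x, dim=3, tau=25)
print(A.shape)                     # (2000 - 2*25, 3)
```

Damage-sensitive features such as CPST are then computed on the reconstructed attractor `A` rather than on the raw time series.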

  5. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human-interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: a Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results indicate that the new methods can match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement for feature selection in CADx problems, DR techniques offer a complementary approach which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing the intricate data structure of the feature space.
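As a rough illustration of the pipeline (nonlinear DR followed by a classifier), the sketch below maps a synthetic 81-dimensional two-class feature space to 2-D with scikit-learn's t-SNE and scores a linear discriminant on the mapped points. The data, the 2-D target dimension (the paper also uses up to 4-D), and the resubstitution scoring are all simplifications; no CADx data or MCMC-BANN is involved.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented 81-D two-class data standing in for computer-extracted lesion features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 81)), rng.normal(2.0, 1, (60, 81))])
y = np.array([0] * 60 + [1] * 60)

# Nonlinear DR to 2-D, then a linear classifier on the mapped coordinates.
Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
acc = LinearDiscriminantAnalysis().fit(Z, y).score(Z, y)  # resubstitution only
print(Z.shape, round(acc, 2))
```

A faithful evaluation would use held-out data and bootstrap intervals as in the study; this sketch only shows the shape of the DR-then-classify pipeline.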

  6. A real negative selection algorithm with evolutionary preference for anomaly detection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Chen, Wen; Li, Tao

    2017-04-01

    Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm termination threshold and generate detectors randomly. With increasing dimensions, the data samples may reside in a low-dimensional subspace, so that traditional detectors cannot effectively distinguish these samples. Furthermore, in a high-dimensional feature space, c0 cannot exactly reflect the detector set's coverage of the nonself space, and the algorithm may terminate unexpectedly when the number of detectors is insufficient. These shortcomings make traditional RNSAs perform poorly in high-dimensional feature spaces. Based on the "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", "low-dimensional target subspace" and "known nonself feature" as evolutionary preferences to guide the generation of detectors, thus ensuring that the detectors cover the nonself space more effectively. In addition, RNSAP uses redundancy to replace c0 as the termination threshold, so that it can generate adequate detectors under a proper convergence rate. Theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP achieves a higher detection rate with fewer detectors and less computing cost.
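A minimal real-valued negative selection loop, not the paper's RNSAP, can illustrate the detector-generation idea: random hypersphere detectors are kept only if they cannot cover any self (normal) sample, and a point is flagged anomalous when some detector covers it. All radii and counts below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
self_samples = rng.uniform(0.4, 0.6, (200, 2))   # "self" (normal) region
R_DET = 0.1                                       # detector radius (arbitrary)
MARGIN = 0.05                                     # tolerance around self samples

detectors = []
while len(detectors) < 200:
    c = rng.uniform(0, 1, 2)                      # random candidate centre
    # Negative selection: discard any candidate whose sphere could cover self.
    if np.linalg.norm(self_samples - c, axis=1).min() > R_DET + MARGIN:
        detectors.append(c)
detectors = np.array(detectors)

def is_anomalous(x):
    """A point is anomalous if it falls inside any detector's sphere."""
    return bool((np.linalg.norm(detectors - x, axis=1) <= R_DET).any())

print(is_anomalous(np.array([0.5, 0.5])))         # centre of the self region
```

The RNSAP of the paper replaces the purely random candidate generation with preference-guided evolution and a redundancy-based stopping rule; the accept/reject core is the same.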

  7. Using learning automata to determine proper subset size in high-dimensional spaces

    NASA Astrophysics Data System (ADS)

    Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz

    2017-03-01

    In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Given the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective goal is to efficiently determine a feature subset that achieves an appropriate tradeoff between the learning algorithm's accuracy and its efficiency. First, using an existing weighting function, the feature list is sorted and subsets of the list of different sizes are selected as candidates. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm, and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification; the results confirm its promising performance in attaining the identified goal.
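The filter-then-wrapper pattern FSLA builds on can be sketched without the learning-automaton search: rank features with a cheap filter score, then evaluate nested prefixes of the ranking with a classifier, trading accuracy against subset size. The correlation filter, the Gaussian naive Bayes wrapper and the synthetic (non-text) data below are all stand-ins:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic data: 5 informative features (the label is a function of their sum)
# followed by 45 pure-noise features.
rng = np.random.default_rng(0)
n = 400
informative = rng.normal(0, 1, (n, 5))
y = (informative.sum(axis=1) > 0).astype(int)
X = np.hstack([informative, rng.normal(0, 1, (n, 45))])

# Filter step: rank features by |correlation with the label|.
score = np.abs(np.corrcoef(X.T, y)[:-1, -1])
order = np.argsort(score)[::-1]

# Wrapper step: evaluate nested prefixes of the ranking (FSLA would instead
# let a learning automaton pick which subset sizes to try).
results = {}
for k in (1, 2, 5, 10, 50):
    results[k] = cross_val_score(GaussianNB(), X[:, order[:k]], y, cv=5).mean()
print(results)
```

The fitness FSLA assigns would combine each `results[k]` with a penalty on `k`, so that a small subset with near-peak accuracy wins.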

  8. Visual scan-path analysis with feature space transient fixation moments

    NASA Astrophysics Data System (ADS)

    Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong

    2003-05-01

    The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the feature space is defined through the concept of visual similarity and nonlinear low dimensional embedding, which provides a mapping from the image space into a low dimensional feature manifold that preserves the intrinsic similarity of image patterns. This enables the definition of perceptually meaningful features without the use of domain-specific knowledge. On this basis, the paper introduces a new concept called Feature Space Transient Fixation Moments (TFM) and uses it to tackle the problem of feature space representation of visual search. We demonstrate the practical value of this concept for characterizing the dynamics of eye movements in goal-directed visual search tasks, and illustrate how the model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.

  9. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human-interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: a Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high performance results include AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results indicate that the new methods can match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement for feature selection in CADx problems, DR techniques offer a complementary approach which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing the intricate data structure of the feature space. PMID:20175497

  10. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    PubMed

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of the affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a wide range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables, or dimensions, that measure distinct aspects of the emotion. One of the most common such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and experimentally show the advantages of modelling them in this way. We consider a multimodal approach that includes features from the electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a stage of feature selection developed within an embedded approach known as Recursive Feature Elimination (RFE), initially proposed for SVMs. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for arousal and valence discrimination are the EEG, the electrooculogram/electromyogram (EOG/EMG) and the galvanic skin response (GSR).
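scikit-learn has no multiple-output SVR, so the sketch below runs RFE with a single-output linear SVR on a synthetic "arousal" target; the informative feature indices are known by construction, which lets us check that the elimination loop recovers them. Everything here (data, feature count, targets) is invented for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.feature_selection import RFE

# Synthetic regression problem: 20 candidate features, of which only
# features 0 and 3 actually drive the invented "arousal" target.
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (300, 20))
arousal = X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(0, 1, 300)

# RFE repeatedly fits the estimator and drops the weakest-coefficient feature.
rfe = RFE(LinearSVR(max_iter=10000), n_features_to_select=2).fit(X, arousal)
print(np.where(rfe.support_)[0])   # expected to recover features 0 and 3
```

The paper's embedded approach works the same way, but ranks features by the weights of a multiple-output regressor so that arousal and valence are pruned jointly.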

  11. Nonlinear dimensionality reduction of CT histogram based feature space for predicting recurrence-free survival in non-small-cell lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2015-03-01

    High-resolution CT scanners have improved the detection of lung cancers, and recently released positive results from the National Lung Screening Trial (NLST) in the US show that CT screening does in fact reduce lung cancer related mortality. While this study shows the efficacy of CT based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study investigated whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve performance in predicting the likelihood of recurrence-free survival (RFS) for patients with NSCLC.

  12. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  13. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

    In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented. It transforms the computation of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and experimental results on gene expression data demonstrate the effectiveness of the method.
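The sample-space trick can be sketched directly: when genes (d) far outnumber samples (n), diagonalize the n-by-n Gram matrix instead of the d-by-d covariance matrix, and recover each principal axis as a weighted sum of the centered samples. This is the standard "snapshot" PCA construction, which matches the paper's core idea for PCA, though not its CPP extension:

```python
import numpy as np

# 20 samples with 5000 "genes": the covariance matrix would be 5000 x 5000,
# but the Gram matrix is only 20 x 20.
rng = np.random.default_rng(0)
n, d = 20, 5000
X = rng.normal(0, 1, (n, d))
Xc = X - X.mean(axis=0)                 # center in gene space

G = Xc @ Xc.T                           # n x n Gram matrix
vals, vecs = np.linalg.eigh(G)
vals, vecs = vals[::-1], vecs[:, ::-1]  # sort eigenpairs descending

# Each principal axis is a weighted sum of the samples: u_k = Xc^T v_k / sqrt(l_k).
axes = Xc.T @ vecs / np.sqrt(np.maximum(vals, 1e-12))

proj = Xc @ axes[:, :2]                 # same projections as covariance PCA
print(proj.shape)
```

The guard `np.maximum(vals, 1e-12)` only matters for the near-zero trailing eigenvalue introduced by centering; the leading axes used here are numerically clean.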

  14. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.

  15. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
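The KPCA half of the methodology is straightforward to sketch with scikit-learn; the cubic-curve "parameter field" below is an invented stand-in, and the Langevin MCMC that would run on the reduced space is omitted:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Invented nonlinearly correlated "parameters": 3 coordinates driven by one
# latent variable t, plus a little noise.
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 500)
params = np.column_stack([t, t**2, t**3]) + 0.01 * rng.normal(size=(500, 3))

# KPCA identifies a low-dimensional feature space; fit_inverse_transform
# enables an approximate pre-image map back to parameter space.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True)
Z = kpca.fit_transform(params)      # low-dimensional feature space
back = kpca.inverse_transform(Z)    # approximate reconstruction
print(Z.shape)
```

In the paper's framework, the sampler explores `Z`-like coordinates and the inverse map carries proposals back to the full parameter field before each forward-model evaluation.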

  16. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing adequate maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computing resources, t-distributed stochastic neighbor embedding (t-SNE) is adopted to reduce the dimensionality of the feature vector. Finally, a probabilistic neural network is introduced for fault identification. Two typical kinds of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246

  17. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing adequate maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computing resources, t-distributed stochastic neighbor embedding (t-SNE) is adopted to reduce the dimensionality of the feature vector. Finally, a probabilistic neural network is introduced for fault identification. Two typical kinds of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  18. EM in high-dimensional spaces.

    PubMed

    Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim

    2005-06-01

    This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
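The core idea, modelling the data inside the subspace spanned by the samples rather than compressing to an arbitrary low dimension, can be sketched as follows. This is not the paper's algorithm (scikit-learn's EM is simply run on rotated coordinates), just an illustration that an (N - 1)-dimensional basis suffices when N < D:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# 40 samples in 1000 dimensions: all data lie in a 39-dimensional affine
# subspace, so EM can work there without losing anything.
rng = np.random.default_rng(0)
N, D = 40, 1000
X = np.vstack([rng.normal(0, 1, (20, D)), rng.normal(2, 1, (20, D))])

mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[: N - 1].T        # coordinates in the sample-spanned subspace

gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(Z)
labels = gmm.predict(Z)
print(labels)
```

With the two synthetic groups well separated, the mixture recovers them; the paper's contribution is handling this regime robustly inside EM itself rather than as a preprocessing step.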

  19. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.

  20. Optimal linear and nonlinear feature extraction based on the minimization of the increased risk of misclassification. [Bayes theorem - statistical analysis/data processing]

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.

    1974-01-01

    General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.

  1. Deep neural networks for texture classification-A theoretical analysis.

    PubMed

    Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant

    2018-01-01

    We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional as compared to handwritten digits or other object recognition datasets and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity. Copyright © 2017 Elsevier Ltd. All rights reserved.
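
    The vanishing of Relative Contrast, (Dmax - Dmin)/Dmin, with growing dimension is easy to verify empirically. A small numpy experiment (illustrative only, with arbitrary sample sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n=500):
    """(Dmax - Dmin) / Dmin for distances from the origin to uniform points."""
    pts = rng.uniform(-1, 1, size=(n, dim))
    dist = np.linalg.norm(pts, axis=1)
    return (dist.max() - dist.min()) / dist.min()

results = {dim: relative_contrast(dim) for dim in (2, 10, 100, 1000)}
for dim, rc in results.items():
    print(dim, round(rc, 3))
```

In low dimensions the contrast is huge (some points lie near the query, some far away); by a thousand dimensions all distances concentrate around the same value and the contrast is a small fraction of unity.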

  2. Sampling and Visualizing Creases with Scale-Space Particles

    PubMed Central

    Kindlmann, Gordon L.; Estépar, Raúl San José; Smith, Stephen M.; Westin, Carl-Fredrik

    2010-01-01

    Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. PMID:19834216
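
    A scale-space stack, an n-D image turned into an (n + 1)-D volume of progressively blurred copies, can be sketched with a separable Gaussian blur in numpy. This is a minimal illustration of the stack itself, not the authors' particle system or their spline interpolation across scale; the image and scale values are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(sigma):
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur2d(img, sigma):
    """Separable Gaussian blur with reflected borders."""
    k = gauss_kernel(sigma)
    r = len(k) // 2
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

img = rng.normal(size=(64, 64))                      # stand-in for a 2-D image
sigmas = (0.5, 1.0, 2.0, 4.0)
stack = np.stack([blur2d(img, s) for s in sigmas])   # the (n + 1)-D scale-space
print(stack.shape)                                   # one blurred copy per scale
```

Each successive plane is smoother than the last, which is the axis along which scale-space particles move to find the scale of maximal feature strength.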

  3. Transition probabilities for non self-adjoint Hamiltonians in infinite dimensional Hilbert spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagarello, F., E-mail: fabio.bagarello@unipa.it

    In a recent paper we have introduced several possible inequivalent descriptions of the dynamics and of the transition probabilities of a quantum system when its Hamiltonian is not self-adjoint. Our analysis was carried out in finite dimensional Hilbert spaces. This is useful, but quite restrictive, since many physically relevant quantum systems live in infinite dimensional Hilbert spaces. In this paper we consider this situation, and we discuss some applications to well known models introduced in the literature in recent years: the extended harmonic oscillator, the Swanson model and a generalized version of the Landau levels Hamiltonian. Not surprisingly, we find new interesting features not previously found in finite dimensional Hilbert spaces, useful for a deeper comprehension of this kind of physical system.

  4. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.

  5. Binary classification of items of interest in a repeatable process

    DOEpatents

    Abell, Jeffrey A.; Spicer, John Patrick; Wincek, Michael Anthony; Wang, Hui; Chakraborty, Debejyo

    2014-06-24

    A system includes host and learning machines in electrical communication with sensors positioned with respect to an item of interest, e.g., a weld, and memory. The host executes instructions from memory to predict a binary quality status of the item. The learning machine receives signals from the sensor(s), identifies candidate features, and extracts features from the candidates that are more predictive of the binary quality status relative to other candidate features. The learning machine maps the extracted features to a dimensional space that includes most of the items from a passing binary class and excludes all or most of the items from a failing binary class. The host also compares the received signals for a subsequent item of interest to the dimensional space to thereby predict, in real time, the binary quality status of the subsequent item of interest.

  6. High dimensional feature reduction via projection pursuit

    NASA Technical Reports Server (NTRS)

    Jimenez, Luis; Landgrebe, David

    1994-01-01

    The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of that technology is the AVIRIS system, which collects image data in 220 bands. As a result of this, new algorithms must be developed in order to analyze the more complex data effectively. Data in a high dimensional space presents a substantial challenge, since intuitive concepts valid in a 2-3 dimensional space do not necessarily apply in higher dimensional spaces. For example, high dimensional space is mostly empty. This results from the concentration of data in the corners of hypercubes. Other examples may be cited. Such observations suggest the need to project data to a subspace of a much lower dimension on a problem specific basis in such a manner that information is not lost. Projection Pursuit is a technique that will accomplish such a goal. Since it processes data in lower dimensions, it should avoid many of the difficulties of high dimensional spaces. In this paper, we begin the investigation of some of the properties of Projection Pursuit for this purpose.
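
    The "mostly empty" observation can be made concrete: the fraction of a hypercube's volume occupied by its inscribed ball collapses with dimension, so uniformly distributed data concentrates near the corners. A stdlib-only check:

```python
import math

def ball_fraction(d):
    """Volume of the unit d-ball divided by that of its bounding cube [-1, 1]^d."""
    ball = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return ball / 2 ** d

for d in (2, 3, 10, 20):
    print(d, ball_fraction(d))
```

At d = 2 the ball fills about 78.5% of the square; by d = 10 it fills less than 0.25%, and by d = 20 essentially none, which is why projections to low-dimensional subspaces behave so differently from the ambient space.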

  7. Predict subcellular locations of singleplex and multiplex proteins by semi-supervised learning and dimension-reducing general mode of Chou's PseAAC.

    PubMed

    Pacharawongsakda, Eakasit; Theeramunkong, Thanaruk

    2013-12-01

    Predicting protein subcellular location is one of the major challenges in bioinformatics, since such knowledge helps us understand protein functions and enables us to select the targeted proteins during the drug discovery process. While many computational techniques have been proposed to improve predictive performance for protein subcellular location, they have several shortcomings. In this work, we propose a method to solve three main issues in such techniques: i) manipulation of multiplex proteins, which may exist in or move between multiple cellular compartments, ii) handling of high dimensionality in input and output spaces and iii) requirement of sufficient labeled data for model training. Towards these issues, this work presents a new computational method for predicting proteins which have either single or multiple locations. The proposed technique, namely iFLAST-CORE, incorporates dimensionality reduction in the feature and label spaces with a co-training paradigm for semi-supervised multi-label classification. For this purpose, Singular Value Decomposition (SVD) is applied to transform the high-dimensional feature space and label space into lower-dimensional spaces. After that, due to the limitation of labeled data, co-training regression makes use of unlabeled data by predicting the target values in the lower-dimensional spaces of the unlabeled data. In the last step, the components of the SVD are used to project labels in the lower-dimensional space back to those in the original space, and an adaptive threshold is used to map a numeric value to a binary value for label determination. A set of experiments on viral proteins and gram-negative bacterial proteins shows that our proposed method improves classification performance in terms of various evaluation metrics such as Aiming (or Precision), Coverage (or Recall) and macro F-measure, compared to the traditional method that uses only labeled data.
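
    The SVD-based reduction of the label space, and the final back-projection with a threshold, can be sketched in numpy. This illustrates the generic idea only, not the iFLAST-CORE pipeline; the matrix sizes and the fixed 0.5 threshold are invented for the example (the paper uses an adaptive threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, k = 100, 12, 5                 # samples, labels, reduced label dimension
Y = (rng.random(size=(n, q)) < 0.2).astype(float)  # sparse multi-label matrix

# Truncated SVD of the label matrix: Y ≈ U_k diag(s_k) Vt_k.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Zy = U[:, :k] * s[:k]                # labels expressed in the k-dim space

# A regressor would be trained from reduced features to Zy; the final step
# projects predictions back to label space and binarises with a threshold.
Y_hat = (Zy @ Vt[:k]) > 0.5
print(Y_hat.shape)
```

Working in the k-dimensional label space keeps the regression targets low-dimensional, and the stored `Vt` rows are all that is needed to map predictions back.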

  8. Multisensor Analysis of Spectral Dimensionality and Soil Diversity in the Great Central Valley of California.

    PubMed

    Sousa, Daniel; Small, Christopher

    2018-02-14

    Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area - despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system.

  9. Multisensor Analysis of Spectral Dimensionality and Soil Diversity in the Great Central Valley of California

    PubMed Central

    Small, Christopher

    2018-01-01

    Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area – despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system. PMID:29443900

  10. Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection

    DTIC Science & Technology

    2008-05-01

    with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M × N, is given by r = ∑m ∑n (Amn ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale

  11. Feature extraction with deep neural networks by a generalized discriminant analysis.

    PubMed

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.

  12. Dimensionality reduction in epidemic spreading models

    NASA Astrophysics Data System (ADS)

    Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.

    2015-09-01

    Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric features mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
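
    ISOMAP itself is compact enough to sketch: a k-nearest-neighbour graph, geodesic distances by Floyd-Warshall, then classical MDS on those distances. A minimal numpy implementation run on a synthetic 1-D curve embedded in 10-D (illustrative only, not the authors' epidemic data; all sizes and parameters here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a 1-D curve embedded in 10-D space.
t = np.sort(rng.uniform(0, 3 * np.pi, 120))
curve = np.stack([np.cos(t), np.sin(t), t / 3], axis=1)
X = curve @ rng.normal(size=(3, 10)) + 0.01 * rng.normal(size=(120, 10))

def isomap(X, n_neighbors=8, n_components=2):
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    # k-nearest-neighbour graph; non-edges start at infinity.
    G = np.full((n, n), np.inf)
    nn = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    rows = np.repeat(np.arange(n), n_neighbors)
    G[rows, nn.ravel()] = D[rows, nn.ravel()]
    G = np.minimum(G, G.T)               # symmetrise the graph
    np.fill_diagonal(G, 0.0)
    # Geodesic distances by Floyd-Warshall.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

emb = isomap(X)
print(emb.shape)
```

Because the data are intrinsically one-dimensional, the first embedding coordinate recovers the position along the curve, the same mechanism by which the paper's embedded outbreak curves order instants in time.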

  13. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
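
    The first of those features, spectral calculation of spatial derivatives, amounts to multiplying the transform by ik in the Fourier domain. A one-dimensional numpy illustration of that single ingredient (the 3-D method applies the same idea along each axis of the grid):

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(x)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # wavenumbers of the grid
df = np.fft.ifft(1j * k * np.fft.fft(f)).real  # d/dx via multiplication by ik

print(np.max(np.abs(df - np.cos(x))))          # error at machine precision
```

For a band-limited function on a periodic grid the spectral derivative is exact to rounding error, which is the accuracy that motivates the k-space approach over finite differences.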

  14. High Dimensional Classification Using Features Annealed Independence Rules.

    PubMed

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is still poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
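
    The core of a FAIR-style selector, ranking features by the two-sample t-statistic and keeping only the top m, is a few lines of numpy. A sketch on synthetic data with a handful of planted informative features (sample sizes, shift, and m are invented for the example; the paper derives the optimal m from an error bound rather than fixing it):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, informative = 40, 500, 10      # n << d; only 10 features carry signal
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[y == 1, :informative] += 2.0       # planted mean shift for class 1

# Two-sample t-statistic for every feature.
m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
v0, v1 = X[y == 0].var(0, ddof=1), X[y == 1].var(0, ddof=1)
t = (m1 - m0) / np.sqrt(v0 / (n // 2) + v1 / (n // 2))

# Keep the m features with the largest |t|; discard the noise dimensions.
m = 10
selected = np.argsort(np.abs(t))[::-1][:m]
print(sorted(selected.tolist()))
```

Restricting the independence rule to the selected features avoids the noise accumulation over the hundreds of uninformative centroid coordinates that the abstract describes.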

  15. Tonal Interface to MacroMolecules (TIMMol): A Textual and Tonal Tool for Molecular Visualization

    ERIC Educational Resources Information Center

    Cordes, Timothy J.; Carlson, C. Britt; Forest, Katrina T.

    2008-01-01

    We developed the three-dimensional visualization software, Tonal Interface to MacroMolecules or TIMMol, for studying atomic coordinates of protein structures. Key features include audio tones indicating x, y, z location, identification of the cursor location in one-dimensional and three-dimensional space, textual output that can be easily linked…

  16. Effective degrees of freedom of a random walk on a fractal

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.

    2015-12-01

    We argue that a non-Markovian random walk on a fractal can be treated as a Markovian process in a fractional dimensional space with a suitable metric. This allows us to define the fractional dimensional space allied to the fractal as the ν-dimensional space Fν equipped with the metric induced by the fractal topology. The relation between the number of effective spatial degrees of freedom of walkers on the fractal (ν) and fractal dimensionalities is deduced. The intrinsic time of random walk in Fν is inferred. The Laplacian operator in Fν is constructed. This allows us to map physical problems on fractals into the corresponding problems in Fν. In this way, essential features of physics on fractals are revealed. Particularly, subdiffusion on path-connected fractals is elucidated. The Coulomb potential of a point charge on a fractal embedded in the Euclidean space is derived. Intriguing attributes of some types of fractals are highlighted.

  17. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
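
    The Mercer property, that the kernel equals an inner product in some feature space, can be checked directly for a small case: the degree-2 polynomial kernel (x·y + 1)^2 on 2-D inputs has an explicit 6-dimensional feature map. This is a textbook illustration of the kernel trick, not the paper's data-driven mixture density kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel (x.y + 1)^2."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

Phi = np.array([phi(x) for x in X])
K_explicit = Phi @ Phi.T                 # inner products in feature space
K_kernel = (X @ X.T + 1) ** 2            # kernel evaluated in input space
print(np.allclose(K_explicit, K_kernel))
```

Because the two matrices agree, any linear algorithm that touches the data only through inner products can be run in the feature space while computing nothing but kernel values, which is what lets linear clustering and classification recover nonlinear structure.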

  18. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.

  19. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    PubMed

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A Novel Method Using Abstract Convex Underestimation in Ab-Initio Protein Structure Prediction for Guiding Search in Conformational Feature Space.

    PubMed

    Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng

    2016-01-01

    To address the searching problem of protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE) based on the framework of evolutionary algorithms was proposed. Computing such conformations, essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the protein conformational space should be reduced to a proper level. In this paper, the high-dimensional original conformational space was converted into a feature space of considerably reduced dimension by a feature extraction technique, and the underestimate space could be constructed according to abstract convex theory. Thus, the entropy effect caused by searching in the high-dimensional conformational space could be avoided through such conversion. The tight lower bound estimate information was obtained to guide the searching direction, and the invalid searching area, in which the global optimal solution is not located, could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, thereby making the computational cost lower and the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique to solve the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with It Fix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more efficiently obtain the near-native protein structure.

  1. Creating Body Shapes From Verbal Descriptions by Linking Similarity Spaces.

    PubMed

    Hill, Matthew Q; Streuber, Stephan; Hahn, Carina A; Black, Michael J; O'Toole, Alice J

    2016-11-01

    Brief verbal descriptions of people's bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features. © The Author(s) 2016.

  2. Visual classification of medical data using MLP mapping.

    PubMed

    Cağatay Güler, E; Sankur, B; Kahya, Y P; Raudys, S

    1998-05-01

    In this work we discuss the design of a novel non-linear mapping method for visual classification based on multilayer perceptrons (MLP) and assigned class target values. In training the perceptron, one or more target output values for each class in a 2-dimensional space are used. In other words, class membership information is interpreted visually as closeness to target values in a 2D feature space. This mapping is obtained by training the multilayer perceptron (MLP) using class membership information, input data and judiciously chosen target values. Weights are estimated in such a way that each training feature of the corresponding class is forced to be mapped onto the corresponding 2-dimensional target value.
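
    A minimal numpy version of the idea, training a one-hidden-layer perceptron to map inputs onto assigned 2-D class targets and then classifying by nearest target, might look like this (the toy data, network sizes, target points, and learning rate are invented for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy classes and their assigned 2-D target points.
n = 100
X = np.vstack([rng.normal(-2, 1, size=(n, 2)), rng.normal(2, 1, size=(n, 2))])
labels = np.repeat([0, 1], n)
targets = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = targets[labels]                  # per-sample 2-D target values

# One-hidden-layer perceptron trained with squared error to hit the targets.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    out = H @ W2 + b2
    err = out - Y                    # gradient of squared error (up to 2x)
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Classify by closeness of the mapped point to each class target.
out = np.tanh(X @ W1 + b1) @ W2 + b2
pred = np.argmin(np.linalg.norm(out[:, None] - targets[None], axis=-1), axis=1)
print((pred == labels).mean())
```

Class membership enters only through the chosen target points, so the 2-D network output doubles as a visualization coordinate, which is the point of the paper's mapping.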

  3. Six-dimensional real and reciprocal space small-angle X-ray scattering tomography

    NASA Astrophysics Data System (ADS)

    Schaff, Florian; Bech, Martin; Zaslansky, Paul; Jud, Christoph; Liebi, Marianne; Guizar-Sicairos, Manuel; Pfeiffer, Franz

    2015-11-01

    When used in combination with raster scanning, small-angle X-ray scattering (SAXS) has proven to be a valuable imaging technique of the nanoscale, for example of bone, teeth and brain matter. Although two-dimensional projection imaging has been used to characterize various materials successfully, its three-dimensional extension, SAXS computed tomography, poses substantial challenges, which have yet to be overcome. Previous work using SAXS computed tomography was unable to preserve oriented SAXS signals during reconstruction. Here we present a solution to this problem and obtain a complete SAXS computed tomography, which preserves oriented scattering information. By introducing virtual tomography axes, we take advantage of the two-dimensional SAXS information recorded on an area detector and use it to reconstruct the full three-dimensional scattering distribution in reciprocal space for each voxel of the three-dimensional object in real space. The presented method could be of interest for a combined six-dimensional real and reciprocal space characterization of mesoscopic materials with hierarchically structured features with length scales ranging from a few nanometres to a few millimetres—for example, biomaterials such as bone or teeth, or functional materials such as fuel-cell or battery components.

  4. Six-dimensional real and reciprocal space small-angle X-ray scattering tomography.

    PubMed

    Schaff, Florian; Bech, Martin; Zaslansky, Paul; Jud, Christoph; Liebi, Marianne; Guizar-Sicairos, Manuel; Pfeiffer, Franz

    2015-11-19

    When used in combination with raster scanning, small-angle X-ray scattering (SAXS) has proven to be a valuable imaging technique of the nanoscale, for example of bone, teeth and brain matter. Although two-dimensional projection imaging has been used to characterize various materials successfully, its three-dimensional extension, SAXS computed tomography, poses substantial challenges, which have yet to be overcome. Previous work using SAXS computed tomography was unable to preserve oriented SAXS signals during reconstruction. Here we present a solution to this problem and obtain a complete SAXS computed tomography, which preserves oriented scattering information. By introducing virtual tomography axes, we take advantage of the two-dimensional SAXS information recorded on an area detector and use it to reconstruct the full three-dimensional scattering distribution in reciprocal space for each voxel of the three-dimensional object in real space. The presented method could be of interest for a combined six-dimensional real and reciprocal space characterization of mesoscopic materials with hierarchically structured features with length scales ranging from a few nanometres to a few millimetres--for example, biomaterials such as bone or teeth, or functional materials such as fuel-cell or battery components.

  5. Penalized Ordinal Regression Methods for Predicting Stage of Cancer in High-Dimensional Covariate Spaces.

    PubMed

    Gentry, Amanda Elswick; Jackson-Cook, Colleen K; Lyon, Debra E; Archer, Kellie J

    2015-01-01

    The pathological description of the stage of a tumor is an important clinical designation and is considered, like many other forms of biomedical data, an ordinal outcome. Currently, statistical methods for predicting an ordinal outcome using clinical, demographic, and high-dimensional correlated features are lacking. In this paper, we propose a method that fits an ordinal response model to predict an ordinal outcome for high-dimensional covariate spaces. Our method penalizes some covariates (high-throughput genomic features) without penalizing others (such as demographic and/or clinical covariates). We demonstrate the application of our method to predict the stage of breast cancer. In our model, breast cancer subtype is a nonpenalized predictor, and CpG site methylation values from the Illumina Human Methylation 450K assay are penalized predictors. The method has been made available in the ordinalgmifs package in the R programming environment.

  6. Dynamic adaptive learning for decision-making supporting systems

    NASA Astrophysics Data System (ADS)

    He, Haibo; Cao, Yuan; Chen, Sheng; Desai, Sachi; Hohil, Myron E.

    2008-03-01

    This paper proposes a novel adaptive learning method for data mining in support of decision-making systems. Due to the inherent information ambiguity/uncertainty, high dimensionality and noise in many homeland security and defense applications, such as surveillance, monitoring, net-centric battlefield, and others, it is critical to develop autonomous learning methods that efficiently learn useful information from raw data to aid the decision-making process. The proposed method is based on a dynamic learning principle in feature spaces. Generally speaking, conventional approaches to learning from high dimensional data sets include various feature extraction (principal component analysis, wavelet transform, and others) and feature selection (embedded approach, wrapper approach, filter approach, and others) methods. However, adaptive learning across different feature spaces remains poorly understood. We propose an integrative approach that takes advantage of feature selection and hypothesis ensemble techniques to achieve our goal. Based on the training data distributions, a feature score function provides a measurement of the importance of different features for learning purposes. Multiple hypotheses are then iteratively developed in different feature spaces according to their learning capabilities. Unlike the preset iteration steps in many existing ensemble learning approaches, such as the adaptive boosting (AdaBoost) method, the iterative learning process stops automatically when the intelligent system cannot provide a better understanding than a random guess in that particular subset of feature spaces. Finally, a voting algorithm combines the decisions from the different hypotheses to produce the final prediction. Simulation analyses of the proposed method on classification of different US military aircraft databases show its effectiveness.
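
    The learning loop described above can be sketched as follows; the ANOVA F-score and depth-limited decision trees used here are stand-in choices, not the paper's exact score function or hypotheses.

```python
# Sketch: rank features by a score function, train one hypothesis per feature
# subset, stop when a subset does no better than a random guess, then combine
# the hypotheses by voting. Score function and weak learner are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)
scores = f_classif(X, y)[0]
order = np.argsort(scores)[::-1]          # features ranked by importance

hypotheses, subsets = [], []
for i in range(0, len(order), 2):         # each hypothesis sees 2 features
    subset = order[i:i + 2]
    h = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[:, subset], y)
    if h.score(X[:, subset], y) <= 0.5:   # no better than random: stop
        break
    hypotheses.append(h)
    subsets.append(subset)

# Voting across the hypotheses learned in different feature spaces.
votes = np.stack([h.predict(X[:, s]) for h, s in zip(hypotheses, subsets)])
final = (votes.mean(axis=0) > 0.5).astype(int)
ensemble_acc = (final == y).mean()
```
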

  7. Intelligent feature selection techniques for pattern classification of Lamb wave signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hinders, Mark K.; Miller, Corey A.

    2014-02-18

    Lamb wave interaction with flaws is a complex, three-dimensional phenomenon, which often frustrates signal interpretation schemes based on mode arrival time shifts predicted by dispersion curves. As the flaw severity increases, scattering and mode conversion effects will often dominate the time-domain signals, obscuring available information about flaws because multiple modes may arrive on top of each other. Even for idealized flaw geometries the scattering and mode conversion behavior of Lamb waves is very complex. Here, multi-mode Lamb waves in a metal plate are propagated across a rectangular flat-bottom hole in a sequence of pitch-catch measurements corresponding to the double crosshole tomography geometry. The flaw is sequentially deepened, with the Lamb wave measurements repeated at each flaw depth. Lamb wave tomography reconstructions are used to identify which waveforms have interacted with the flaw and thereby carry information about its depth. Multiple features are extracted from each of the Lamb wave signals using wavelets, which are then fed to statistical pattern classification algorithms that identify flaw severity. In order to achieve the highest classification accuracy, an optimal feature space is required, but it is never known a priori which features will be best. For structural health monitoring we make use of the fact that physical flaws, such as corrosion, will only increase over time. This allows us to identify feature vectors which are topologically well-behaved by requiring that sequential classes “line up” in feature vector space. An intelligent feature selection routine is illustrated that identifies favorable class distributions in multi-dimensional feature spaces using computational homology theory. Betti numbers and formal classification accuracies are calculated for each feature space subset to establish a correlation between the topology of the class distribution and the corresponding classification accuracy.

  8. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.

  9. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347
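
    A rough sketch of the two-stage idea, with a Fisher-style (ANOVA) score standing in for the paper's own feature subset score criterion:

```python
# FSS-t-SNE sketch: score features, keep a subset, then visualize in 2-D with
# t-SNE. The scoring function and kept-subset size are assumptions; the paper
# defines its own feature subset score criterion for multi-sensor signals.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)   # stand-in for multi-sensor features

# Score features and keep the top half (the subset-selection step).
scores = f_classif(X, y)[0]
keep = np.argsort(scores)[::-1][: X.shape[1] // 2]
X_subset = X[:, keep]

# Embed the selected features in 2-D for visualization.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_subset)
```
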

  10. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  11. Effective degrees of freedom of a random walk on a fractal.

    PubMed

    Balankin, Alexander S

    2015-12-01

    We argue that a non-Markovian random walk on a fractal can be treated as a Markovian process in a fractional dimensional space with a suitable metric. This allows us to define the fractional dimensional space allied to the fractal as the ν-dimensional space F(ν) equipped with the metric induced by the fractal topology. The relation between the number of effective spatial degrees of freedom of walkers on the fractal (ν) and fractal dimensionalities is deduced. The intrinsic time of random walk in F(ν) is inferred. The Laplacian operator in F(ν) is constructed. This allows us to map physical problems on fractals into the corresponding problems in F(ν). In this way, essential features of physics on fractals are revealed. Particularly, subdiffusion on path-connected fractals is elucidated. The Coulomb potential of a point charge on a fractal embedded in the Euclidean space is derived. Intriguing attributes of some types of fractals are highlighted.

  12. Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation

    NASA Astrophysics Data System (ADS)

    Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao

    2017-09-01

    Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact, their mapping relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image. It is based on the multiview feature embedding (MFE) and the locality-sensitive autoencoders (LSAEs). On the one hand, we first depict the manifold regularized sparse low-rank approximation for MFE and then the input image is characterized by a fused feature descriptor. On the other hand, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of our proposed method.

  13. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients, but we also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information in a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
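
    The sparse-coding step can be approximated with scikit-learn's dictionary learner, which alternates dictionary and coefficient updates much as K-SVD does (the exact K-SVD atom update differs); K=4 atoms and at most L=2 non-zero coefficients match the optimal setting reported above, while the synthetic features are placeholders for the DCE-MRI features.

```python
# K-SVD-style sparse representation via DictionaryLearning (a stand-in for
# true K-SVD). Each case ends up summarized by at most 2 of 4 coefficients.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
features = rng.normal(size=(60, 12))   # synthetic stand-in for DCE-MRI features

dl = DictionaryLearning(n_components=4, transform_algorithm="omp",
                        transform_n_nonzero_coefs=2, random_state=0,
                        max_iter=50)
codes = dl.fit_transform(features)     # sparse codes, one row per case

nonzeros_per_row = (codes != 0).sum(axis=1)
```

    The low-dimensional `codes` would then feed the logistic regression classifier described in the abstract.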

  14. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    PubMed

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
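
    A minimal, hypothetical sketch of gene masking: a binary-encoded genetic algorithm searches for a feature mask that maximizes cross-validated classifier accuracy. Population size, mutation rate, and the naive Bayes fitness classifier are illustrative choices, not the paper's configuration.

```python
# Binary-encoded GA for feature masking (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=20, n_informative=3,
                           n_redundant=0, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    # Fitness = cross-validated accuracy on the masked (selected) features.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()

# Truncation selection plus bit-flip mutation over binary masks.
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]      # keep the best half
    children = parents.copy()
    flip = rng.random(children.shape) < 0.05          # mutate 5% of bits
    children[flip] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
masked_acc = fitness(best)
full_acc = fitness(np.ones(X.shape[1], dtype=int))    # no masking baseline
```
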

  15. The Three-Dimensional Morphology of VY Canis Majoris. II. Polarimetry and the Line-of-Sight Distribution of the Ejecta

    NASA Astrophysics Data System (ADS)

    Jones, Terry Jay; Humphreys, Roberta M.; Helton, L. Andrew; Gui, Changfeng; Huang, Xiang

    2007-06-01

    We use imaging polarimetry taken with the HST Advanced Camera for Surveys High Resolution Camera to explore the three-dimensional structure of the circumstellar dust distribution around the red supergiant VY Canis Majoris. The polarization vectors of the nebulosity surrounding VY CMa show a strong centrosymmetric pattern in all directions except directly east and range from 10% to 80% in fractional polarization. In regions that are optically thin, and therefore likely to have only single scattering, we use the fractional polarization and photometric color to locate the physical position of the dust along the line of sight. Most of the individual arclike features and clumps seen in the intensity image are also features in the fractional polarization map. These features must be distinct geometric objects. If they were just local density enhancements, the fractional polarization would not change so abruptly at the edge of the feature. The location of these features in the ejecta of VY CMa using polarimetry provides a determination of their three-dimensional geometry independent of, but in close agreement with, the results from our study of their kinematics (Paper I). Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  16. Online dimensionality reduction using competitive learning and Radial Basis Function network.

    PubMed

    Tomenko, Vladimir

    2011-06-01

    The general purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing nonlinearly embedded manifolds and large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR is comprised of two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of specific loss function and synthesis of the training algorithm for Radial Basis Function network results in global preservation of data structures and online processing of new patterns. The RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), neural network for Sammon's projection (SAMANN) and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Growing a hypercubical output space in a self-organizing feature map.

    PubMed

    Bauer, H U; Villmann, T

    1997-01-01

    Neural maps project data from an input space onto a neuron position in an (often lower dimensional) output space grid in a neighborhood preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve an optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by adapting the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.

  18. Quantitative lithologic mapping in spectral ratio feature space - Volcanic, sedimentary and metamorphic terrains

    NASA Technical Reports Server (NTRS)

    Campos-Marquetti, Raul, Jr.; Rockwell, Barnaby

    1990-01-01

    The nature of spectral lithologic mapping is studied utilizing ratios centered around the wavelength means of TM imagery. Laboratory-derived spectra are analyzed to determine the two-dimensional relationships and distributions visible in spectral ratio feature space. The spectral distributions of various rocks and minerals in ratio feature space are found to be controlled by several spectrally dominant molecules. Three study areas were examined: Rawhide Mining District, Nevada; Manzano Mountains, New Mexico; and the Sevilleta Long Term Ecological Research site in New Mexico. It is shown that, in the comparison of two ratio plots of laboratory reflectance spectra, i.e., 0.66/0.485 micron versus 1.65/2.22 microns with those derived from TM data, several molecules spectrally dominate the reflectance characteristic of surface lithologic units. Utilizing the above ratio combination, two areas are successfully mapped based on their distribution in spectral ratio feature space.
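
    The two ratio axes named above can be computed directly from reflectance samples; the band centers and example reflectance values below are hypothetical.

```python
# Spectral ratio feature space sketch: each sample's reflectance at the four
# TM band centers (~0.485, 0.66, 1.65, 2.22 micrometres) collapses to a point
# in a 2-D ratio space. Reflectance values here are illustrative only.
import numpy as np

# Rows: samples; columns: reflectance at 0.485, 0.66, 1.65, 2.22 micrometres.
refl = np.array([
    [0.10, 0.18, 0.30, 0.22],
    [0.25, 0.26, 0.40, 0.38],
])
ratio_x = refl[:, 1] / refl[:, 0]   # 0.66 / 0.485 ratio
ratio_y = refl[:, 2] / refl[:, 3]   # 1.65 / 2.22 ratio
feature_space = np.column_stack([ratio_x, ratio_y])
```

    Lithologic units then map to clusters in this ratio plane, which is the distribution the abstract exploits for mapping.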

  19. Effective traffic features selection algorithm for cyber-attacks samples

    NASA Astrophysics Data System (ADS)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the variation of clustering performance after removing a certain feature. Finally, the degree of distinctiveness of each feature vector is evaluated according to the result; the effective features are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features is reduced, which in turn reduces the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
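
    The removal-based criterion can be sketched as follows, with the silhouette score standing in for the paper's clustering performance measure (an assumption) and synthetic blobs standing in for traffic samples.

```python
# Distinctiveness of a feature = drop in k-means++ clustering quality when
# that feature is removed; features above a threshold are kept as "effective".
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=200, centers=2, n_features=6, random_state=0)

def clustering_quality(data):
    labels = KMeans(n_clusters=2, init="k-means++", n_init=10,
                    random_state=0).fit_predict(data)
    return silhouette_score(data, labels)

base = clustering_quality(X)
distinctiveness = np.array([
    base - clustering_quality(np.delete(X, j, axis=1))
    for j in range(X.shape[1])
])
threshold = 0.0                      # illustrative threshold
effective = np.where(distinctiveness > threshold)[0]
```
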

  20. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using single-dimensional depictions that are useful to examine certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or may be missed entirely if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.

  1. Three-dimensional perspective software for representation of digital imagery data. [Olympic National Park, Washington

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1980-01-01

    A generalized three dimensional perspective software capability was developed within the framework of a low cost computer oriented geographically based information system using the Earth Resources Laboratory Applications Software (ELAS) operating subsystem. This perspective software capability, developed primarily to support data display requirements at the NASA/NSTL Earth Resources Laboratory, provides a means of displaying three dimensional feature space object data in two dimensional picture plane coordinates and makes it possible to overlay different types of information on perspective drawings to better understand the relationship of physical features. An example topographic data base is constructed and is used as the basic input to the plotting module. Examples are shown which illustrate oblique viewing angles that convey spatial concepts and relationships represented by the topographic data planes.

  2. Functional Connectivity among Spikes in Low Dimensional Space during Working Memory Task in Rat

    PubMed Central

    Tian, Xin

    2014-01-01

    Working memory (WM) is critically important in cognitive tasks. Functional connectivity has been a powerful tool for understanding the mechanisms underlying information processing during WM tasks. The aim of this study is to investigate how to effectively characterize the dynamic variations of the functional connectivity, in a low dimensional space, among the principal components (PCs) extracted from instantaneous firing rate series. Spikes were obtained from the medial prefrontal cortex (mPFC) of rats with an implanted microelectrode array and then transformed into continuous series via the instantaneous firing rate method. The Granger causality method is used to study the functional connectivity. Three scalar metrics were then applied to identify the changes of the reduced-dimensionality functional network during working memory tasks: functional connectivity (GC), global efficiency (E) and causal density (CD). As a comparison, GC, E and CD were also calculated to describe the functional connectivity in the original space. The results showed that these network characteristics changed dynamically during the correct WM tasks. The measured values increased to a maximum and then decreased, both in the original and in the reduced-dimensionality space. Moreover, the feature values in the reduced-dimensionality space were significantly higher during the WM tasks than those in the original space. These findings suggested that functional connectivity among the spikes varied dynamically during the WM tasks and could be described effectively in the low dimensional space. PMID:24658291
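
    The pipeline of dimensionality reduction followed by causality analysis can be sketched as below; the synthetic rate series, lag order of 1, and plain OLS F-statistic are illustrative simplifications of the study's analysis.

```python
# Sketch: PCA on firing-rate series, then a lag-1 Granger causality F-test
# between principal components (restricted vs. unrestricted AR regression).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
T = 500
# Synthetic stand-in for instantaneous firing rates (5 channels);
# channel 1 drives channel 0 with a one-step lag.
rates = rng.normal(size=(T, 5))
rates[1:, 0] += 0.8 * rates[:-1, 1]

pcs = PCA(n_components=2).fit_transform(rates)

def granger_f(x, y):
    # Does y's past improve prediction of x beyond x's own past? (lag 1)
    x_t, x_1, y_1 = x[1:], x[:-1], y[:-1]
    A = np.column_stack([np.ones_like(x_1), x_1])        # restricted model
    B = np.column_stack([A, y_1])                        # unrestricted model
    rss_r = np.sum((x_t - A @ np.linalg.lstsq(A, x_t, rcond=None)[0]) ** 2)
    rss_u = np.sum((x_t - B @ np.linalg.lstsq(B, x_t, rcond=None)[0]) ** 2)
    n = len(x_t)
    return (rss_r - rss_u) / (rss_u / (n - 3))           # F-statistic, 1 df

F_01 = granger_f(pcs[:, 0], pcs[:, 1])   # PC2 -> PC1 influence
F_10 = granger_f(pcs[:, 1], pcs[:, 0])   # PC1 -> PC2 influence
```
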

  3. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz transform (FKT) based technique offers some attractive properties for desired-class oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described under two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting the few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces the data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computation burden via the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
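
    The complementary-eigenvector property at the heart of FKT is easy to verify numerically; the sketch below (with synthetic stand-ins for the two hyperspectral classes) jointly whitens the two class covariances and keeps the eigenvectors that best represent the desired class.

```python
# FKT sketch: after whitening the summed class covariances, the transformed
# class covariances share eigenvectors whose eigenvalues sum to one, so the
# eigenvectors that best represent the desired class carry the least clutter.
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))   # desired class
X2 = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))   # background clutter

S1 = np.cov(X1, rowvar=False)
S2 = np.cov(X2, rowvar=False)

# Whitening operator for S1 + S2.
d, V = np.linalg.eigh(S1 + S2)
W = V @ np.diag(1.0 / np.sqrt(d))

S1w = W.T @ S1 @ W
S2w = W.T @ S2 @ W          # complementary: S1w + S2w equals the identity
lam, U = np.linalg.eigh(S1w)

# Keep eigenvectors with the largest eigenvalues for the desired class
# (their clutter-class eigenvalues are 1 - lam, i.e. the smallest).
top = U[:, np.argsort(lam)[::-1][:2]]
reduced = (X1 @ W) @ top    # dimensionality-reduced desired-class data
```
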

  4. Blended particle filters for large-dimensional chaotic dynamical systems

    PubMed Central

    Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.

    2014-01-01

    A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886

  5. Multiview alignment hashing for efficient image search.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2015-03-01

    Hashing is a popular and efficient method for nearest neighbor search in large-scale data spaces by embedding high-dimensional feature descriptors into a similarity preserving Hamming space with a low dimension. For most hashing methods, the performance of retrieval heavily depends on the choice of the high-dimensional feature descriptor. Furthermore, a single type of feature cannot be descriptive enough for different images when it is used for hashing. Thus, how to combine multiple representations for learning effective hashing functions is an imminent task. In this paper, we present a novel unsupervised multiview alignment hashing approach based on regularized kernel nonnegative matrix factorization, which can find a compact representation uncovering the hidden semantics and simultaneously respecting the joint probability distribution of data. In particular, we aim to seek a matrix factorization to effectively fuse the multiple information sources meanwhile discarding the feature redundancy. Since the raised problem is regarded as nonconvex and discrete, our objective function is then optimized via an alternate way with relaxation and converges to a locally optimal solution. After finding the low-dimensional representation, the hashing functions are finally obtained through multivariable logistic regression. The proposed method is systematically evaluated on three data sets: 1) Caltech-256; 2) CIFAR-10; and 3) CIFAR-20, and the results show that our method significantly outperforms the state-of-the-art multiview hashing techniques.
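
    The paper's own hashing functions come from kernel NMF plus multivariable logistic regression; as a generic, much simpler illustration of embedding high-dimensional descriptors into a similarity-preserving Hamming space, here is a random-hyperplane (sign) hashing sketch. All names and parameters are illustrative assumptions.

```python
import numpy as np

def sign_hash(X, n_bits=16, seed=0):
    """Hash rows of X into n_bits-bit binary codes via random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

rng = np.random.default_rng(1)
x = rng.normal(size=128)
near = x + 0.01 * rng.normal(size=128)   # a close neighbour of x
far = rng.normal(size=128)               # an unrelated vector
codes = sign_hash(np.stack([x, near, far]), n_bits=64)
print(hamming(codes[0], codes[1]) < hamming(codes[0], codes[2]))  # True
```

    Nearby descriptors fall on the same side of almost every random hyperplane, so their codes stay close in Hamming distance.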

  6. Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition

    NASA Astrophysics Data System (ADS)

    Hafizhelmi Kamaru Zaman, Fadhlan

    2018-03-01

    Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used for this purpose; however, the process of acquiring LLE features requires substantial computation and resources. To overcome this limitation, we propose a locally applied Local Orthogonal Least Squares (LOLS) model that can be used as initial feature extraction before the application of LLE. By constructing least squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparison against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNNs), also reveals the superiority of the proposed method under the SSPP constraint.

  7. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. 
    First, the increased importance of second-order statistics in analyzing high dimensional data is recognized. By investigating the characteristics of high dimensional data, a reason why second-order statistics must be taken into account in high dimensional data is suggested. Recognizing their importance, there is a need to represent second-order statistics, and a method to visualize statistics using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first- and second-order statistics.
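
    The multistage idea in the abstract, eliminating unlikely classes cheaply at each stage before the full classifier runs, can be sketched as follows. The distance threshold and class means are hypothetical, standing in for whatever truncation criterion a given stage uses.

```python
import numpy as np

def prune_classes(x, means, threshold=3.0):
    """Stage 1: keep only classes whose mean lies within `threshold`
    Euclidean distance of x; later stages score the survivors only."""
    d = np.linalg.norm(means - x, axis=1)
    survivors = np.where(d < threshold)[0]
    # Fallback: never prune every class away
    return survivors if survivors.size else np.array([np.argmin(d)])

means = np.array([[0.0, 0.0], [10.0, 10.0], [0.5, 0.5]])
x = np.array([0.2, 0.1])
print(prune_classes(x, means))  # [0 2] -- the distant class 1 is eliminated
```

    The saving comes from later, expensive stages (e.g. full quadratic discriminants) only visiting the surviving class list.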

  8. The formation method of the feature space for the identification of fatigued bills

    NASA Astrophysics Data System (ADS)

    Kang, Dongshik; Oshiro, Ayumu; Ozawa, Kenji; Mitsui, Ikugo

    2014-10-01

    Fatigued bills cause problems such as paper jams in bill handling machines. In the discrimination of fatigued bills using an acoustic signal, the variation of the observed bill sound is considered to be one of the causes of misclassification. A technique is therefore in demand to make the classification of fatigued bills more efficient. In this paper, we propose an algorithm that extracts feature quantities of a bill sound from its acoustic signal using frequency differences, and we carry out discrimination experiments on fatigued bills with a Support Vector Machine (SVM). The frequency-difference feature quantity can represent how the frequency components of an acoustic signal vary with the degree of bill fatigue. The generalization performance of an SVM does not depend on the dimension of the feature space, even for high dimensional feature spaces such as bill acoustic signals. Furthermore, an SVM can induce an optimal classifier which considers combinations of features by virtue of polynomial kernel functions.
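
    One plausible reading of a "frequency difference" feature is a band-averaged difference between the magnitude spectrum of a test bill sound and that of a reference recording. The sketch below is an assumption about the feature's form, not the paper's exact definition; band count and signals are illustrative.

```python
import numpy as np

def freq_diff_feature(sig, ref, n_bands=8):
    """Band-averaged magnitude-spectrum difference between a bill sound
    and a reference recording (illustrative; parameters are assumptions)."""
    S = np.abs(np.fft.rfft(sig))
    R = np.abs(np.fft.rfft(ref))
    d = S - R
    # Average the difference over n_bands equal frequency bands
    bands = np.array_split(d, n_bands)
    return np.array([b.mean() for b in bands])

t = np.linspace(0, 1, 1024, endpoint=False)
ref = np.sin(2 * np.pi * 50 * t)
worn = 0.6 * np.sin(2 * np.pi * 50 * t)   # fatigued bill: damped fundamental
f = freq_diff_feature(worn, ref)
print(f.shape)  # (8,)
```

    A damped low-frequency component shows up as a negative entry in the first band, which is the kind of fatigue-dependent variation an SVM could then separate on.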

  9. Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition

    PubMed Central

    Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao

    2017-01-01

    Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Then, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize Speeded-Up Robust Features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, support vector machine, is utilized to realize fault diagnosis. Verification data were collected from Case Western Reserve University Bearing Data Center, and the experimental result indicates that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
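
    The first step above, turning a 1-D vibration signal into a recurrence plot image, is easy to sketch: time-delay embed the signal, then threshold pairwise distances. The embedding dimension, delay, and threshold below are illustrative choices, not the paper's settings.

```python
import numpy as np

def recurrence_plot(signal, eps=0.1, dim=2, tau=1):
    """Binary recurrence plot of a 1-D signal after time-delay embedding."""
    n = len(signal) - (dim - 1) * tau
    emb = np.stack([signal[i * tau: i * tau + n] for i in range(dim)], axis=1)
    # Pairwise distances between embedded states; recurrences fall below eps
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(np.uint8)

t = np.linspace(0, 4 * np.pi, 200)
rp = recurrence_plot(np.sin(t), eps=0.2)
print(rp.shape)  # (199, 199)
```

    For a periodic signal the plot shows diagonal line structures; a SURF-style descriptor is then computed on this image in the method above.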

  10. Abstract Conceptual Feature Ratings Predict Gaze within Written Word Arrays: Evidence from a Visual Wor(l)d Paradigm

    ERIC Educational Resources Information Center

    Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J

    2017-01-01

    The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF…

  11. Self-organizing neural networks--an alternative way of cluster analysis in clinical chemistry.

    PubMed

    Reibnegger, G; Wachter, H

    1996-04-15

    Supervised learning schemes have been employed by several workers for training neural networks designed to solve clinical problems. We demonstrate that unsupervised techniques can also produce interesting and meaningful results. Using a data set on the chemical composition of milk from 22 different mammals, we demonstrate that self-organizing feature maps (Kohonen networks) as well as a modified version of error backpropagation technique yield results mimicking conventional cluster analysis. Both techniques are able to project a potentially multi-dimensional input vector onto a two-dimensional space whereby neighborhood relationships remain conserved. Thus, these techniques can be used for reducing dimensionality of complicated data sets and for enhancing comprehensibility of features hidden in the data matrix.
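
    A compact sketch of the Kohonen self-organizing map described above: inputs of any dimension are mapped onto a 2-D grid of units whose neighbouring prototypes are pulled together, so neighbourhood relationships are conserved. Grid size, learning rate, and data are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr=0.5, sigma=1.0, seed=0):
    """Tiny Kohonen SOM: learns a prototype per grid unit such that
    nearby units hold similar prototypes (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    W = rng.normal(size=(gx * gy, data.shape[1]))
    for e in range(epochs):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best matching unit
        # Gaussian neighbourhood around the BMU on the grid
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        W += lr * (1 - e / epochs) * h[:, None] * (x - W)
    return W, coords

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(3, 0.1, (50, 3))])
W, coords = train_som(data)
print(W.shape)  # (16, 3)
```

    After training, the two clusters activate different regions of the grid, which is the "projection onto a two-dimensional space" the abstract refers to.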

  12. Simplifying the representation of complex free-energy landscapes using sketch-map

    PubMed Central

    Ceriotti, Michele; Tribello, Gareth A.; Parrinello, Michele

    2011-01-01

    A new scheme, sketch-map, for obtaining a low-dimensional representation of the region of phase space explored during an enhanced dynamics simulation is proposed. We show evidence, from an examination of the distribution of pairwise distances between frames, that some features of the free-energy surface are inherently high-dimensional. This makes dimensionality reduction problematic because the data does not satisfy the assumptions made in conventional manifold learning algorithms. We therefore propose that when dimensionality reduction is performed on trajectory data one should think of the resultant embedding as a quickly sketched set of directions rather than a road map. In other words, the embedding tells one about the connectivity between states but does not provide the vectors that correspond to the slow degrees of freedom. This realization informs the development of sketch-map, which endeavors to reproduce the proximity information from the high-dimensionality description in a space of lower dimensionality even when a faithful embedding is not possible. PMID:21730167

  13. A new approach for embedding causal sets into Minkowski space

    NASA Astrophysics Data System (ADS)

    Liu, He; Reid, David D.

    2018-06-01

    This paper reports on recent work toward an approach for embedding causal sets into two-dimensional Minkowski space. The main new feature of the present scheme is its use of the spacelike distance measure to construct an ordering of causal set elements within anti-chains of a causal set as an aid to the embedding procedure.

  14. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    PubMed

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
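
    The top (k1,k2) outlier above is defined over uncertain data with probabilistic instances; as a plain deterministic analogue of scoring abnormal objects in a multi-dimensional feature space, here is a kNN-distance outlier sketch (the score and data are illustrative, not the paper's sampling algorithm).

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Score each point by its distance to its k-th nearest neighbour;
    large scores indicate likely outliers."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)
    return d_sorted[:, k]   # column 0 is the distance to the point itself

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (30, 4)), [[8.0, 8.0, 8.0, 8.0]]])
scores = knn_outlier_scores(X)
print(int(np.argmax(scores)))  # 30 -- the planted abnormal object
```

    In the paper's setting each object would additionally carry several possible instances with probabilities, and the score would be estimated by sampling over those instances.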

  15. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    PubMed

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2018-01-01

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single high dimensional vector and then apply a dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently explore the complementary properties among the different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low dimensional representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited, and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.

  16. Three-dimensional object recognitions from two-dimensional images using wavelet transforms and neural networks

    NASA Astrophysics Data System (ADS)

    Deschenes, Sylvain; Sheng, Yunlong; Chevrette, Paul C.

    1998-03-01

    3D object classification from 2D IR images is shown. The wavelet transform is used for edge detection, and edge tracking is used for removing noise effectively in the wavelet transform. The invariant Fourier descriptor is used to describe the contour curves. Invariance under out-of-plane rotation is achieved by the feature space trajectory neural network working as a classifier.
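
    An invariant Fourier descriptor of a closed contour, as used above, can be sketched like this: treat boundary points as complex numbers, take the FFT, and normalize away translation, rotation, and scale. Coefficient count and contours are illustrative.

```python
import numpy as np

def fourier_descriptor(contour, n_coeff=8):
    """Translation-, rotation- and scale-invariant Fourier descriptor of a
    closed contour given as complex points x + iy (illustrative sketch)."""
    F = np.fft.fft(contour)
    F[0] = 0                 # drop DC term -> translation invariance
    mag = np.abs(F)          # drop phase -> rotation/start-point invariance
    mag /= mag[1]            # normalize by first harmonic -> scale invariance
    return mag[1:n_coeff + 1]

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.exp(1j * theta)
moved = 3 + 2j + 0.5 * circle * np.exp(1j * 0.7)  # translated, scaled, rotated
d1 = fourier_descriptor(circle)
d2 = fourier_descriptor(moved)
print(np.allclose(d1, d2))  # True
```

    The two descriptors match despite the similarity transform, which is why such coefficients make stable inputs for a contour classifier.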

  17. LDA boost classification: boosting by topics

    NASA Astrophysics Data System (ADS)

    Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li

    2012-12-01

    AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features; in this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm while achieving higher precision. Moreover, a two-stage iterative weighting method, called Cute Integration in this article, is proposed for improving accuracy by integrating the weak classifiers into a strong classifier in a more rational way. Mutual Information is used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized in generating the strong classifier. Experimental results reveal that LDABoost, performing categorization in a low-dimensional space, has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.

  18. Why is the World four-dimensional? Hermann Weyl’s 1955 argument and the topology of causation

    NASA Astrophysics Data System (ADS)

    De Bianchi, Silvia

    2017-08-01

    This paper approaches the question of space dimensionality by discussing a neglected argument proposed by Hermann Weyl in 1955. In Why is the World Four-Dimensional? (1955), Weyl offered a different argument from the one generally attributed to him and presented in Raum-Zeit-Materie. In the first sections of the paper, this new argument and its features are spelled out, and in the last section, I shall develop some useful remarks on the concept of topology of causation that can still inform our reflection on the dimensionality of the world.

  19. The oligonucleotide frequency derived error gradient and its application to the binning of metagenome fragments

    PubMed Central

    2009-01-01

    Background The characterisation, or binning, of metagenome fragments is an important first step to further downstream analysis of microbial consortia. Here, we propose a one-dimensional signature, OFDEG, derived from the oligonucleotide frequency profile of a DNA sequence, and show that it is possible to obtain a meaningful phylogenetic signal for relatively short DNA sequences. The one-dimensional signal is essentially a compact representation of higher dimensional feature spaces of greater complexity and is intended to improve on the tetranucleotide frequency feature space preferred by current compositional binning methods. Results We compare the fidelity of OFDEG against tetranucleotide frequency in both an unsupervised and semi-supervised setting on simulated metagenome benchmark data. Four tests were conducted using assembler output of Arachne and phrap, and for each, performance was evaluated on contigs which are greater than or equal to 8 kbp in length and contigs which are composed of at least 10 reads. Using G-C content in conjunction with OFDEG gave an average accuracy of 96.75% (semi-supervised) and 95.19% (unsupervised), versus 94.25% (semi-supervised) and 82.35% (unsupervised) for tetranucleotide frequency. Conclusion We have presented an observation of an alternative characteristic of DNA sequences. The proposed feature representation has proven to be more beneficial than the existing tetranucleotide frequency space to the metagenome binning problem. We do note, however, that our observation of OFDEG deserves further analysis and investigation. Unsupervised clustering revealed OFDEG related features performed better than standard tetranucleotide frequency in representing a relevant organism specific signal. Further improvement in binning accuracy is given by semi-supervised classification using OFDEG. 
The emphasis on a feature-driven, bottom-up approach to the problem of binning reveals promising avenues for future development of techniques to characterise short environmental sequences without bias toward cultivable organisms. PMID:19958473

  20. A consensus embedding approach for segmentation of high resolution in vivo prostate magnetic resonance imagery

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contribute to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. 
Quantitative evaluation on 18 1.5 T prostate MR datasets against corresponding histology obtained from the multi-site ACRIN trials shows a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method is successfully able to detect suspicious regions in the prostate.

  1. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
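
    The Random Projections step above has a very small core: multiply the features by a random matrix scaled so that pairwise distances are approximately preserved (the Johnson-Lindenstrauss property). A minimal sketch, with illustrative dimensions:

```python
import numpy as np

def random_project(X, k, seed=0):
    """Project rows of X onto a random k-dimensional subspace; pairwise
    distances are approximately preserved (Johnson-Lindenstrauss)."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 1000))   # 20 states with 1000 features each
Y = random_project(X, 200)        # compressed to 200 dimensions
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(abs(proj - orig) / orig < 0.2)  # distances agree within ~20%
```

    Because the projection is data-independent, it can be fixed once up front, which is what makes it cheap inside a policy-iteration loop.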

  2. Groupwise registration of cardiac perfusion MRI sequences using normalized mutual information in high dimension

    NASA Astrophysics Data System (ADS)

    Hamrouni, Sameh; Rougon, Nicolas; Prêteux, Françoise

    2011-03-01

    In perfusion MRI (p-MRI) exams, short-axis (SA) image sequences are captured at multiple slice levels along the long-axis of the heart during the transit of a vascular contrast agent (Gd-DTPA) through the cardiac chambers and muscle. Compensating cardio-thoracic motions is a requirement for enabling computer-aided quantitative assessment of myocardial ischaemia from contrast-enhanced p-MRI sequences. The classical paradigm consists of registering each sequence frame on a reference image using some intensity-based matching criterion. In this paper, we introduce a novel unsupervised method for the spatio-temporal groupwise registration of cardiac p-MRI exams based on normalized mutual information (NMI) between high-dimensional feature distributions. Here, local contrast enhancement curves are used as a dense set of spatio-temporal features, and statistically matched through variational optimization to a target feature distribution derived from a registered reference template. The hard issue of probability density estimation in high-dimensional state spaces is bypassed by using consistent geometric entropy estimators, allowing NMI to be computed directly from feature samples. Specifically, a computationally efficient kth-nearest neighbor (kNN) estimation framework is retained, leading to closed-form expressions for the gradient flow of NMI over finite- and infinite-dimensional motion spaces. This approach is applied to the groupwise alignment of cardiac p-MRI exams using a free-form Deformation (FFD) model for cardio-thoracic motions. Experiments on simulated and natural datasets suggest its accuracy and robustness for registering p-MRI exams comprising more than 30 frames.

  3. Online feature selection with streaming features.

    PubMed

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
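
    The flavour of online streaming feature selection can be sketched with a toy rule: accept an arriving feature if it is relevant to the label and not redundant with features already kept. The correlation thresholds below are assumptions for illustration, not the OSFS criteria (which are based on conditional independence tests).

```python
import numpy as np

def stream_select(y, feature_stream, rel=0.3, red=0.9):
    """Keep a streaming feature if it correlates with the label (> rel)
    and is not a near-duplicate of an already kept feature (< red)."""
    kept = []
    for f in feature_stream:
        if abs(np.corrcoef(f, y)[0, 1]) < rel:
            continue   # weakly relevant: discard
        if any(abs(np.corrcoef(f, g)[0, 1]) > red for g in kept):
            continue   # redundant with a kept feature: discard
        kept.append(f)
    return kept

rng = np.random.default_rng(0)
y = rng.normal(size=100)
f_good = y + 0.1 * rng.normal(size=100)        # relevant feature
f_dup = f_good + 0.01 * rng.normal(size=100)   # redundant copy
f_noise = rng.normal(size=100)                 # irrelevant feature
kept = stream_select(y, [f_good, f_dup, f_noise])
print(len(kept))  # 1
```

    The key property shared with OSFS is that decisions are made one feature at a time, without ever seeing the full feature space.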

  4. Universal dynamical properties preclude standard clustering in a large class of biochemical data.

    PubMed

    Gomez, Florian; Stoop, Ralph L; Stoop, Ruedi

    2014-09-01

    Clustering of chemical and biochemical data based on observed features is a central cognitive step in the analysis of chemical substances, in particular in combinatorial chemistry, or of complex biochemical reaction networks. Often, for reasons unknown to the researcher, this step produces disappointing results. Once the sources of the problem are known, improved clustering methods might revitalize the statistical approach of compound and reaction search and analysis. Here, we present a generic mechanism that may be at the origin of many clustering difficulties. The variety of dynamical behaviors that can be exhibited by complex biochemical reactions on variation of the system parameters are fundamental system fingerprints. In parameter space, shrimp-like or swallow-tail structures separate parameter sets that lead to stable periodic dynamical behavior from those leading to irregular behavior. We work out the genericity of this phenomenon and demonstrate novel examples for their occurrence in realistic models of biophysics. Although we elucidate the phenomenon by considering the emergence of periodicity in dependence on system parameters in a low-dimensional parameter space, the conclusions from our simple setting are shown to continue to be valid for features in a higher-dimensional feature space, as long as the feature-generating mechanism is not too extreme and the dimension of this space is not too high compared with the amount of available data. For online versions of super-paramagnetic clustering see http://stoop.ini.uzh.ch/research/clustering. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Q-space analysis of light scattering by ice crystals

    NASA Astrophysics Data System (ADS)

    Heinson, Yuli W.; Maughan, Justin B.; Ding, Jiachen; Chakrabarti, Amitabha; Yang, Ping; Sorensen, Christopher M.

    2016-12-01

    Q-space analysis is applied to extensive simulations of the single-scattering properties of ice crystals with various habits/shapes over a range of sizes. The analysis uncovers features common to all the shapes: a forward scattering regime with intensity quantitatively related to the Rayleigh scattering by the particle and the internal coupling parameter, followed by a Guinier regime dependent upon the particle size, a complex power law regime with incipient two dimensional diffraction effects, and, in some cases, an enhanced backscattering regime. The effects of significant absorption on the scattering profile are also studied. The overall features found for the ice crystals are similar to features in scattering from same sized spheres.

  6. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  7. Optical recognition of statistical patterns

    NASA Astrophysics Data System (ADS)

    Lee, S. H.

    1981-12-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.
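    The two-class FKT described above can be sketched numerically: whitening the summed class covariance makes the two class covariances share eigenvectors, and axes whose class-1 eigenvalues fall near 1 or 0 are the most discriminative. A minimal NumPy sketch on synthetic data (not the bird/fish images of the paper):

```python
import numpy as np

def fkt_basis(X1, X2):
    """Fukunaga-Koontz basis for two sample matrices (rows = samples).

    Returns the transform W (rows are basis vectors) and the class-1
    eigenvalues; values near 1 or 0 mark the most discriminative axes,
    since the class-2 eigenvalues are their complements to 1."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    # Whiten the summed covariance S1 + S2.
    d, E = np.linalg.eigh(S1 + S2)
    P = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    # In the whitened space both class covariances share eigenvectors.
    lam, V = np.linalg.eigh(P @ S1 @ P.T)
    return V.T @ P, lam

rng = np.random.default_rng(0)
# Class 1 varies mostly along axis 0, class 2 along axis 3.
X1 = rng.normal(size=(500, 4)) * np.array([3.0, 1.0, 1.0, 1.0])
X2 = rng.normal(size=(500, 4)) * np.array([1.0, 1.0, 1.0, 3.0])
W, lam = fkt_basis(X1, X2)
```

    Because the per-axis eigenvalue pairs of the two whitened covariances sum to 1, keeping the coefficients at both extremes retains exactly the features that best separate the classes, which is why a classifier on only two F-K coefficients can work.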

  8. Optical recognition of statistical patterns

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1981-01-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.

  9. Efficient analysis of three dimensional EUV mask induced imaging artifacts using the waveguide decomposition method

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas

    2009-10-01

    This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons for EUV-specific imaging artifacts and to devise illumination and feature dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.

  10. Graph theory approach to the eigenvalue problem of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. Graphic interpretation of the determinant of a matrix is employed to reduce a higher dimensional matrix into combinations of smaller dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller dimensional equivalents of the original numerical matrix. Computation time is reduced, and more accurate solutions become possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.

  11. Online signature recognition using principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan

    2016-12-01

    In this paper, we propose an algorithm for on-line signature recognition using the fingertip point in the air from the depth image acquired by Kinect. We extract 10 statistical features from each of the X, Y, and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, which retain 99.02% of the total variance. We implement the proposed algorithm and test it on actual on-line signatures. In experiments, we verify that the proposed method successfully classifies 15 different on-line signatures. The experimental results show a recognition rate of 98.47% when using only 10 feature vectors.
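    The PCA step in this pipeline can be illustrated with a short sketch; the data below are a synthetic stand-in for the paper's 30-dimensional signature features, not the Kinect recordings themselves:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: 300 signatures x 30 features, with variance
# concentrated in the leading directions (decaying scales).
X = rng.normal(size=(300, 30)) * np.linspace(3.0, 0.1, 30)

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

k = 10
Z = Xc @ Vt[:k].T   # each signature reduced to 10 principal components
```

    The retained-variance figure (99.02% in the paper) is simply the sum of the first k entries of `explained`; on this toy data the top 10 components capture a smaller share.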

  12. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

    The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
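    The ranging step rests on standard rectified stereo geometry, where depth is Z = f·B/d for focal length f, baseline B, and disparity d; the sketch below shows just that triangulation, not the Queen Victoria Algorithm itself (the camera numbers are hypothetical):

```python
def stereo_depth(f_px, baseline_m, x_left, x_right):
    """Depth of a matched point pair under rectified stereo geometry:
    Z = f * B / d, with disparity d = x_left - x_right (pixels)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return f_px * baseline_m / disparity

# Hypothetical rig: 700 px focal length, 0.5 m baseline; a feature
# matched 10 px apart in the two images triangulates to 35 m.
z = stereo_depth(700.0, 0.5, 410.0, 400.0)
```

    Depth falls off as 1/d, which is why nearby obstacles (large disparity) are ranged much more precisely than distant ones.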

  13. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    PubMed

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. To address this, we first design a similarity measure between the context information to take word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of the context information into the importance measure of the features, as a substitute for document and term frequency, and thereby propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  14. GPS test range mission planning

    NASA Astrophysics Data System (ADS)

    Roberts, Iris P.; Hancock, Thomas P.

    The principal features of the Test Range User Mission Planner (TRUMP), a PC-resident tool designed to aid in deploying and utilizing GPS-based test range assets, are reviewed. TRUMP features time history plots of time-space-position information (TSPI); performance based on a dynamic GPS/inertial system simulation; time history plots of TSPI data link connectivity; digital terrain elevation data maps with user-defined cultural features; and two-dimensional coverage plots of ground-based test range assets. Some functions to be added during the next development phase are discussed.

  15. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics

    NASA Astrophysics Data System (ADS)

    Wehmeyer, Christoph; Noé, Frank

    2018-06-01

    Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
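    As a rough intuition for the approach, and only for its linear analogue, not the deep network of the paper, a time-lagged autoencoder with linear layers reduces to reduced-rank regression of x(t+τ) on x(t); the rank constraint plays the role of the bottleneck. A sketch on a toy trajectory with one slow coordinate:

```python
import numpy as np

rng = np.random.default_rng(2)
T, tau = 2000, 5
# Toy trajectory: one slow random-walk coordinate plus two fast noise axes.
slow = 0.05 * np.cumsum(rng.normal(size=T))
X = np.column_stack([slow + 0.01 * rng.normal(size=T),
                     rng.normal(size=T),
                     rng.normal(size=T)])
X -= X.mean(axis=0)
X0, Xt = X[:-tau], X[tau:]          # time-lagged pairs (x(t), x(t+tau))

# Rank-1 regression of x(t+tau) on x(t): fit the full linear map, then
# truncate it to its leading singular direction (the "bottleneck").
B, *_ = np.linalg.lstsq(X0, Xt, rcond=None)
U, s, Vt = np.linalg.svd(X0 @ B, full_matrices=False)
encoder = B @ Vt[0]                 # 1-D embedding direction
z = X0 @ encoder                    # recovered slow collective variable
```

    The recovered coordinate tracks the slow mode because only the slow mode is predictable across the lag; the fast axes contribute nothing to the rank-1 map.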

  16. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
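    The flavor of the optimal-integration model can be conveyed by the standard two-cue Gaussian fusion rule, a textbook simplification rather than the paper's high-dimensional word-space model: each cue is weighted by its inverse variance, so the fused estimate leans toward whichever modality is currently more reliable.

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Minimum-variance fusion of two independent Gaussian cues:
    each estimate is weighted by its inverse variance (reliability)."""
    w = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w * mu_a + (1.0 - w) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# Noisy audio (variance 4) fused with a clearer visual cue (variance 1):
# the combined estimate leans toward vision and beats either cue alone.
mu, var = fuse(0.0, 4.0, 1.0, 1.0)
```

    In the paper's high-dimensional word space this simple rule no longer yields inverse effectiveness, which is exactly the dimensionality effect the model is built to capture.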

  17. Experimental witness of genuine high-dimensional entanglement

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Hu, Xiao-Min; Liu, Bi-Heng; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can

    2018-06-01

    Growing interest has been devoted to exploring high-dimensional quantum systems, for their promising perspectives in certain quantum tasks. How to characterize a high-dimensional entanglement structure is one of the basic questions in taking full advantage of them. However, it is not easy to capture the key feature of high-dimensional entanglement, since the correlations derived from high-dimensional entangled states can possibly be simulated with copies of lower-dimensional systems. Here, we follow the work of Kraft et al. [Phys. Rev. Lett. 120, 060502 (2018), 10.1103/PhysRevLett.120.060502], and present the experimental creation and detection, via a normalized witness operation, of genuine high-dimensional entanglement, which cannot be decomposed into lower-dimensional Hilbert spaces and thus forms entanglement structures that exist only in high-dimensional systems. Our experiment leads to further exploration of high-dimensional quantum systems.

  18. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting the most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP), a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738

  19. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
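    The rotation-invariance argument can be illustrated with a much-simplified one-dimensional pattern, here just the sorted distances from a reference star; the paper's one_DVP construction is more elaborate, but the invariance mechanism is the same.

```python
import numpy as np

def one_d_pattern(stars, ref=0):
    """Sorted distances from a reference star to all others; unchanged
    under rotation of the stellar image (simplified one_DVP stand-in)."""
    d = np.linalg.norm(stars - stars[ref], axis=1)
    return np.sort(d[d > 0])

rng = np.random.default_rng(4)
stars = rng.uniform(-1.0, 1.0, size=(8, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p1 = one_d_pattern(stars)
p2 = one_d_pattern(stars @ R.T)   # same pattern after image rotation
```

    Because the pattern is a plain 1-D vector, matching against a catalog reduces to comparing feature vectors, which is what makes the search fast.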

  20. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.

  1. Complex Functions with GeoGebra

    ERIC Educational Resources Information Center

    Breda, Ana Maria D'azevedo; Dos Santos, José Manuel Dos Santos

    2016-01-01

    Complex functions generally feature some interesting peculiarities when seen as extensions of real functions. The visualization of complex function properties usually requires the simultaneous visualization of two two-dimensional spaces. The multiple Windows of GeoGebra, combined with its ability of algebraic computation with complex numbers, allow the…

  2. Fast, Distributed Algorithms in Deep Networks

    DTIC Science & Technology

    2016-05-11


  3. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    NASA Technical Reports Server (NTRS)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms, and to design new ones, as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.
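    As a concrete illustration of the idea, with a hypothetical field and a made-up query rather than one of the paper's detectors, a condition query reduces to selecting a simple region of a derived space:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical 2-D slice of a vector field sampled on a 32x32 grid.
v = rng.normal(size=(32, 32, 3))

# Derived space: (speed, |curl_z proxy|) computed per grid point.
speed = np.linalg.norm(v, axis=-1)
curl_z = np.gradient(v[..., 1], axis=0) - np.gradient(v[..., 0], axis=1)

# The condition query "low speed AND strong rotation" is just a
# rectangular region of that 2-D derived space.
mask = (speed < 1.0) & (np.abs(curl_z) > 1.5)
hits = np.argwhere(mask)    # grid locations satisfying the query
```

    The complexity lives in the derived-space mapping (here speed and a curl proxy); the query itself stays a trivial geometric partition.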

  4. Three-Dimensional Messages for Interstellar Communication

    NASA Astrophysics Data System (ADS)

    Vakoch, Douglas A.

    One of the challenges facing independently evolved civilizations separated by interstellar distances is to communicate information unique to one civilization. One commonly proposed solution is to begin with two-dimensional pictorial representations of mathematical concepts and physical objects, in the hope that this will provide a foundation for overcoming linguistic barriers. However, significant aspects of such representations are highly conventional, and may not be readily intelligible to a civilization with different conventions. The process of teaching conventions of representation may be facilitated by the use of three-dimensional representations redundantly encoded in multiple formats (e.g., as both vectors and as rasters). After having illustrated specific conventions for representing mathematical objects in a three-dimensional space, this method can be used to describe a physical environment shared by transmitter and receiver: a three-dimensional space defined by the transmitter-receiver axis, and containing stars within that space. This method can be extended to show three-dimensional representations varying over time. Having clarified conventions for representing objects potentially familiar to both sender and receiver, novel objects can subsequently be depicted. This is illustrated through sequences showing interactions between human beings, which provide information about human behavior and personality. Extensions of this method may allow the communication of such culture-specific features as aesthetic judgments and religious beliefs. Limitations of this approach will be noted, with specific reference to ETI who are not primarily visual.

  5. Approximation of Optimal Infinite Dimensional Compensators for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Mingori, D. L.; Adamian, A.; Jabbari, F.

    1985-01-01

    The infinite dimensional compensator for a large class of flexible structures, modeled as distributed systems, is discussed, along with an approximation scheme for designing finite dimensional compensators that approximate the infinite dimensional compensator. The approximation scheme is applied to develop a compensator for a space antenna model based on wrap-rib antennas currently being built. While the present model has been simplified, it retains the salient features of rigid body modes and several distributed components of different characteristics. The control and estimator gains are represented by functional gains, which provide graphical representations of the control and estimator laws. These functional gains also indicate the convergence of the finite dimensional compensators and show which modes the optimal compensator ignores.

  6. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method makes it possible to project the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  7. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    NASA Astrophysics Data System (ADS)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher quality recommendations and better performance than do typical collaborative filtering and content-based filtering techniques.

  8. Exploring the parahippocampal cortex response to high and low spatial frequency spaces.

    PubMed

    Zeidman, Peter; Mullally, Sinéad L; Schwarzkopf, Dietrich Samuel; Maguire, Eleanor A

    2012-05-30

    The posterior parahippocampal cortex (PHC) supports a range of cognitive functions, in particular scene processing. However, it has recently been suggested that PHC engagement during functional MRI simply reflects the representation of three-dimensional local space. If so, PHC should respond to space in the absence of scenes, geometric layout, objects or contextual associations. It has also been reported that PHC activation may be influenced by low-level visual properties of stimuli such as spatial frequency. Here, we tested whether PHC was responsive to the mere sense of space in highly simplified stimuli, and whether this was affected by their spatial frequency distribution. Participants were scanned using functional MRI while viewing depictions of simple three-dimensional space, and matched control stimuli that did not depict a space. Half the stimuli were low-pass filtered to ascertain the impact of spatial frequency. We observed a significant interaction between space and spatial frequency in bilateral PHC. Specifically, stimuli depicting space (more than nonspatial stimuli) engaged the right PHC when they featured high spatial frequencies. In contrast, the interaction in the left PHC did not show a preferential response to space. We conclude that a simple depiction of three-dimensional space that is devoid of objects, scene layouts or contextual associations is sufficient to robustly engage the right PHC, at least when high spatial frequencies are present. We suggest that coding for the presence of space may be a core function of PHC, and could explain its engagement in a range of tasks, including scene processing, where space is always present.

  9. Improved classification accuracy by feature extraction using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.

    2003-05-01

    A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators: crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature-extractor-derived features yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.

  10. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.

    2012-01-01

    The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive than classical LDA in terms of both classification accuracy and computational cost. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
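    Classical LDA projection, the baseline of the comparison, can be sketched with scikit-learn on synthetic stand-in data (not EMG recordings); note that the projected dimension is bounded by the number of classes minus one:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
# Synthetic stand-in for EMG features: 3 movement classes, 7 features.
n_per, d = 100, 7
means = np.zeros((3, d))
means[0, 0] = means[1, 1] = means[2, 2] = 2.0
X = np.vstack([m + rng.normal(size=(n_per, d)) for m in means])
y = np.repeat([0, 1, 2], n_per)

# LDA projects to at most (n_classes - 1) = 2 dimensions.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
Z = lda.transform(X)
acc = lda.score(X, y)
```

    The extended variants in the paper (uncorrelated LDA, orthogonal LDA, OFNDA) differ in how they constrain these projection directions, not in this basic project-then-classify structure.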

  11. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. The Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors that are most relevant to the target class, the dimension of hyperspectral data can be reduced; thus, it presents significant advantages for near real time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) to perform target oriented band reduction. The performance of the proposed KFKT based target oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets, and the results are reported.

  12. Clustering by reordering of similarity and Laplacian matrices: Application to galaxy clusters

    NASA Astrophysics Data System (ADS)

    Mahmoud, E.; Shoukry, A.; Takey, A.

    2018-04-01

    Similarity metrics, kernels and similarity-based algorithms have gained much attention due to their increasing applications in information retrieval, data mining, pattern recognition and machine learning. Similarity graphs are often adopted as the underlying representation of similarity matrices and are at the origin of known clustering algorithms such as spectral clustering. Similarity matrices offer the advantage of working in object-object (two-dimensional) space, where visualization of cluster similarities is available, instead of object-feature (multi-dimensional) space. In this paper, sparse ɛ-similarity graphs are constructed and decomposed into strong components using appropriate methods such as the Dulmage-Mendelsohn permutation (DMperm) and/or the Reverse Cuthill-McKee (RCM) algorithm. The obtained strong components correspond to groups (clusters) in the input (feature) space. The parameter ɛi is estimated locally at each data point i from a corresponding narrow range of the number of nearest neighbors. Although more advanced clustering techniques are available, our method has the advantages of simplicity, better complexity and direct visualization of cluster similarities in a two-dimensional space. Also, no prior information about the number of clusters is needed. We conducted our experiments on two- and three-dimensional, small and large synthetic datasets as well as on a real astronomical dataset. The results are verified graphically and analyzed using gap statistics over a range of neighbors to verify the robustness of the algorithm and the stability of the results. Combining the proposed algorithm with gap statistics provides a promising tool for solving clustering problems. An astronomical application is conducted to confirm the existence of 45 galaxy clusters around the X-ray positions of galaxy clusters in the redshift range [0.1..0.8]. We re-estimate the photometric redshifts of the identified galaxy clusters and obtain acceptable values compared to published spectroscopic redshifts, with a 0.029 standard deviation of their differences.
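    The core construction, a sparse ε-similarity graph whose connected components give the clusters, can be sketched with SciPy. A single global ε is used here for brevity, whereas the paper estimates εi locally per point from nearest-neighbor distances.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def eps_graph_clusters(X, eps):
    """Sketch of the paper's idea: build a sparse epsilon-similarity
    graph (edges between points closer than eps) and take its connected
    components as clusters."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    adj = csr_matrix((d < eps) & (d > 0))   # sparse adjacency, no self-loops
    n_clusters, labels = connected_components(adj, directed=False)
    return n_clusters, labels

# Two well-separated 2-D blobs.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
n, labels = eps_graph_clusters(X, eps=1.0)
print(n)  # 2
```

    No cluster count is supplied anywhere; it emerges from the graph's component structure, which is the property the abstract highlights.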

  13. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. Contact: geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
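    The distance-preservation goal can be illustrated with classical multidimensional scaling, which embeds items so that pairwise distances are reproduced in the low-dimensional space. This linear sketch is only an analogy for the paper's non-linear reduction on Riemannian shape spaces, where the input distances would be geodesic shape distances.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS sketch: embed n items in k dimensions so that
    pairwise Euclidean distances approximate the distance matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J              # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]          # top-k eigenvalues
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Pairwise distances between three points on a line at 0, 3, 5.
D = np.array([[0., 3., 5.], [3., 0., 2.], [5., 2., 0.]])
Y = classical_mds(D, k=1)
print(np.round(abs(Y[0, 0] - Y[1, 0]), 6))  # 3.0 -- distances preserved
```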

  14. Guiding exploration in conformational feature space with Lipschitz underestimation for ab-initio protein structure prediction.

    PubMed

    Hao, Xiaohu; Zhang, Guijun; Zhou, Xiaogen

    2018-04-01

    Computing conformations, which is essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. Consequently, the dimension of the protein conformational space should be reduced to a proper level, and an effective exploration algorithm is needed. In this paper, a plug-in method for guiding exploration in conformational feature space with Lipschitz underestimation (LUE) for ab-initio protein structure prediction is proposed. The conformational space is first converted into the ultrafast shape recognition (USR) feature space. Based on the USR feature space, the conformational space can be further converted into an underestimation space according to Lipschitz estimation theory for guiding exploration. As a consequence of using the underestimation model, the tight lower-bound estimate can be used for exploration guidance, invalid sampling areas can be eliminated in advance, and the number of energy function evaluations can be reduced. The proposed method provides a novel technique to solve the exploration problem of protein conformational space. LUE is applied to the differential evolution (DE) algorithm and to the Metropolis Monte Carlo (MMC) algorithm available in Rosetta; in both cases, candidate conformations are screened by the underestimation method prior to energy calculation and selection. Further, LUE is compared with DE and MMC by testing on 15 small-to-medium structurally diverse proteins. Test results show that near-native protein structures with higher accuracy can be obtained more rapidly and efficiently with the use of LUE. Copyright © 2018 Elsevier Ltd. All rights reserved.
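    The screening idea rests on a simple property: for an L-Lipschitz energy function, every already-sampled point yields a lower bound on the energy everywhere else, and the tightest such bound can rule out regions before any costly energy evaluation. A one-dimensional sketch, with a toy energy and an assumed Lipschitz constant:

```python
import numpy as np

def lipschitz_underestimate(x_query, x_samples, f_samples, L):
    """Lipschitz lower bound: for an L-Lipschitz f, each sample gives
    f(x) >= f(x_i) - L*|x - x_i|. The max over samples is the tightest
    underestimate, mirroring the screening step of LUE (L assumed known)."""
    return np.max(f_samples - L * np.abs(x_query - x_samples))

f = np.cos                                # toy 1-Lipschitz "energy"
xs = np.array([0.0, 2.0, 4.0])            # already-evaluated points
bound = lipschitz_underestimate(1.0, xs, f(xs), L=1.0)
print(bound <= np.cos(1.0))  # True: the bound never exceeds the true value
```

    In the actual method this bound lives in the USR feature space and is used to discard candidate conformations whose underestimated energy is already too high.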

  15. Cardiac Monitor

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Under contract to Johnson Space Center, the University of Minnesota developed the concept of impedance cardiography as an alternative to thermodilution for assessing astronaut heart function in flight. NASA then contracted Space Labs, Inc. to construct miniature space units based on this technology. Several companies then launched their own impedance cardiography products, including Renaissance Technologies, which manufactures the IQ System. The IQ System is 5 to 17 times cheaper than thermodilution, and features a signal processing technology called TFD (Time Frequency Distribution). TFD provides a three-dimensional distribution of the blood circulation force signals, allowing visualization of changes in power, frequency and time.

  16. Efficient Data Mining for Local Binary Pattern in Texture Image Analysis

    PubMed Central

    Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.

    2015-01-01

    Local binary pattern (LBP) is a simple grayscale descriptor that characterizes the local distribution of gray levels in an image. Multi-resolution LBP and/or combinations of LBPs have been shown to be effective in texture image analysis. However, it is unclear which resolutions or combinations to choose for texture analysis. Examining all the possible cases is impractical and intractable due to the exponential growth of the feature space. This limits the accuracy and the time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively small number of discriminative features. The features can be any combinations of LBPs, which may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach across different experimental designs and texture images. PMID:25767332
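    A basic single-resolution LBP, the building block the mining operates over, can be computed as follows. This is the plain 3×3-neighborhood variant; the multi-resolution and combined descriptors in the paper extend it.

```python
import numpy as np

def lbp_8(image):
    """Basic 3x3 local binary pattern: each interior pixel gets an 8-bit
    code, one bit per neighbor, set when the neighbor's gray level is
    greater than or equal to the center's."""
    c = image[1:-1, 1:-1]
    # Eight neighbors, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = image[1 + dy:image.shape[0] - 1 + dy,
                         1 + dx:image.shape[1] - 1 + dx]
        code |= (neighbor >= c).astype(np.uint8) << bit
    return code

img = np.array([[0, 0, 0],
                [0, 5, 0],
                [0, 0, 9]], dtype=np.uint8)
print(lbp_8(img))  # [[16]] -- only the bottom-right neighbor is >= 5
```

    Texture features are then histograms of these codes; the combinatorial explosion the abstract mentions comes from choosing among many such histograms across radii and descriptor variants.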

  17. Baby de Sitter black holes and dS3/CFT2

    NASA Astrophysics Data System (ADS)

    de Buyl, Sophie; Detournay, Stéphane; Giribet, Gaston; Ng, Gim Seng

    2014-02-01

    Unlike three-dimensional Einstein gravity, three-dimensional massive gravity admits asymptotically de Sitter space (dS) black hole solutions. These black holes present interesting features and provide us with toy models to study the dS/CFT correspondence. A remarkable property of these black holes is that they are always in thermal equilibrium with the cosmological horizon of the space that hosts them. This invites us to study the thermodynamics of these solutions within the context of dS/CFT. We study the asymptotic symmetry group of the theory and find that it indeed coincides with the local two-dimensional conformal algebra. The charge algebra associated to the asymptotic Killing vectors consists of two copies of the Virasoro algebra with non-vanishing central extension. We compute the mass and angular momentum of the dS black holes and verify that a naive application of Cardy's formula exactly reproduces the entropy of both the black hole and the cosmological horizon. By adapting the holographic renormalization techniques to the case of dS space, we define the boundary stress tensor of the dual Euclidean conformal field theory.

  18. Nonlinear three-dimensional verification of the SPECYL and PIXIE3D magnetohydrodynamics codes for fusion plasmas

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Chacón, L.; Cappello, S.

    2010-08-01

    With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacón, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.

  19. Nonlinear three-dimensional verification of the SPECYL and PIXIE3D magnetohydrodynamics codes for fusion plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonfiglio, Daniele; Chacon, Luis; Cappello, Susanna

    2010-01-01

    With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacon, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.

  20. Direct Manipulation in Virtual Reality

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    2003-01-01

    Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.

  1. Mapping the Indonesian territory, based on pollution, social demography and geographical data, using self organizing feature map

    NASA Astrophysics Data System (ADS)

    Hernawati, Kuswari; Insani, Nur; Bambang S. H., M.; Nur Hadi, W.; Sahid

    2017-08-01

    This research aims to map the 33 (thirty-three) provinces of Indonesia, based on data on air, water and soil pollution as well as social demography and geography, into a clustered model. The method used in this study is an unsupervised method based on the Kohonen Self-Organizing Feature Map (SOFM). The model's design parameters are drawn from data related directly or indirectly to pollution: demographic and social data, pollution levels of air, water and soil, and the geographical situation of each province. The parameters used consist of 19 features/characteristics, including the human development index, the number of vehicles, the availability of plants for water absorption and flood prevention, and the geographic and demographic situation. The data used were secondary data from the Central Statistics Agency (BPS), Indonesia. The data are mapped by the SOFM from a high-dimensional vector space into a two-dimensional vector space according to closeness of location in terms of Euclidean distance. The resulting outputs are represented as clustered groupings. The thirty-three provinces are grouped into five clusters, where each cluster has different features/characteristics and levels of pollution. The result can be used to support efforts toward the prevention and resolution of pollution problems in each cluster in an effective and efficient way.
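    A minimal SOFM training loop in the spirit of the method might look like this. The grid size, learning-rate and neighborhood schedules, and the synthetic 19-feature data are all illustrative assumptions; only the feature count matches the paper.

```python
import numpy as np

def train_som(X, grid=(5, 5), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOFM sketch: maps high-dimensional rows of X onto
    a 2-D grid by Euclidean closeness, as in the paper's setup."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.normal(size=(rows * cols, X.shape[1]))         # codebook
    coords = np.indices(grid).reshape(2, -1).T             # grid positions
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                     # decaying rate
        sigma = sigma0 * np.exp(-t / epochs)               # shrinking radius
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1)) # best match
            h = np.exp(-np.sum((coords - coords[bmu])**2, axis=1)
                       / (2 * sigma**2))                   # neighborhood
            W += lr * h[:, None] * (x - W)
    return W, coords

def bmu_of(W, x):
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))

# Two well-separated groups of 19-feature vectors (synthetic stand-ins
# for the provinces' feature vectors).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.2, (10, 19)), rng.normal(3, 0.2, (10, 19))])
W, coords = train_som(X)
print(bmu_of(W, X[0]) != bmu_of(W, X[10]))  # expected True: groups land apart
```

    Clusters are then read off by grouping inputs that map to the same or adjacent grid units.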

  2. Using Gaussian windows to explore a multivariate data set

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1991-01-01

    In an earlier paper, I recounted an exploratory analysis, using Gaussian windows, of a data set derived from the Infrared Astronomical Satellite. Here, my goals are to develop strategies for finding structural features in a data set in a many-dimensional space, and to find ways to describe the shape of such a data set. After a brief review of Gaussian windows, I describe the current implementation of the method. I give some ways of describing features that we might find in the data, such as clusters and saddle points, and also extended structures such as a 'bar', which is an essentially one-dimensional concentration of data points. I then define a distance function, which I use to determine which data points are 'associated' with a feature. Data points not associated with any feature are called 'outliers'. I then explore the data set, giving the strategies that I used and quantitative descriptions of the features that I found, including clusters, bars, and a saddle point. I tried to use strategies and procedures that could, in principle, be used in any number of dimensions.

  3. An optical flow-based state-space model of the vocal folds.

    PubMed

    Granados, Alba; Brunskog, Jonas

    2017-06-01

    High-speed movies of the vocal fold vibration are valuable data to reveal vocal fold features for voice pathology diagnosis. This work presents a suitable Bayesian model and a purely theoretical discussion for further development of a framework for continuum biomechanical features estimation. A linear and Gaussian nonstationary state-space model is proposed and thoroughly discussed. The evolution model is based on a self-sustained three-dimensional finite element model of the vocal folds, and the observation model involves a dense optical flow algorithm. The results show that the method is able to capture different deformation patterns between the computed optical flow and the finite element deformation, controlled by the choice of the model tissue parameters.

  4. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks, and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each a measure of the maximum distance between a pair of feature point tracks.

  5. Evidence of tampering in watermark identification

    NASA Astrophysics Data System (ADS)

    McLauchlan, Lifford; Mehrübeoglu, Mehrübe

    2009-08-01

    In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher-order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.

  6. Odor Impression Prediction from Mass Spectra.

    PubMed

    Nozaki, Yuji; Nakamoto, Takamichi

    2016-01-01

    The sense of smell arises from the perception of odors from chemicals. However, the relationship between the impression of odor and the numerous physicochemical parameters has yet to be understood owing to its complexity. As such, there is no established general method for predicting the impression of odor of a chemical only from its physicochemical properties. In this study, we designed a novel predictive model, based on an artificial neural network with a deep structure, for predicting odor impression from the mass spectra of chemicals, and we conducted a series of computational analyses to evaluate its performance. Feature vectors, extracted from the original high-dimensional space by two autoencoders that serve as the input and output layers of the model, are used to build a mapping function from the feature space of mass spectra to the feature space of sensory data. The predictions obtained by the proposed method have notable accuracy (R≅0.76) in comparison with a conventional method (R≅0.61).

  7. Drug-target interaction prediction using ensemble learning and dimensionality reduction.

    PubMed

    Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2017-10-01

    Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
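    The EnsemDT variant can be sketched as follows, assuming PCA as the per-subspace reducer and illustrative hyperparameters. The paper evaluates several reducers and works on drug-target descriptor data rather than this synthetic stand-in.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def ensem_dt_predict(X_train, y_train, X_test, n_base=10, subspace=0.8,
                     n_components=10, seed=0):
    """Sketch of the EnsemDT idea: random feature subspacing for
    diversity, dimensionality reduction per subspace, Decision Tree
    base learners, and score averaging."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X_test))
    for i in range(n_base):
        cols = rng.choice(X_train.shape[1],
                          int(subspace * X_train.shape[1]), replace=False)
        pca = PCA(n_components=n_components, random_state=0)
        Z_tr = pca.fit_transform(X_train[:, cols])
        clf = DecisionTreeClassifier(random_state=i).fit(Z_tr, y_train)
        scores += clf.predict_proba(pca.transform(X_test[:, cols]))[:, 1]
    return scores / n_base   # averaged interaction scores in [0, 1]

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # synthetic "interaction" label
scores = ensem_dt_predict(X[:150], y[:150], X[150:])
print(scores.shape)  # (50,)
```

    Ranking test pairs by these averaged scores is what the AUC evaluation in the paper measures; EnsemKRR swaps the base learner for Kernel Ridge Regression.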

  8. Physics in space-time with scale-dependent metrics

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.

    2013-10-01

    We construct a three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. Local derivatives based on scale-dependent metrics are defined and differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 at scales ≪ ℓ0 to D_H = 4 in the infrared limit ≫ ℓ0, where ℓ0 is the characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the approach based on the scale-dependent metric to systems of different nature are briefly discussed.

  9. Resonance-Based Time-Frequency Manifold for Feature Extraction of Ship-Radiated Noise.

    PubMed

    Yan, Jiaquan; Sun, Haixin; Chen, Hailan; Junejo, Naveed Ur Rehman; Cheng, En

    2018-03-22

    In this paper, a novel time-frequency signature using resonance-based sparse signal decomposition (RSSD), phase space reconstruction (PSR), time-frequency distribution (TFD) and manifold learning is proposed for feature extraction of ship-radiated noise, which is called resonance-based time-frequency manifold (RTFM). This is suitable for analyzing signals with oscillatory, non-stationary and non-linear characteristics in a situation of serious noise pollution. Unlike the traditional methods which are sensitive to noise and just consider one side of oscillatory, non-stationary and non-linear characteristics, the proposed RTFM can provide the intact feature signature of all these characteristics in the form of a time-frequency signature by the following steps: first, RSSD is employed on the raw signal to extract the high-oscillatory component and abandon the low-oscillatory component. Second, PSR is performed on the high-oscillatory component to map the one-dimensional signal to the high-dimensional phase space. Third, TFD is employed to reveal non-stationary information in the phase space. Finally, manifold learning is applied to the TFDs to fetch the intrinsic non-linear manifold. A proportional addition of the top two RTFMs is adopted to produce the improved RTFM signature. All of the case studies are validated on real audio recordings of ship-radiated noise. Case studies of ship-radiated noise on different datasets and various degrees of noise pollution manifest the effectiveness and robustness of the proposed method.
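    The PSR step is typically realized as time-delay embedding; a minimal sketch follows. The embedding dimension and delay are illustrative choices, and the sinusoid merely stands in for a recorded signal component.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase space reconstruction by time-delay embedding (Takens-style):
    maps a 1-D signal to dim-dimensional delay vectors
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 50)       # toy stand-in for a ship signal
Y = delay_embed(signal, dim=3, tau=12)
print(Y.shape)  # (976, 3)
```

    In the paper's pipeline this embedding is applied to the high-oscillatory component from RSSD, and TFDs are then computed over the reconstructed phase space.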

  10. Resonance-Based Time-Frequency Manifold for Feature Extraction of Ship-Radiated Noise

    PubMed Central

    Yan, Jiaquan; Sun, Haixin; Chen, Hailan; Junejo, Naveed Ur Rehman; Cheng, En

    2018-01-01

    In this paper, a novel time-frequency signature using resonance-based sparse signal decomposition (RSSD), phase space reconstruction (PSR), time-frequency distribution (TFD) and manifold learning is proposed for feature extraction of ship-radiated noise, which is called resonance-based time-frequency manifold (RTFM). This is suitable for analyzing signals with oscillatory, non-stationary and non-linear characteristics in a situation of serious noise pollution. Unlike the traditional methods which are sensitive to noise and just consider one side of oscillatory, non-stationary and non-linear characteristics, the proposed RTFM can provide the intact feature signature of all these characteristics in the form of a time-frequency signature by the following steps: first, RSSD is employed on the raw signal to extract the high-oscillatory component and abandon the low-oscillatory component. Second, PSR is performed on the high-oscillatory component to map the one-dimensional signal to the high-dimensional phase space. Third, TFD is employed to reveal non-stationary information in the phase space. Finally, manifold learning is applied to the TFDs to fetch the intrinsic non-linear manifold. A proportional addition of the top two RTFMs is adopted to produce the improved RTFM signature. All of the case studies are validated on real audio recordings of ship-radiated noise. Case studies of ship-radiated noise on different datasets and various degrees of noise pollution manifest the effectiveness and robustness of the proposed method. PMID:29565288

  11. DD-HDS: A method for visualization and exploration of high-dimensional data.

    PubMed

    Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard

    2007-09-01

    Mapping high-dimensional data into a low-dimensional space, for example for visualization, is a problem of major and growing concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method in the line of the multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves on existing competitors with respect to the representation of high-dimensional data in two ways. It introduces (1) a specific weighting of distances between data points, taking into account the concentration-of-measure phenomenon, and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). Mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated into most distance-preservation-based nonlinear dimensionality reduction methods.

  12. A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.

    PubMed

    Xue, Xiaoming; Zhou, Jianzhong

    2017-01-01

    To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps, i.e., preliminary fault detection, fault type recognition and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy theory. If a fault exists, two subsequent processes based on the artificial intelligence approach are performed to further recognize the fault type and then identify the fault degree. For these two steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault peculiarity under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method is employed to obtain the multi-scale features. Furthermore, due to information redundancy and the submergence of useful information in the original feature space, a novel manifold learning method (modified LGPCA) is introduced to obtain low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each were employed to evaluate the performance of the proposed method, with vibration signals measured from an experimental rolling element bearing test bench. The analysis results showed the effectiveness and superiority of the proposed method, whose diagnostic scheme is more suitable for practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
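    The permutation entropy statistic used for the preliminary detection step can be sketched as follows. The order and delay are assumed values, and the two toy signals merely illustrate that a more irregular signal scores higher.

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: the Shannon entropy of ordinal
    patterns of length `order` in the signal, scaled to [0, 1]."""
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i: i + order * delay: delay]
        patterns[tuple(np.argsort(window))] += 1   # count ordinal pattern
    probs = np.array([c for c in patterns.values() if c > 0]) / n
    return -np.sum(probs * np.log(probs)) / log(factorial(order))

rng = np.random.default_rng(5)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # smooth, healthy-like
noisy = rng.normal(size=1000)                        # irregular/random
print(permutation_entropy(regular) < permutation_entropy(noisy))  # True
```

    A threshold on this statistic yields the health/fault judgment; only signals flagged as faulty proceed to the type- and degree-identification stages.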

  13. Discriminative clustering on manifold for adaptive transductive classification.

    PubMed

    Zhang, Zhao; Jia, Lei; Zhang, Min; Li, Bing; Zhang, Li; Li, Fanzhang

    2017-10-01

    In this paper, we mainly propose a novel adaptive transductive label propagation approach by joint discriminative clustering on manifolds for representing and classifying high-dimensional data. Our framework seamlessly combines the unsupervised manifold learning, discriminative clustering and adaptive classification into a unified model. Also, our method incorporates the adaptive graph weight construction with label propagation. Specifically, our method is capable of propagating label information using adaptive weights over low-dimensional manifold features, which is different from most existing studies that usually predict the labels and construct the weights in the original Euclidean space. For transductive classification by our formulation, we first perform the joint discriminative K-means clustering and manifold learning to capture the low-dimensional nonlinear manifolds. Then, we construct the adaptive weights over the learnt manifold features, where the adaptive weights are calculated through performing the joint minimization of the reconstruction errors over features and soft labels so that the graph weights can be joint-optimal for data representation and classification. Using the adaptive weights, we can easily estimate the unknown labels of samples. After that, our method returns the updated weights for further updating the manifold features. Extensive simulations on image classification and segmentation show that our proposed algorithm can deliver the state-of-the-art performance on several public datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals

    DOE PAGES

    Schmittfull, Marcel; Vlah, Zvonimir

    2016-11-28

    Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
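    The core trick, switching spaces so that convolutions become products, can be illustrated in one dimension (a toy sketch with exponential stand-in kernels, not the authors' cosmological integrands):

```python
import numpy as np

# Convolution theorem in miniature: a convolution in one space is a pointwise
# product in the conjugate space, so expensive convolution integrals collapse
# to cheap FFTs over a single (radial) coordinate.
n = 256
x = np.linspace(0.0, 10.0, n)
f = np.exp(-x)            # stand-ins for radial kernels
g = np.exp(-2.0 * x)
direct = np.convolve(f, g)[:n]                       # O(n^2) direct convolution
via_fft = np.fft.irfft(np.fft.rfft(f, 2 * n) * np.fft.rfft(g, 2 * n), 2 * n)[:n]
print(np.allclose(direct, via_fft))  # True
```

    In the full calculation the same identity, applied after integrating over orientations, is what lets each loop correction be evaluated with one-dimensional FFTs over the radial coordinate.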

  15. Fast multi-dimensional NMR by minimal sampling

    NASA Astrophysics Data System (ADS)

    Kupče, Ēriks; Freeman, Ray

    2008-03-01

    A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.

  16. Direct k-space imaging of Mahan cones at clean and Bi-covered Cu(111) surfaces

    NASA Astrophysics Data System (ADS)

    Winkelmann, Aimo; Akin Ünal, A.; Tusche, Christian; Ellguth, Martin; Chiang, Cheng-Tien; Kirschner, Jürgen

    2012-08-01

    Using a specifically tailored experimental approach, we revisit the exemplary effect of photoemission from quasi-free electronic states in crystals. Applying a momentum microscope, we measure photoelectron momentum patterns emitted into the complete half-space above the sample after excitation from a linearly polarized laser light source. By the application of a fully three-dimensional (3D) geometrical model of direct optical transitions, we explain the characteristic intensity distributions that are formed by the photoelectrons in k-space under the combination of energy conservation and crystal momentum conservation in the 3D bulk as well as at the two-dimensional (2D) surface. For bismuth surface alloys on Cu(111), the energy-resolved photoelectron momentum patterns allow us to identify specific emission processes in which bulk excited electrons are subsequently diffracted by an atomic 2D surface grating. The polarization dependence of the observed intensity features in momentum space is explained based on the different relative orientations of characteristic reciprocal space directions with respect to the electric field vector of the incident light.

  17. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition.

    PubMed

    Caggiano, Alessandra

    2018-03-09

    Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim to monitor the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA identified a smaller number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space of reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.
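    The projection onto the k = 2 principal component scores can be sketched as follows (a generic PCA-via-SVD sketch; the feature matrix here is random stand-in data, not the study's sensor features):

```python
import numpy as np

def pca_scores(X, k=2):
    """Project d-dimensional feature vectors onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # principal component scores

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                    # 50 samples, d = 10 features
scores = pca_scores(X, k=2)
print(scores.shape)  # (50, 2)
```

    The two score columns are ordered by explained variance, so the first captures the dominant direction of variation in the sensorial feature set.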

  18. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition

    PubMed Central

    2018-01-01

    Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim to monitor the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA identified a smaller number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space of reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values. PMID:29522443

  19. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
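    The performance measure Az used throughout this study, the area under the ROC curve, can be computed directly from classifier scores via the Mann-Whitney statistic (a standard identity, not code from the study):

```python
import numpy as np

def auc_az(scores_pos, scores_neg):
    """Az: area under the ROC curve via the Mann-Whitney statistic.
    Counts score pairs where the positive class outranks the negative class."""
    p = np.asarray(scores_pos, dtype=float)[:, None]
    n = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (p > n).sum() + 0.5 * (p == n).sum()   # ties count half
    return wins / (p.size * n.size)

print(auc_az([2.0, 3.0], [0.0, 1.0]))  # 1.0  (perfectly separable)
print(auc_az([0.5], [0.5]))            # 0.5  (chance level)
```

    Applying this to resubstitution scores versus hold-out scores exposes exactly the optimistic bias versus pessimistic bias that the simulation study quantifies as a function of training sample size.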

  20. Feature space analysis of MRI

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.; Peck, Donald J.

    1997-04-01

    This paper presents development and performance evaluation of an MRI feature space method. The method is useful for: identification of tissue types; segmentation of tissues; and quantitative measurements on tissues, to obtain information that can be used in decision making (diagnosis, treatment planning, and evaluation of treatment). The steps of the work accomplished are as follows: (1) Four T2-weighted and two T1-weighted images (before and after injection of Gadolinium) were acquired for ten tumor patients. (2) Images were analyzed by two image analysts according to the following algorithm. The intracranial brain tissues were segmented from the scalp and background. The additive noise was suppressed using a multi-dimensional non-linear edge-preserving filter which preserves partial volume information on average. Image nonuniformities were corrected using a modified lowpass filtering approach. The resulting images were used to generate and visualize an optimal feature space. Cluster centers were identified on the feature space. Then images were segmented into normal tissues and different zones of the tumor. (3) Biopsy samples were extracted from each patient and were subsequently analyzed by the pathology laboratory. (4) Image analysis results were compared to each other and to the biopsy results. Pre- and post-surgery feature spaces were also compared. The proposed algorithm made it possible to visualize the MRI feature space and to segment the image. In all cases, the operators were able to find clusters for normal and abnormal tissues. Also, clusters for different zones of the tumor were found. Based on the clusters marked for each zone, the method successfully segmented the image into normal tissues (white matter, gray matter, and CSF) and different zones of the lesion (tumor, cyst, edema, radiation necrosis, necrotic core, and infiltrated tumor). The results agreed with those obtained from the biopsy samples. 
Comparison of pre- to post-surgery and radiation feature spaces confirmed that the tumor was not present in the second study but radiation necrosis was generated as a result of radiation.

  1. Unconstrained handwritten numeral recognition based on radial basis competitive and cooperative networks with spatio-temporal feature representation.

    PubMed

    Lee, S; Pan, J J

    1996-01-01

    This paper presents a new approach to representation and recognition of handwritten numerals. The approach first transforms a two-dimensional (2-D) spatial representation of a numeral into a three-dimensional (3-D) spatio-temporal representation by identifying the tracing sequence based on a set of heuristic rules acting as transformation operators. A multiresolution critical-point segmentation method is then proposed to extract local feature points, at varying degrees of scale and coarseness. A new neural network architecture, referred to as radial-basis competitive and cooperative network (RCCN), is presented especially for handwritten numeral recognition. RCCN is a globally competitive and locally cooperative network with the capability of self-organizing hidden units to progressively achieve desired network performance, and functions as a universal approximator of arbitrary input-output mappings. Three types of RCCNs are explored: input-space RCCN (IRCCN), output-space RCCN (ORCCN), and bidirectional RCCN (BRCCN). Experiments on handwritten zip code numerals acquired by the U.S. Postal Service indicated that the proposed method is robust to variations, deformations, transformations, and corruption, achieving about a 97% recognition rate.

  2. Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis.

    PubMed

    Taherisadr, Mojtaba; Dehzangi, Omid; Parsaei, Hossein

    2017-12-13

    As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is especially an issue during continuous EEG recording sessions, so identifying such artifacts among the useful EEG components is a key step in using EEG signals for either physiological monitoring and diagnosis or brain-computer interfaces. In this study, we aim to design a new generic framework to process and characterize an EEG recording as a multi-component and non-stationary signal, with the aim of localizing and identifying its components (e.g., artifacts). In the proposed method, we bring three complementary algorithms together to enhance the efficiency of the system: time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. Then, a combination of spectro-temporal and geometric features is extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We fit a curvelet transform (as a MRA method) to the 2D TF representation of EEG segments to decompose the given space into various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion to TF space, analysis using MRA, and extracting a set of suitable features and applying a proper predictive model is effective in enhancing the EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely the one-dimensional (1D) wavelet transform, and our experimental results reveal that the proposed method outperforms the 1D wavelet approach.
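    The first stage of such a pipeline, expanding a 1D recording into a 2D time-frequency space, can be sketched with a plain short-time Fourier transform (a simplified stand-in for the TF representation used here; the window and hop sizes and the 10 Hz test tone are illustrative):

```python
import numpy as np

def stft_magnitude(x, win=128, hop=64):
    """Magnitude time-frequency representation from a Hann-windowed STFT."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T    # shape: (freq bins, frames)

fs = 256                                 # a typical EEG sampling rate
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10.0 * t)         # a 10 Hz tone standing in for an EEG rhythm
S = stft_magnitude(x)
peak_hz = S.mean(axis=1).argmax() * fs / 128
print(peak_hz)  # 10.0
```

    The resulting 2D magnitude image is the kind of TF space to which a multi-resolution transform such as the curvelet can then be applied.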

  3. A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions

    DOE PAGES

    Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan; ...

    2017-04-24

    Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. A linear discriminant analysis classifier was trained using DASP-generated features and was blind tested, resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Finally, we show that a rank-reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.
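    The classifier used here, linear discriminant analysis, reduces in the two-class case to finding the Fisher direction w proportional to Sw^-1 (m1 - m0). A minimal sketch on synthetic data (generic LDA, not the study's URE features):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher discriminant direction w ~ Sw^-1 (m1 - m0) for two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    return np.linalg.solve(Sw, m1 - m0)

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(200, 3))   # class 0 samples
X1 = rng.normal(2.0, 1.0, size=(200, 3))   # class 1 samples, shifted mean
w = fisher_lda_direction(X0, X1)
# projections of class-1 samples sit above those of class 0 on average
print(X1.mean(axis=0) @ w > X0.mean(axis=0) @ w)  # True
```

    With only three rank-reduced features, as reported above, such a linear decision rule is both fast to train and easy to inspect.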

  4. A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan

    Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. A linear discriminant analysis classifier was trained using DASP-generated features and was blind tested, resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Finally, we show that a rank-reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.

  5. Quantitative analysis of fetal facial morphology using 3D ultrasound and statistical shape modeling: a feasibility study.

    PubMed

    Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C

    2017-07-01

    The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. STUDY DESIGN: Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestation of 29+4 weeks (25+0 to 36+1). The 20 3-dimensional surface meshes generated were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median and interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group) (P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
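    The group comparison above relies on the Mahalanobis distance in PCA shape space, i.e., Euclidean distance after whitening by the covariance of the shape modes. A minimal sketch (the generic formula, not the study's shape model):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of vector x from a distribution with (mean, cov)."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# With an identity covariance it reduces to the Euclidean distance.
print(mahalanobis([3.0, 4.0], [0.0, 0.0], np.eye(2)))  # 5.0
```

    Measuring each fetus's distance from the mean shape in mode coordinates is what allows normal and abnormal faces to be compared on a single scale.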

  6. Automatic segmentation of brain MRI in high-dimensional local and non-local feature space based on sparse representation.

    PubMed

    Khalilzadeh, Mohammad Mahdi; Fatemizadeh, Emad; Behnam, Hamid

    2013-06-01

    Automatic extraction of the varying regions of magnetic resonance images is required as a prior step in a diagnostic intelligent system. The sparsest representation and high-dimensional features are obtained from a learned dictionary. Classification is performed by computing the reconstruction error of each pixel both locally and non-locally. The results acquired on real and simulated images are superior to those of the best available MRI segmentation methods with regard to stability. In addition, exact segmentation is achieved through a formula combining the distance and sparsity factors, and the procedure runs automatically by using the sparsity factor within unsupervised clustering methods, whose results are thereby improved. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Inhomogeneous kinetic effects related to intermittent magnetic discontinuities

    NASA Astrophysics Data System (ADS)

    Greco, A.; Valentini, F.; Servidio, S.; Matthaeus, W. H.

    2012-12-01

    A connection between kinetic processes and two-dimensional intermittent plasma turbulence is observed using direct numerical simulations of a hybrid Vlasov-Maxwell model, in which the Vlasov equation is solved for protons, while the electrons are described as a massless fluid. During the development of turbulence, the proton distribution functions depart from the typical configuration of local thermodynamic equilibrium, displaying statistically significant non-Maxwellian features. In particular, temperature anisotropy and distortions are concentrated near coherent structures, generated as the result of the turbulent cascade, such as current sheets, which are nonuniformly distributed in space. Here, the partial variance of increments (PVI) method has been employed to identify high magnetic stress regions within a two-dimensional turbulent pattern. A quantitative association between non-Maxwellian features and coherent structures is established.
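    The PVI statistic used above is simply the magnitude of a field increment normalized by its rms value, so that large PVI flags coherent structures such as current sheets. A sketch on a synthetic field series (illustrative stand-in data, not the simulation output):

```python
import numpy as np

def pvi(b, lag=1):
    """Partial variance of increments: |db| normalized by its rms value."""
    db = np.linalg.norm(b[lag:] - b[:-lag], axis=-1)   # increment magnitudes
    return db / np.sqrt(np.mean(db ** 2))

rng = np.random.default_rng(2)
b = rng.normal(size=(1000, 3))          # synthetic 3-component field series
series = pvi(b)
# by construction the normalized increments have unit mean square
print(round(float(np.mean(series ** 2)), 6))  # 1.0
```

    Thresholding the PVI series (e.g., PVI above a few) then selects the high-magnetic-stress regions where the non-Maxwellian kinetic features concentrate.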

  8. Implementing quantum Ricci curvature

    NASA Astrophysics Data System (ADS)

    Klitgaard, N.; Loll, R.

    2018-05-01

    Quantum Ricci curvature has been introduced recently as a new, geometric observable characterizing the curvature properties of metric spaces, without the need for a smooth structure. Besides coordinate invariance, its key features are scalability, computability, and robustness. We demonstrate that these properties continue to hold in the context of nonperturbative quantum gravity, by evaluating the quantum Ricci curvature numerically in two-dimensional Euclidean quantum gravity, defined in terms of dynamical triangulations. Despite the well-known, highly nonclassical properties of the underlying quantum geometry, its Ricci curvature can be matched well to that of a five-dimensional round sphere.

  9. Recent developments of advanced structures for space optics at Astrium, Germany

    NASA Astrophysics Data System (ADS)

    Stute, Thomas; Wulz, Georg; Scheulen, Dietmar

    2003-12-01

    The mechanical division of EADS Astrium GmbH, Friedrichshafen Germany, the former Dornier Satellitensystem GmbH is currently engaged with the development, manufacturing and testing of three different advanced dimensionally stable composite and ceramic material structures for satellite borne optics: -CFRP Camera Structure -Planck Telescope Reflectors -NIRSpec Optical Bench Breadboard for James Web Space Telescope The paper gives an overview over the requirements and the main structural features how these requirements are met. Special production aspects and available test results are reported.

  10. AUTOGEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2003-05-29

    AUTOGEN computes collision-free sequences of robot motion instructions to permit traversal of three-dimensional space curves. Order and direction of curve traversal and orientation of end effector are constrained by a set of manufacturing rules. Input can be provided as a collection of solid models or in terms of wireframe objects and structural cross-section definitions. Entity juxtaposition can be inferred, with appropriate structural features automatically provided. Process control is asserted as a function of position and orientation along each space curve, and is currently implemented for welding processes.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grillot, L.R.; Anderton, P.W.; Haselton, T.M.

    The Espoir oil field, located approximately 13 km offshore Ivory Coast, was discovered in 1980 by a joint venture comprised of Phillips Petroleum Company, AGIP, SEDCO Energy, and PETROCI. Following the discovery, a three-dimensional seismic survey was recorded by GSI in 1981-1982 to provide detailed seismic coverage of Espoir field and adjacent features. The seismic program consisted of 7700 line-km of data acquired in a single survey area that is located on the edge of the continental shelf and extends into deep water. In comparison with previous two-dimensional seismic surveys, the three-dimensional data provided several improvements in interpretation and mapping including: (1) sharper definition of structural features, (2) reliable correlations of horizons and fault traces between closely spaced tracks, (3) detailed time contour maps from time-slice sections, and (4) improved velocity model for depth conversion. The improved mapping helped us identify additional well locations; the results of these wells compared favorably with the interpretation made prior to drilling.

  12. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images

    PubMed Central

    Sparks, Rachel; Madabhushi, Anant

    2016-01-01

    Content-based image retrieval (CBIR) retrieves database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination between levels of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision recall curve (AUPRC) of 0.53 ± 0.03, compared with an AUPRC of 0.44 ± 0.01 for CBIR with Principal Component Analysis (PCA) used to learn the low dimensional space. PMID:27264985
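    The out-of-sample idea, embedding a query without redoing the EVD, can be illustrated with a Nystrom-style extension of a kernel eigenmap (a generic sketch; OSE-SSL itself additionally folds the semi-supervised label information into the kernel):

```python
import numpy as np

def out_of_sample_embed(K_train, K_new, k=2):
    """Nystrom-style out-of-sample extension: embed new samples in the learnt
    eigenspace without repeating the eigendecomposition per query."""
    vals, vecs = np.linalg.eigh(K_train)
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]   # top-k eigenpairs
    return (K_new @ vecs) / vals                        # new embedding coordinates

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8))
K = A @ A.T                               # a valid (positive semidefinite) kernel
coords = out_of_sample_embed(K, K, k=2)   # "new" points = the training points
# training points map back onto their own eigenvector coordinates
print(np.allclose(coords, np.linalg.eigh(K)[1][:, ::-1][:, :2]))  # True
```

    The EVD is computed once on the training kernel; each query then costs only a kernel evaluation against the training set and a matrix-vector product.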

  13. An unsupervised classification approach for analysis of Landsat data to monitor land reclamation in Belmont county, Ohio

    NASA Technical Reports Server (NTRS)

    Brumfield, J. O.; Bloemer, H. H. L.; Campbell, W. J.

    1981-01-01

    Two unsupervised classification procedures for analyzing Landsat data used to monitor land reclamation in a surface mining area in east central Ohio are compared for agreement with data collected from the corresponding locations on the ground. One procedure is based on a traditional unsupervised-clustering/maximum-likelihood algorithm sequence that assumes spectral groupings in the Landsat data in n-dimensional space; the other is based on a nontraditional unsupervised-clustering/canonical-transformation/clustering algorithm sequence that not only assumes spectral groupings in n-dimensional space but also includes an additional feature-extraction technique. It is found that the nontraditional procedure provides an appreciable improvement in spectral groupings and apparently increases the level of accuracy in the classification of land cover categories.

  14. Application of Generative Topographic Mapping to Gear Failures Monitoring

    NASA Astrophysics Data System (ADS)

    Liao, Guanglan; Li, Weihua; Shi, Tielin; Rao, Raj B. K. N.

    2002-07-01

    The Generative Topographic Mapping (GTM) model is introduced as a probabilistic reformulation of the self-organizing map and has already been used in a variety of applications. This paper presents a study of the GTM in industrial gear failure monitoring. Vibration signals are analyzed using the GTM model, and the results show that gear feature data sets can be projected into a two-dimensional space and clustered in different areas according to their conditions, clearly distinguishing gear conditions with a cracked or broken tooth from the normal condition. By tracing the image points in the two-dimensional space, the variation of gear work conditions can be observed visually; therefore, the occurrence and trend of gear failures can be monitored in time.

  15. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.

    PubMed

    Sumner, Jeremy G

    2017-03-01

    We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

  16. Tungsten fragmentation in nuclear reactions induced by high-energy cosmic-ray protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chechenin, N. G., E-mail: chechenin@sinp.msu.ru; Chuvilskaya, T. V.; Shirokova, A. A.

    2015-01-15

    Tungsten fragmentation arising in nuclear reactions induced by cosmic-ray protons in space-vehicle electronics is considered. In modern integrated-circuit technologies featuring a three-dimensional layered architecture, tungsten is frequently used as a material for interlayer conducting connections. Within the preequilibrium model, tungsten-fragmentation features, including the cross sections for the elastic and inelastic scattering of protons of energy between 30 and 240 MeV; the yields of isotopes and isobars; their energy, charge, and mass distributions; and recoil energy spectra, are calculated on the basis of the TALYS and EMPIRE-II-19 codes. It is shown that tungsten fragmentation substantially affects forecasts of failures of space-vehicle electronics.

  17. Margin-maximizing feature elimination methods for linear and nonlinear kernel-based discriminant functions.

    PubMed

    Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X

    2010-05-01

    Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, which is central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate that this is not true for the Gaussian kernel and that, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains by optimizing only two degrees of freedom, the hyperplane's intercept and its squared 2-norm, with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
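To make the RFE baseline concrete, here is a hedged sketch of the elimination loop together with the geometric margin 1/||w|| that margin-based methods target. It uses synthetic data and scikit-learn's linear SVC; it illustrates the quantities discussed, not the authors' MFE code.

```python
# Sketch: RFE on a linear SVM, tracking the geometric margin 1/||w||
# after each elimination step. Data sizes are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=20,
                           n_informative=5, random_state=0)
features = list(range(X.shape[1]))
margins = []
while len(features) > 5:
    clf = SVC(kernel="linear", C=1.0).fit(X[:, features], y)
    w = clf.coef_.ravel()
    margins.append(1.0 / np.linalg.norm(w))      # geometric margin
    # RFE rule: drop the feature with the smallest squared weight
    features.pop(int(np.argmin(w ** 2)))
```

Plotting `margins` over the elimination steps is one way to observe the paper's point that the RFE ranking criterion does not, in general, maximize the margin.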

  18. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. Besides, it benefits computational efficiency, since similarity can be efficiently measured by the Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new search strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
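The efficiency claim rests on a simple fact: for binary codes, similarity reduces to a Hamming distance, computable with an XOR and a popcount. A minimal illustration (the 8-bit codes below are made up; real binary SIFT codes are much longer):

```python
# Two hypothetical 8-bit binary feature codes
a = 0b10110010
b = 0b10010110
# Hamming distance = number of differing bits = popcount of XOR
hamming = bin(a ^ b).count("1")   # -> 2
```

On modern CPUs this XOR-popcount pattern is a single pair of instructions per machine word, which is why Hamming-space search scales to very large collections.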

  19. Functional feature embedded space mapping of fMRI data.

    PubMed

    Hu, Jin; Tian, Jie; Yang, Lei

    2006-01-01

    We have proposed a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. A nonlinear dimension reduction technique, Isomap, is applied for the first time to the high-dimensional features obtained from the frequency domain of the fMRI data. Finally, the presence of activated time series is identified by a clustering method in which the information-theoretic criterion of minimum description length (MDL) is used to estimate the number of clusters. The feasibility of our algorithm is demonstrated by experiments on real human data. Although we focus on analyzing periodic fMRI data, the approach can be extended to analyze non-periodic (event-related) fMRI data by replacing the Fourier analysis with a wavelet analysis.
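The pipeline described above (per-voxel time series, Fourier-domain features, Isomap embedding) can be sketched with synthetic data. Everything below is an assumed toy setup, not the authors' data or parameters; `n_neighbors` is chosen large enough to keep the neighborhood graph connected across the two well-separated groups.

```python
# Sketch: FFT features of periodic vs. noise-only time series -> Isomap.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.RandomState(0)
t = np.arange(128)
# 50 "active" voxels with a periodic response + noise, 50 noise-only voxels
active = np.sin(2 * np.pi * t / 16) + 0.3 * rng.randn(50, 128)
inactive = 0.3 * rng.randn(50, 128)
series = np.vstack([active, inactive])

feats = np.abs(np.fft.rfft(series, axis=1))   # frequency-domain features
emb = Isomap(n_neighbors=60, n_components=2).fit_transform(feats)
```

In the 2-D embedding `emb`, the periodic voxels separate from the noise-only voxels, which is the structure a subsequent clustering step (MDL-guided in the paper) would pick up.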

  20. Particle systems for adaptive, isotropic meshing of CAD models

    PubMed Central

    Levine, Joshua A.; Whitaker, Ross T.

    2012-01-01

    We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181

  1. Clustering of Multi-Temporal Fully Polarimetric L-Band SAR Data for Agricultural Land Cover Mapping

    NASA Astrophysics Data System (ADS)

    Tamiminia, H.; Homayouni, S.; Safari, A.

    2015-12-01

    Recently, the unique capabilities of Polarimetric Synthetic Aperture Radar (PolSAR) sensors have made them an important and efficient tool for natural resources and environmental applications, such as land cover and crop classification. The aim of this paper is to classify multi-temporal full polarimetric SAR data using a kernel-based fuzzy C-means clustering method over an agricultural region. This method starts by transforming the input data into a higher-dimensional space using kernel functions and then clusters them in the feature space. The feature space, due to its inherent properties, has the ability to take into account the nonlinear and complex nature of polarimetric data. Several SAR polarimetric features were extracted using target decomposition algorithms. Features from the Cloude-Pottier, Freeman-Durden and Yamaguchi algorithms were used as inputs for the clustering. This method was applied to multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Canada, in June and July 2012. The results demonstrate the efficiency of this approach with respect to classical methods. In addition, using multi-temporal data in the clustering process helped to investigate the phenological cycle of plants and significantly improved the performance of agricultural land cover mapping.
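The kernel fuzzy C-means step can be sketched as follows. This is a generic toy implementation (RBF kernel, cluster centers kept in input space, so the kernel-space distance is 2(1 - K) for a normalized kernel), not the authors' code, and the 2-D synthetic points stand in for polarimetric feature vectors.

```python
# Toy kernel fuzzy C-means sketch; data, sigma and m are assumptions.
import numpy as np

def kernel_fcm(X, V, m=2.0, sigma=3.0, iters=50):
    """X: (n, d) data; V: (c, d) initial centers. Returns hard labels."""
    for _ in range(iters):
        # RBF kernel between points and centers; since K(x, x) = 1,
        # the kernel-space squared distance is d2 = 2 * (1 - K)
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
                   / (2 * sigma ** 2))                     # (n, c)
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)
        U = (1.0 / d2) ** (1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                  # fuzzy memberships
        W = (U ** m) * K                                   # update weights
        V = (W.T @ X) / W.sum(axis=0)[:, None]             # new centers
    return U.argmax(axis=1)

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(30, 2), rng.randn(30, 2) + 6.0])  # two groups
labels = kernel_fcm(X, V=X[[0, -1]].astype(float))
```

The fuzzy memberships `U` let each pixel belong partially to several land-cover clusters; `argmax` hardens them for a final map.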

  2. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.

  3. Comparison of image features calculated in different dimensions for computer-aided diagnosis of lung nodules

    NASA Astrophysics Data System (ADS)

    Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.

    2009-02-01

    Features calculated from different dimensions of images capture quantitative information of the lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions and of the impact of combining different dimensions. The aim of this study is to determine the importance of combining features calculated in different dimensions. We have performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five different combinations of features. Our results showed that 3D image features generate the best result compared with other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.

  4. Swarms: Optimum aggregations of spacecraft

    NASA Technical Reports Server (NTRS)

    Mayer, H. L.

    1980-01-01

    Swarms are aggregations of spacecraft or elements of a space system which are cooperative in function, but physically isolated or only loosely connected. For some missions the swarm configuration may be optimum compared to a group of completely independent spacecraft or a complex rigidly integrated spacecraft or space platform. General features of swarms are induced by considering an ensemble of 26 swarms, examples ranging from Earth centered swarms for commercial application to swarms for exploring minor planets. A concept for a low altitude swarm as a substitute for a space platform is proposed and a preliminary design studied. The salient design feature is the web of tethers holding the 30 km swarm in a rigid two dimensional array in the orbital plane. A mathematical discussion and tutorial in tether technology and in some aspects of the distribution of services (mass, energy, and information to swarm elements) are included.

  5. Wigner's quantum phase-space current in weakly-anharmonic weakly-excited two-state systems

    NASA Astrophysics Data System (ADS)

    Kakofengitis, Dimitris; Steuernagel, Ole

    2017-09-01

    There are no phase-space trajectories for anharmonic quantum systems, but Wigner's phase-space representation of quantum mechanics features the Wigner current J. This current reveals fine details of quantum dynamics, finer than is ordinarily thought accessible according to quantum folklore invoking Heisenberg's uncertainty principle. Here, we focus on the simplest, most intuitive, and analytically accessible aspects of J. We investigate features of J for bound states of time-reversible, weakly-anharmonic one-dimensional quantum-mechanical systems which are weakly excited. We establish that weakly-anharmonic potentials can be grouped into three distinct classes: hard, soft, and odd potentials. We stress the connections among these classes and with the harmonic case. We show that their Wigner current fieldline patterns can be characterised by J's discrete stagnation points, how these arise, and how a quantum system's dynamics is constrained by the stagnation points' topological charge conservation. We additionally show that quantum dynamics in phase space, in the case of vanishing Planck constant ℏ or vanishing anharmonicity, does not pointwise converge to classical dynamics.
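For a one-dimensional system with Hamiltonian H = p²/2m + V(x), the Wigner current J = (J_x, J_p) obeys a continuity equation with the Wigner function W. The standard form, reproduced here from the general literature on Wigner flow rather than verbatim from this abstract, is:

```latex
\begin{aligned}
\partial_t W + \partial_x J_x + \partial_p J_p &= 0, \\
J_x &= \frac{p}{m}\, W(x,p,t), \\
J_p &= -\sum_{l=0}^{\infty} \frac{(i\hbar/2)^{2l}}{(2l+1)!}\,
        \bigl[\partial_x^{2l+1} V(x)\bigr]\, \partial_p^{2l} W(x,p,t).
\end{aligned}
\]
```

Truncating the sum at l = 0 recovers the classical Liouville current; the l ≥ 1 terms (present only for anharmonic V) carry the quantum corrections responsible for the stagnation-point structure discussed above.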

  6. Distinctive Features of Spatial Perspective-Taking in the Elderly

    ERIC Educational Resources Information Center

    Watanabe, Masayuki

    2011-01-01

    This study aimed to ascertain the characteristics of spatial perspective-taking ability--assumed to be a form of imaginary body movement in three-dimensional space--in the elderly. A new task was devised to evaluate the development of this function: 20 children, 20 university students, and 20 elderly people (each group comprising 10 men and 10…

  7. Palm vein recognition based on directional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to large scales. A DEMD-based two-directional linear discriminant analysis (2LDA) method for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations is extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  8. High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.

    PubMed

    Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung

    2018-05-04

    Kohonen's Self Organizing feature Map (SOM) provides an effective way to project high-dimensional input features onto a low-dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights concerning these learning problems. It is furthermore shown empirically that broad benefits from the use of HRSOMs in both clustering and classification problems can be expected. Copyright © 2018 Elsevier Ltd. All rights reserved.
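The underlying SOM training loop is compact enough to sketch in numpy. This is a classical low-resolution SOM, not the authors' HRSOM implementation; the grid size, decay schedules, and toy data are arbitrary choices for illustration.

```python
# Minimal SOM sketch: each input is mapped to its best-matching unit (BMU)
# on a 2-D grid, and the BMU's Gaussian neighborhood is pulled toward it.
import numpy as np

def train_som(data, grid=(10, 10), iters=500, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.RandomState(seed)
    gx, gy = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                         indexing="ij")
    coords = np.stack([gx.ravel(), gy.ravel()], axis=1)   # unit grid positions
    W = rng.rand(grid[0] * grid[1], data.shape[1])        # codebook vectors
    for t in range(iters):
        x = data[rng.randint(len(data))]
        lr = lr0 * np.exp(-t / iters)                     # decaying rate
        sigma = sigma0 * np.exp(-t / iters)               # shrinking radius
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)    # grid distances
        h = np.exp(-d2 / (2 * sigma ** 2))                # neighborhood kernel
        W += lr * h[:, None] * (x - W)
    return W, coords

data = np.vstack([np.zeros((20, 3)), np.ones((20, 3))])   # two toy classes
W, coords = train_som(data)
bmus = [int(np.argmin(((W - x) ** 2).sum(1))) for x in data]
```

After training, inputs from the two classes land on different grid units, which is the topology-preserving projection the record describes; an HRSOM is the same idea on a much finer grid.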

  9. Unsupervised spike sorting based on discriminative subspace learning.

    PubMed

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-01-01

    Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering that learns a discriminative 1-dimensional subspace for clustering in each level of the hierarchy until achieving almost unimodal distribution in the subspace. The algorithms are tested on synthetic and in-vivo data, and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in lower dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.

  10. Topological patterns of mesh textures in serpentinites

    NASA Astrophysics Data System (ADS)

    Miyazawa, M.; Suzuki, A.; Shimizu, H.; Okamoto, A.; Hiraoka, Y.; Obayashi, I.; Tsuji, T.; Ito, T.

    2017-12-01

    Serpentinization is a hydration process that forms serpentine minerals and magnetite within the oceanic lithosphere. Microfractures crosscut these minerals during the reactions, and the resulting structures look like mesh textures. It is known that the patterns of microfractures and the system's evolution are affected by the hydration reaction and by fluid transport in fractures and within matrices. This study aims at quantifying the topological patterns of the mesh textures and understanding possible conditions of fluid transport and reaction during serpentinization in the oceanic lithosphere. Two-dimensional simulation by the distinct element method (DEM) generates fracture patterns due to serpentinization. The microfracture patterns are evaluated by persistent homology, which measures features of connected components of a topological space and encodes multi-scale topological features in persistence diagrams. The persistence diagrams of the different mesh textures are evaluated by principal component analysis to bring out their dominant patterns. This approach helps extract feature values of fracture patterns from high-dimensional and complex datasets.

  11. Synthesis of Joint Volumes, Visualization of Paths, and Revision of Viewing Sequences in a Multi-dimensional Seismic Data Viewer

    NASA Astrophysics Data System (ADS)

    Chen, D. M.; Clapp, R. G.; Biondi, B.

    2006-12-01

    Ricksep is a freely-available interactive viewer for multi-dimensional data sets. The viewer is very useful for simultaneous display of multiple data sets from different viewing angles, animation of movement along a path through the data space, and selection of local regions for data processing and information extraction. Several new viewing features are added to enhance the program's functionality in the following three aspects. First, two new data synthesis algorithms are created to adaptively combine information from a data set with mostly high-frequency content, such as seismic data, and another data set with mainly low-frequency content, such as velocity data. Using the algorithms, these two data sets can be synthesized into a single data set which resembles the high-frequency data set on a local scale and at the same time resembles the low- frequency data set on a larger scale. As a result, the originally separated high and low-frequency details can now be more accurately and conveniently studied together. Second, a projection algorithm is developed to display paths through the data space. Paths are geophysically important because they represent wells into the ground. Two difficulties often associated with tracking paths are that they normally cannot be seen clearly inside multi-dimensional spaces and depth information is lost along the direction of projection when ordinary projection techniques are used. The new algorithm projects samples along the path in three orthogonal directions and effectively restores important depth information by using variable projection parameters which are functions of the distance away from the path. Multiple paths in the data space can be generated using different character symbols as positional markers, and users can easily create, modify, and view paths in real time. Third, a viewing history list is implemented which enables Ricksep's users to create, edit and save a recipe for the sequence of viewing states. 
Then, the recipe can be loaded into an active Ricksep session, after which the user can navigate to any state in the sequence and modify the sequence from that state. Typical uses of this feature are undoing and redoing viewing commands and animating a sequence of viewing states. A theoretical discussion is carried out, and several examples using real seismic data are provided to show how these new Ricksep features provide more convenient and accurate ways to manipulate multi-dimensional data sets.

  12. [Features of PHITS and its application to medical physics].

    PubMed

    Hashimoto, Shintaro; Niita, Koji; Matsuda, Norihiro; Iwamoto, Yosuke; Iwase, Hiroshi; Sato, Tatsuhiko; Noda, Shusaku; Ogawa, Tatsuhiko; Nakashima, Hiroshi; Fukahori, Tokio; Furuta, Takuya; Chiba, Satoshi

    2013-01-01

    PHITS is a general-purpose Monte Carlo particle transport simulation code for analyzing the transport and collisions, in three-dimensional phase space, of nearly all particles, including heavy ions, over a wide energy range up to 100 GeV/u. Various quantities, such as particle fluence and deposited energies in materials, can be deduced using estimator functions ("tallies"). Recently, a microdosimetric tally function was also developed to apply PHITS to medical physics. Owing to these features, PHITS has been used for medical applications such as radiation therapy and protection.
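PHITS itself is a large production code, but the core Monte Carlo transport-and-tally idea it implements can be illustrated with a toy: sample exponential free paths through a uniform slab and tally the uncollided fraction, which estimates exp(-μL). This numpy sketch is only an illustration of the method, with made-up material parameters; it is in no way a stand-in for PHITS.

```python
# Toy Monte Carlo attenuation tally: fraction of particles whose first
# free path exceeds the slab thickness estimates exp(-mu * L).
import numpy as np

rng = np.random.RandomState(0)
mu, L, n = 0.2, 5.0, 100_000          # attenuation coeff (1/cm), slab (cm)
paths = rng.exponential(scale=1.0 / mu, size=n)  # free path to 1st collision
transmitted = (paths > L).mean()       # the "tally"
analytic = np.exp(-mu * L)             # closed-form comparison value
```

Real transport codes generalize this loop to 3-D geometry, many particle species, energy-dependent cross sections, and secondary-particle production, but the sample-and-tally structure is the same.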

  13. Classification of Microarray Data Using Kernel Fuzzy Inference System

    PubMed Central

    Kumar Rath, Santanu

    2014-01-01

    The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to obscure useful information. Features with high relevance and significance are generally preferred for selection, since they determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as a feature selection method. Kernel functions are used to map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are considered to analyze the efficiency of the classification model. The proposed approach shows that the K-FIS model obtains results similar to those of the SVM model, indicating that the performance of the proposed approach depends on the kernel function. PMID:27433543
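The kernel trick mentioned above can be made concrete in a single identity: distances in the implicit feature space defined by ϕ are computable from the kernel alone, ||ϕ(x) − ϕ(y)||² = K(x,x) − 2K(x,y) + K(y,y), without ever forming ϕ(x). The vectors and γ below are arbitrary example values.

```python
# Feature-space distance from kernel evaluations only (the kernel trick).
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel; its phi maps into an infinite-dim space."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

x = np.array([1.0, 2.0])
y = np.array([2.0, 0.0])
# ||phi(x) - phi(y)||^2 without ever computing phi
d2_feature_space = rbf(x, x) - 2 * rbf(x, y) + rbf(y, y)
```

For a normalized kernel like the RBF, K(x,x) = 1, so this simplifies to 2(1 − K(x,y)); this is the distance a kernelized method such as K-FIS or an SVM implicitly works with.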

  14. Electrostatic protection of the solar power satellite and rectenna. Part 1: Protection of the solar power satellite

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Several features of the interactions of the Solar Power Satellite (SPS) with its space environment are examined theoretically. The voltages produced at various surfaces due to space plasmas and the plasma leakage currents through the kapton and sapphire solar cell blankets are calculated. At geosynchronous orbit, this parasitic power loss is only 0.7%, and is easily compensated by oversizing. At low Earth orbit, the power loss is potentially much larger (3%), and anomalous arcing is expected for the EOTV high-voltage negative surfaces. Preliminary results of a three-dimensional, self-consistent plasma and electric field computer program are presented, confirming the validity of the predictions made from the one-dimensional models. Lastly, magnetic shielding of the satellite is considered to reduce the power drain and to protect the solar cells from energetic electron and plasma ion bombardment. It is concluded that minor modifications can allow the SPS to operate safely and efficiently in its space environment, and subsequent design changes are not expected to substantially alter these basic conclusions.

  15. Moduli stabilising in heterotic nearly Kähler compactifications

    NASA Astrophysics Data System (ADS)

    Klaput, Michael; Lukas, Andre; Matti, Cyril; Svanes, Eirik E.

    2013-01-01

    We study heterotic string compactifications on nearly Kähler homogeneous spaces, including the gauge field effects which arise at order α'. Using Abelian gauge fields, we are able to solve the Bianchi identity and supersymmetry conditions to this order. The four-dimensional external space-time consists of a domain wall solution with moduli fields varying along the transverse direction. We find that the inclusion of α' corrections improves the moduli stabilization features of this solution. In this case, a combination of the dilaton and the volume modulus asymptotes to a constant value away from the domain wall. It is further shown that the inclusion of non-perturbative effects can stabilize the remaining modulus and "lift" the domain wall to an AdS vacuum. The coset SU(3)/U(1)² is used as an explicit example to demonstrate the validity of this AdS vacuum. Our results show that heterotic nearly Kähler compactifications can lead to maximally symmetric four-dimensional space-times at the non-perturbative level.

  16. Face recognition by applying wavelet subband representation and kernel associative memory.

    PubMed

    Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam

    2004-01-01

    In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to the input space. Our scheme of using modular auto-associative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. In associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide whether the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets: the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
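The reconstruction-error decision rule can be sketched in its linear special case: a per-person auto-associative memory is simply a projector onto the span of that person's training vectors, and a probe is assigned to the memory that reconstructs it with the smallest error. This is a toy linear stand-in for the paper's kernelized AM, with random synthetic "faces".

```python
# Linear auto-associative memory per person + reconstruction-error rule.
import numpy as np

def memory(samples):
    """Projector onto the span of one person's training vectors
    (a linear stand-in for the paper's kernel associative memory)."""
    X = np.asarray(samples).T                # columns = training vectors
    return X @ np.linalg.pinv(X)

rng = np.random.RandomState(0)
A = rng.randn(20, 3) @ rng.randn(3, 50)      # "person A": rank-3 subspace
B = rng.randn(20, 3) @ rng.randn(3, 50)      # "person B": another subspace
Wa, Wb = memory(A), memory(B)

probe = A[0]                                 # a probe face from person A
err_a = np.linalg.norm(Wa @ probe - probe)   # ~0: probe lies in A's span
err_b = np.linalg.norm(Wb @ probe - probe)   # large: B's span misses it
# decision rule: assign the probe to the memory with the smaller error
```

The kernel version in the paper applies the same reconstruct-and-compare logic after a kernel transform, which lets the memory capture nonlinear facial variation.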

  17. Human Activity Recognition in AAL Environments Using Random Projections.

    PubMed

    Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin

    2016-01-01

    Automatic human activity recognition systems aim to capture the state of the user and its environment by exploiting heterogeneous sensors attached to the subject's body and permit continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL), for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, where the classes are separated using the Jaccard distance between probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. The experimental results using the proposed method with Human Activity Dataset (HAD) data are presented.
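The two ingredients named above, a random projection and a Jaccard distance between densities, are both easy to sketch. Here the densities are approximated by shared-bin histograms, and the Jaccard distance between two discretized densities p, q is taken as 1 − Σmin(p,q)/Σmax(p,q); the data and dimensions are synthetic assumptions, not the HAD setup.

```python
# Random projection to low dimension + Jaccard distance between histograms.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 500)                  # high-dimensional feature vectors
R = rng.randn(500, 10) / np.sqrt(10)     # random projection matrix
Z = X @ R                                # low-dimensional projections

def jaccard_distance(p, q):
    """Jaccard distance between two discretized probability densities."""
    return 1.0 - np.minimum(p, q).sum() / np.maximum(p, q).sum()

# compare the density of one projected coordinate across two sample groups
p, _ = np.histogram(Z[:100, 0], bins=20, range=(-4, 4), density=True)
q, _ = np.histogram(Z[100:, 0], bins=20, range=(-4, 4), density=True)
d = jaccard_distance(p, q)
```

In a classifier, each activity (or subject) class gets its own projected-density template, and a new sample is assigned to the class whose template is nearest in this distance.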

  19. Caustic Skeleton & Cosmic Web

    NASA Astrophysics Data System (ADS)

    Feldbrugge, Job; van de Weygaert, Rien; Hidding, Johan; Feldbrugge, Joost

    2018-05-01

    We present a general formalism for identifying the caustic structure of a dynamically evolving mass distribution, in an arbitrary dimensional space. The identification of caustics in fluids with Hamiltonian dynamics, viewed in Lagrangian space, corresponds to the classification of singularities in Lagrangian catastrophe theory. On the basis of this formalism we develop a theoretical framework for the dynamics of the formation of the cosmic web, and specifically those aspects that characterize its unique nature: its complex topological connectivity and multiscale spinal structure of sheetlike membranes, elongated filaments and compact cluster nodes. Given the collisionless nature of the gravitationally dominant dark matter component in the universe, the presented formalism entails an accurate description of the spatial organization of matter resulting from the gravitationally driven formation of cosmic structure. The present work represents a significant extension of the work by Arnol'd et al. [1], who classified the caustics that develop in one- and two-dimensional systems that evolve according to the Zel'dovich approximation. Their seminal work established the defining role of emerging singularities in the formation of nonlinear structures in the universe. At the transition from the linear to nonlinear structure evolution, the first complex features emerge at locations where different fluid elements cross to establish multistream regions. Involving a complex folding of the 6-D sheetlike phase-space distribution, it manifests itself in the appearance of infinite density caustic features. The classification and characterization of these mass element foldings can be encapsulated in caustic conditions on the eigenvalue and eigenvector fields of the deformation tensor field. In this study we introduce an alternative and transparent proof for Lagrangian catastrophe theory. 
This facilitates the derivation of the caustic conditions for general Lagrangian fluids, with arbitrary dynamics. Most important in the present context is that it allows us to follow and describe the full three-dimensional geometric and topological complexity of the purely gravitationally evolving nonlinear cosmic matter field. While generic and statistical results can be based on the eigenvalue characteristics, one of our key findings is that of the significance of the eigenvector field of the deformation field for outlining the entire spatial structure of the caustic skeleton emerging from a primordial density field. In this paper we explicitly consider the caustic conditions for the three-dimensional Zel'dovich approximation, extending earlier work on those for one- and two-dimensional fluids towards the full spatial richness of the cosmic web. In an accompanying publication, we apply this towards a full three-dimensional study of caustics in the formation of the cosmic web and evaluate to what extent it manages to outline and identify the intricate skeletal features in the corresponding N-body simulations.
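For reference, the Zel'dovich approximation invoked above, and the associated caustic condition on the eigenvalues of the deformation tensor, can be stated compactly (sign conventions for the displacement potential vary across the literature; this is one common choice):

```latex
% Zel'dovich approximation: Eulerian position x of the mass element with
% Lagrangian coordinate q, growing mode D_+(t), displacement potential Psi
\mathbf{x}(\mathbf{q},t) \;=\; \mathbf{q} \;+\; D_+(t)\,\nabla_{\mathbf{q}}\Psi(\mathbf{q})

% Caustics (formally infinite density) appear where the Lagrangian-to-Eulerian
% map folds, i.e. where its Jacobian vanishes:
\det\!\left[\frac{\partial \mathbf{x}}{\partial \mathbf{q}}\right]
  \;=\; \prod_{i=1}^{3}\bigl(1 + D_+(t)\,\lambda_i(\mathbf{q})\bigr) \;=\; 0,

% with lambda_i(q) the eigenvalues of the deformation tensor
\psi_{ij}(\mathbf{q}) \;=\; \frac{\partial^{2}\Psi}{\partial q_i\,\partial q_j}.
```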

  20. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied for feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. 
EPFL Press. With a CD: data, software, guides. (2009). 2. Kanevski M. Spatial Predictions of Soil Contamination Using General Regression Neural Networks. Systems Research and Information Systems, Volume 8, number 4, 1999. 3. Robert S., Foresti L., Kanevski M. Spatial prediction of monthly wind speeds in complex terrain with adaptive general regression neural networks. International Journal of Climatology, 33 pp. 1793-1804, 2013.
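The GRNN/Nadaraya-Watson estimator with leave-one-out bandwidth selection described above can be sketched as follows. This is a toy one-dimensional example with an isotropic Gaussian kernel; the study itself uses anisotropic kernels over many geo-features, and the data and candidate bandwidths here are illustrative assumptions.

```python
import math

def grnn_predict(x, X, y, sigma, skip=None):
    """Nadaraya-Watson / GRNN estimate at x, optionally skipping one training point."""
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(X, y)):
        if i == skip:
            continue
        d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        num += w * yi
        den += w
    return num / den

def loo_error(X, y, sigma):
    """Leave-one-out mean squared error for a given kernel bandwidth."""
    errs = [(grnn_predict(X[i], X, y, sigma, skip=i) - y[i]) ** 2
            for i in range(len(X))]
    return sum(errs) / len(errs)

# Toy 1-D data: y = x^2 on [0, 1].
X = [[i / 10.0] for i in range(11)]
y = [x[0] ** 2 for x in X]

# Pick the bandwidth minimizing the LOO error -- the "adaptive" step
# (here a single isotropic sigma instead of one parameter per feature).
best_sigma = min([0.05, 0.1, 0.2, 0.5], key=lambda s: loo_error(X, y, s))
print(best_sigma)
```

In the anisotropic case each input dimension gets its own bandwidth, and dimensions whose optimal bandwidth grows very large are effectively irrelevant, which is what turns the same machinery into a feature selection tool.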

  1. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions of what limits of dimensionality such methods can handle, what role penalty functions play, and what their statistical properties are rapidly drive advances in the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
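The simplest concrete instance of penalized-likelihood variable selection is the Lasso, solved here by coordinate descent with soft-thresholding. This is a minimal sketch on invented toy data (no intercept, small fixed iteration count), not a production solver; it shows how the L1 penalty sets an irrelevant coefficient exactly to zero.

```python
def soft_threshold(z, t):
    """Soft-thresholding, the proximal operator of the L1 penalty."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=50):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1 (no intercept)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution excluded
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / z
    return b

# Toy data: y depends only on feature 0; feature 1 is irrelevant noise.
X = [[1.0, 0.1], [2.0, -0.1], [3.0, 0.1], [4.0, -0.1], [5.0, 0.0]]
y = [2.0 * row[0] for row in X]
beta = lasso_cd(X, y, lam=0.1)
print([round(v, 3) for v in beta])  # the irrelevant coefficient is exactly zero
```

Non-concave penalties such as SCAD, emphasized in the article, replace the soft-thresholding rule with one that shrinks large coefficients less, reducing the Lasso's estimation bias.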

  2. Mean apparent propagator (MAP) MRI: a novel diffusion imaging method for mapping tissue microstructure.

    PubMed

    Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J

    2013-09-01

    Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher order terms enables the reconstruction of the true average propagator whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP) and its variants for diffusion in one and two dimensions: the return-to-the-plane probability (RTPP) and the return-to-the-axis probability (RTAP), respectively. These zero net displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators. 
Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue. Copyright © 2013 Elsevier Inc. All rights reserved.
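As a point of reference for the "lowest order term" mentioned above: in the purely Gaussian (DTI-equivalent) case, the mean apparent propagator reduces to the standard anisotropic Gaussian displacement distribution. This is the textbook Gaussian diffusion propagator, stated here for orientation rather than taken from the paper (diffusion tensor \(\mathbf{D}\), diffusion time \(t\)):

```latex
P(\mathbf{r}, t) \;=\;
  \frac{1}{\sqrt{(4\pi t)^{3}\,\det \mathbf{D}}}\,
  \exp\!\left(-\,\frac{\mathbf{r}^{\mathsf{T}}\mathbf{D}^{-1}\mathbf{r}}{4t}\right)
% Higher-order terms of the MAP-MRI basis expansion capture the
% non-Gaussian part of the displacement distribution.
```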

  3. The Cutplane - A tool for interactive solid modeling

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Kessler, William; Leifer, Larry

    1988-01-01

    A geometric modeling system which incorporates a new concept for intuitively and unambiguously specifying and manipulating points or features in three dimensional space is presented. The central concept, the Cutplane, consists of a plane that moves through space under control of a mouse or similar input device. The intersection of the plane and any object is highlighted, and only this highlighted section can be selected for manipulation. Selection is accomplished with a crosshair that is constrained to remain within the plane, so that the relationship between the crosshair and the feature of interest is immediately evident. Although the idea of a section view is not new, previously it has been used as a way to reveal hidden structure, not as a means of manipulating objects or indicating spatial position, as is proposed here.

  4. Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach

    NASA Astrophysics Data System (ADS)

    Liu, Wenyang; Sawant, Amit; Ruan, Dan

    2016-07-01

    The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend this rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.

  5. Distinction of Green Sweet Peppers by Using Various Color Space Models and Computation of 3 Dimensional Location Coordinates of Recognized Green Sweet Peppers Based on Parallel Stereovision System

    NASA Astrophysics Data System (ADS)

    Bachche, Shivaji; Oka, Koichi

    2013-06-01

    This paper presents a comparative study of various color space models to determine the most suitable one for detection of green sweet peppers. The images were captured using CCD cameras and infrared cameras and processed using Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For color images, the CieLab, YIQ, YUV, HSI and HSV color space models were selected for image processing, whereas for infrared images a grayscale color space model was used. In the case of color images, the HSV color space model proved the most effective, with the highest percentage of green sweet pepper detections, followed by the HSI color space model, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating the fruit in an image at a specific threshold value. Overlapped fruits, or fruits covered by leaves, can be detected more reliably with the HSV color space model, as the reflection feature from fruits produces a higher histogram response than the reflection feature from leaves. The IR 80 optical filter failed to distinguish fruits in the images, as the filter blocks useful feature information. Computation of the 3D coordinates of recognized green sweet peppers was also conducted, in which the Halcon image processing software provided the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined; a distance of 500 to 600 mm between the cameras and the fruits was found suitable for computing depth precisely when the distance between the two cameras was maintained at 100 mm.
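The HSV thresholding idea can be sketched with the standard-library colorsys conversion. The hue/saturation/value thresholds and the helper name below are hypothetical choices for illustration, not the values used in the study (which relied on Halcon).

```python
import colorsys

def is_green_fruit(r, g, b, hue_range=(0.20, 0.45), min_sat=0.3, min_val=0.2):
    """Crude HSV threshold test for 'green' pixels (hypothetical thresholds).

    colorsys returns h, s, v each in [0, 1]; green hues sit roughly
    in the 0.2-0.45 band of the hue circle.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_range[0] <= h <= hue_range[1] and s >= min_sat and v >= min_val

print(is_green_fruit(60, 180, 70))   # leafy green pixel -> True
print(is_green_fruit(200, 40, 30))   # red pixel -> False
```

Because hue is decoupled from brightness in HSV, the same hue band matches both sunlit and shaded fruit pixels, which is the property the abstract credits for detecting overlapped or leaf-covered fruits.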

  6. A meta-classifier for detecting prostate cancer by quantitative integration of in vivo magnetic resonance spectroscopy and magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Tiwari, Pallavi; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Recently, in vivo Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS) have emerged as promising new modalities to aid in prostate cancer (CaP) detection. MRI provides anatomic and structural information of the prostate while MRS provides functional data pertaining to biochemical concentrations of metabolites such as creatine, choline and citrate. We have previously presented a hierarchical clustering scheme for CaP detection on in vivo prostate MRS and have recently developed a computer-aided method for CaP detection on in vivo prostate MRI. In this paper we present a novel scheme to develop a meta-classifier to detect CaP in vivo via quantitative integration of multimodal prostate MRS and MRI by use of non-linear dimensionality reduction (NLDR) methods including spectral clustering and locally linear embedding (LLE). Quantitative integration of multimodal image data (MRI and PET) involves the concatenation of image intensities following image registration. However multimodal data integration is non-trivial when the individual modalities include spectral and image intensity data. We propose a data combination solution wherein we project the feature spaces (image intensities and spectral data) associated with each of the modalities into a lower dimensional embedding space via NLDR. NLDR methods preserve the relationships between the objects in the original high dimensional space when projecting them into the reduced low dimensional space. Since the original spectral and image intensity data are divorced from their original physical meaning in the reduced dimensional space, data at the same spatial location can be integrated by concatenating the respective embedding vectors. Unsupervised consensus clustering is then used to partition objects into different classes in the combined MRS and MRI embedding space. 
Quantitative results of our multimodal computer-aided diagnosis scheme on 16 sets of patient data obtained from the ACRIN trial, for which corresponding histological ground truth for spatial extent of CaP is known, show a marginally higher sensitivity, specificity, and positive predictive value compared to corresponding CAD results with the individual modalities.

  7. Age-related cerebral white matter changes and pulse-wave encephalopathy: observations with three-dimensional MRI.

    PubMed

    Henry Feugeas, Marie Cécile; De Marco, Giovanni; Peretti, Ilana Idy; Godon-Hardy, Sylvie; Fredy, Daniel; Claeys, Elisabeth Schouman

    2005-11-01

    Our purpose was to investigate leukoaraïosis (LA) using three-dimensional MR imaging combined with advanced image-processing technology to attempt to group signal abnormalities according to their etiology. Coronal T2-weighted fast fluid-attenuated inversion-recovery (FLAIR) sequences and three-dimensional T1-weighted fast spoiled gradient recalled echo sequences were used to examine cerebral white matter changes in 75 elderly people with memory complaint but no dementia. They were otherwise healthy, community-dwelling subjects. Three subtypes of LA were defined on the basis of their shape, geography and extent: the so-called subependymal/subpial LA, perivascular LA and "bands" along long white matter tracts. Subependymal changes were directly contiguous with ventricular spaces. They showed features of "water hammer" lesions with ventricular systematisation and a more frequent location around the frontal horns than around the bodies (P=.0008). The use of a cerebrospinal fluid (CSF) contiguity criterion allowed a classification of splenial changes in the subpial group. Conversely, posterior periventricular lesions in the centrum ovale as well as irregular and extensive periventricular lesions were not directly contiguous with CSF spaces. The so-called perivascular changes showed features of small-vessel-associated disease; they surrounded linear CSF-like signals that followed the direction of perforating vessels. Distribution of these perivascular changes appeared heterogeneous (P ranging from .04 to 5×10⁻¹⁶). These findings suggest that subependymal/subpial LA and subcortical LA may be separate manifestations of a single underlying pulse-wave encephalopathy.

  8. Disease Prediction based on Functional Connectomes using a Scalable and Spatially-Informed Support Vector Machine

    PubMed Central

    Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra

    2014-01-01

    Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268
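The structured objective combining the hinge loss with a fused-Lasso penalty can be written down directly. The sketch below only evaluates that objective on invented toy data with one-dimensional adjacency between weights; the paper penalizes differences over 6-D connectome neighbors and minimizes the objective with an augmented-Lagrangian/ADMM solver, which is not reproduced here.

```python
def hinge_loss(w, X, y):
    """Average hinge loss max(0, 1 - y * <w, x>) over the samples."""
    return sum(max(0.0, 1.0 - yi * sum(wi * xi for wi, xi in zip(w, x)))
               for x, yi in zip(X, y)) / len(X)

def fused_lasso_penalty(w, lam1, lam2):
    """Sparsity term plus differences between 'spatially adjacent' weights.

    Adjacency here is simply position along the vector; in the paper it is
    defined by neighboring coordinate pairs in 6-D connectome space.
    """
    l1 = sum(abs(wi) for wi in w)
    tv = sum(abs(w[i + 1] - w[i]) for i in range(len(w) - 1))
    return lam1 * l1 + lam2 * tv

# Tiny linearly separable problem with invented weights.
X = [[1.0, 1.0, 0.9], [0.8, 1.1, 1.0], [-1.0, -0.9, -1.1], [-1.1, -1.0, -0.8]]
y = [1, 1, -1, -1]
w = [0.5, 0.5, 0.5]
obj = hinge_loss(w, X, y) + fused_lasso_penalty(w, lam1=0.01, lam2=0.01)
print(round(obj, 4))  # → 0.015 (zero hinge loss, penalty terms only)
```

The fused term rewards weight vectors that are piecewise constant over adjacent features, which is what yields the spatially contiguous predictive regions the authors report.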

  9. Tensor networks from kinematic space

    DOE PAGES

    Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel; ...

    2016-07-20

    We point out that the MERA network for the ground state of a 1+1-dimensional conformal field theory has the same structural features as kinematic space — the geometry of CFT intervals. In holographic theories kinematic space becomes identified with the space of bulk geodesics studied in integral geometry. We argue that in these settings MERA is best viewed as a discretization of the space of bulk geodesics rather than of the bulk geometry itself. As a test of this kinematic proposal, we compare the MERA representation of the thermofield-double state with the space of geodesics in the two-sided BTZ geometry, obtaining a detailed agreement which includes the entwinement sector. In conclusion, we discuss how the kinematic proposal can be extended to excited states by generalizing MERA to a broader class of compression networks.

  10. Space-time topology and quantum gravity.

    NASA Astrophysics Data System (ADS)

    Friedman, J. L.

    Characteristic features are discussed of a theory of quantum gravity that allows space-time with a non-Euclidean topology. The review begins with a summary of the manifolds that can occur as classical vacuum space-times and as space-times with positive energy. Local structures with non-Euclidean topology - topological geons - collapse, and one may conjecture that in asymptotically flat space-times non-Euclidean topology is hidden from view. In the quantum theory, large diffeos can act nontrivially on the space of states, leading to state vectors that transform as representations of the corresponding symmetry group π0(Diff). In particular, in a quantum theory that, at energies E < E_Planck, is a theory of the metric alone, there appear to be ground states with half-integral spin, and in higher-dimensional gravity, with the kinematical quantum numbers of fundamental fermions.

  11. Detection of epileptiform activity in EEG signals based on time-frequency and non-linear analysis

    PubMed Central

    Gajic, Dragoljub; Djurovic, Zeljko; Gligorijevic, Jovan; Di Gennaro, Stefano; Savic-Gajic, Ivana

    2015-01-01

    We present a new technique for detection of epileptiform activity in EEG signals. After preprocessing of EEG signals we extract representative features in time, frequency and time-frequency domain as well as using non-linear analysis. The features are extracted in a few frequency sub-bands of clinical interest since these sub-bands showed much better discriminatory characteristics compared with the whole frequency band. Then we optimally reduce the dimension of feature space to two using scatter matrices. A decision about the presence of epileptiform activity in EEG signals is made by quadratic classifiers designed in the reduced two-dimensional feature space. The accuracy of the technique was tested on three sets of electroencephalographic (EEG) signals recorded at the University Hospital Bonn: surface EEG signals from healthy volunteers, intracranial EEG signals from the epilepsy patients during the seizure free interval from within the seizure focus and intracranial EEG signals of epileptic seizures also from within the seizure focus. An overall detection accuracy of 98.7% was achieved. PMID:25852534
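The scatter-matrix dimension reduction mentioned above can be illustrated with the closed-form Fisher discriminant direction w = Sw⁻¹(m₁ − m₀). The sketch below reduces invented 2-D, two-class data to one dimension (the paper reduces multi-band EEG features to a two-dimensional space, which requires more than one discriminant direction):

```python
def mean_vec(X):
    n = len(X)
    return [sum(x[j] for x in X) / n for j in range(len(X[0]))]

def within_scatter(X, m):
    """2x2 within-class scatter: sum of (x - m)(x - m)^T over the class."""
    S = [[0.0, 0.0], [0.0, 0.0]]
    for x in X:
        d = [x[0] - m[0], x[1] - m[1]]
        for i in range(2):
            for j in range(2):
                S[i][j] += d[i] * d[j]
    return S

def fisher_direction(X0, X1):
    """w = Sw^{-1} (m1 - m0) for two classes of 2-D points."""
    m0, m1 = mean_vec(X0), mean_vec(X1)
    S0, S1 = within_scatter(X0, m0), within_scatter(X1, m1)
    Sw = [[S0[i][j] + S1[i][j] for j in range(2)] for i in range(2)]
    det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
    inv = [[Sw[1][1] / det, -Sw[0][1] / det],
           [-Sw[1][0] / det, Sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

X0 = [[0.0, 0.0], [1.0, 0.2], [0.2, 1.0], [1.0, 1.0]]   # class 0 (toy)
X1 = [[3.0, 3.0], [4.0, 3.2], [3.2, 4.0], [4.0, 4.0]]   # class 1 (toy)
w = fisher_direction(X0, X1)
p0 = [w[0] * x[0] + w[1] * x[1] for x in X0]
p1 = [w[0] * x[0] + w[1] * x[1] for x in X1]
print(max(p0) < min(p1))  # → True: projections are fully separated
```

Once the features live in such a low-dimensional discriminant space, a simple quadratic classifier of the kind used in the paper can be fitted cheaply.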

  12. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    PubMed

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
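A Bernoulli naive Bayes over binary activity profiles, in the spirit of the profile-based models described above, can be sketched as follows. The profiles, class labels, and smoothing scheme are invented for illustration; they are not the paper's data or exact model.

```python
import math

def train_bernoulli_nb(X, y):
    """Per-class Bernoulli feature probabilities with Laplace smoothing."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, yi in zip(X, y) if yi == c]
        prior = len(rows) / len(X)
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (math.log(prior), probs)
    return model

def predict(model, x):
    """Return the class with the highest log-posterior for profile x."""
    best, best_lp = None, -float("inf")
    for c, (log_prior, probs) in model.items():
        lp = log_prior + sum(math.log(p if xi else 1 - p)
                             for xi, p in zip(x, probs))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Binary activity profiles (1 = active against target j) as features;
# class labels are hypothetical compound groups.
X = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
y = ["scaffold_A", "scaffold_A", "scaffold_B", "scaffold_B"]
model = train_bernoulli_nb(X, y)
print(predict(model, [1, 1, 1, 0]))  # → scaffold_A
```

Because each feature probability is estimated independently per class, the model trains even when profiles are incomplete, which is the property the abstract highlights as making NB (but not SVM) applicable to activity-profile data.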

  13. Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval

    NASA Astrophysics Data System (ADS)

    Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit

    2016-03-01

    We explore the combination of text metadata, such as patients' age and gender, with image-based features, for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which is performed by comparing the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with best results using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high dimensional deep learning features in a 2-D space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and among themselves, which is a characteristic we do not see in a traditional 1-D ranking.

  14. Zero-dimensional to three-dimensional nanojoining: current status and potential applications

    DOE PAGES

    Ma, Ying; Li, Hong; Bridges, Denzel; ...

    2016-08-01

    We report that the continuing miniaturization of microelectronics is pushing advanced manufacturing into nanomanufacturing. Nanojoining is a bottom-up assembly technique that enables functional nanodevice fabrication with dissimilar nanoscopic building blocks and/or molecular components. Various conventional joining techniques have been modified and re-invented for joining nanomaterials. Our review surveys recent progress in nanojoining methods, as compared to conventional joining processes. Examples of nanojoining are given and classified by the dimensionality of the joining materials. At each classification, nanojoining is reviewed and discussed according to materials specialties, low dimensional processing features, energy input mechanisms and potential applications. The preparation of new intermetallic materials by reactive nanoscale multilayer foils based on self-propagating high-temperature synthesis is highlighted. This review will provide insight into nanojoining fundamentals and innovative applications in power electronics packaging, plasmonic devices, nanosoldering for printable electronics, 3D printing and space manufacturing.

  15. GridMass: a fast two-dimensional feature detection method for LC/MS.

    PubMed

    Treviño, Victor; Yañez-Garza, Irma-Luz; Rodriguez-López, Carlos E; Urrea-López, Rafael; Garza-Rodriguez, Maria-Lourdes; Barrera-Saldaña, Hugo-Alberto; Tamez-Peña, José G; Winkler, Robert; Díaz de-la-Garza, Rocío-Isabel

    2015-01-01

    One of the initial and critical procedures for the analysis of metabolomics data using liquid chromatography and mass spectrometry is feature detection. Feature detection is the process of detecting boundaries of the mass surface in raw data. The raw data consist of detected abundances arranged in a two-dimensional (2D) matrix of mass/charge and elution time. MZmine 2 is one of the leading software environments that provide a full analysis pipeline for these data. However, the feature detection algorithms provided in MZmine 2 are based mainly on the analysis of one dimension at a time. We propose GridMass, an efficient algorithm for 2D feature detection. The algorithm is based on landing probes across the chromatographic space that are moved to find local maxima providing accurate boundary estimations. We tested GridMass on a controlled marker experiment, on plasma samples, on plant fruits, and in a proteome sample. Compared with other algorithms, GridMass is faster and may achieve comparable or better sensitivity and specificity. As a proof of concept, GridMass has been implemented in Java under the MZmine 2 environment and is available at http://www.bioinformatica.mty.itesm.mx/GridMass and MASSyPup. It has also been submitted to the MZmine 2 developing community. Copyright © 2015 John Wiley & Sons, Ltd.
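The notion of finding local maxima over the 2D chromatographic surface can be illustrated with a simple 8-neighborhood peak test. This is a deliberate simplification of GridMass's moving-probe strategy, and the toy abundance matrix is invented:

```python
def local_maxima(grid, min_intensity=0.0):
    """Return (row, col) cells strictly greater than all of their neighbors."""
    peaks = []
    R, C = len(grid), len(grid[0])
    for r in range(R):
        for c in range(C):
            v = grid[r][c]
            if v <= min_intensity:
                continue
            nb = [grid[i][j]
                  for i in range(max(0, r - 1), min(R, r + 2))
                  for j in range(max(0, c - 1), min(C, c + 2))
                  if (i, j) != (r, c)]
            if all(v > x for x in nb):
                peaks.append((r, c))
    return peaks

# Toy m/z (rows) x elution-time (cols) abundance matrix with two features.
grid = [
    [0, 1, 0, 0, 0],
    [1, 5, 1, 0, 0],
    [0, 1, 0, 2, 0],
    [0, 0, 2, 9, 2],
    [0, 0, 0, 2, 0],
]
print(local_maxima(grid))  # → [(1, 1), (3, 3)]
```

A real detector additionally estimates each feature's boundary around the maximum (the role of GridMass's probes) rather than reporting only the apex cell.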

  16. Curved singular beams for three-dimensional particle manipulation.

    PubMed

    Zhao, Juanying; Chremmos, Ioannis D; Song, Daohong; Christodoulides, Demetrios N; Efremidis, Nikolaos K; Chen, Zhigang

    2015-07-13

    For decades, singular beams carrying angular momentum have been a topic of considerable interest. Their intriguing applications are ubiquitous in a variety of fields, ranging from optical manipulation to photon entanglement, and from microscopy and coronagraphy to free-space communications, detection of rotating black holes, and even relativistic electrons and strong-field physics. In most applications, however, singular beams travel naturally along a straight line, expanding during linear propagation or breaking up in nonlinear media. Here, we design and demonstrate diffraction-resisting singular beams that travel along arbitrary trajectories in space. These curved beams not only maintain an invariant dark "hole" in the center but also preserve their angular momentum, exhibiting combined features of optical vortex, Bessel, and Airy beams. Furthermore, we observe three-dimensional spiraling of microparticles driven by such fine-shaped dynamical beams. Our findings may open up new avenues for shaped light in various applications.

  17. A reward optimization method based on action subrewards in hierarchical reinforcement learning.

    PubMed

    Fu, Yuchen; Liu, Quan; Ling, Xionghong; Cui, Zhiming

    2014-01-01

    Reinforcement learning (RL) is a kind of interactive learning method whose main characteristics are "trial and error" and "related reward." A hierarchical reinforcement learning method based on action subrewards is proposed to address the "curse of dimensionality," in which the state space grows exponentially with the number of features and convergence is slow. The method can greatly reduce the state space and choose actions purposefully and efficiently, so as to optimize the reward function and speed up convergence. Applied to online learning in the game of Tetris, the experimental results show that the convergence speed of the algorithm is evidently enhanced by the new method, which combines a hierarchical reinforcement learning algorithm with action subrewards. The "curse of dimensionality" problem is also mitigated to a certain extent by the hierarchical method. Performance under different parameters is compared and analyzed as well.
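
    The abstract does not give the paper's exact subreward scheme; as a generic, toy illustration of shaping a reward with per-action subrewards (a 1-D chain task with invented parameters, not the paper's Tetris setup):

```python
import random

def q_learning(n_states=10, episodes=200, alpha=0.5, gamma=0.9, eps=0.2, shaped=True):
    """Tabular Q-learning on a 1-D chain whose goal is the right-most state.
    With shaped=True, each step earns a small action subreward for moving
    toward the goal, in addition to the sparse terminal reward."""
    random.seed(0)
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            if shaped:
                r += 0.1 if s2 > s else -0.1     # action subreward
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

    With the subreward, every step carries an informative signal, so value estimates propagate faster than they would with the sparse terminal reward alone.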

  18. The Impact of The Fractal Paradigm on Geography

    NASA Astrophysics Data System (ADS)

    De Cola, L.

    2001-12-01

    Being itself somewhat fractal, Benoit Mandelbrot's magnum opus THE FRACTAL GEOMETRY OF NATURE may be deconstructed in many ways, including geometrically, systematically, and epistemologically. Viewed as a work of geography it may be used to organize the major topics of interest to scientists preoccupied with the understanding of real-world space in astronomy, geology, meteorology, hydrology, and biology. We shall use it to highlight such recent geographic accomplishments as automated feature detection, understanding urban growth, and modeling the spread of disease in space and time. However, several key challenges remain unsolved, among them: 1. It is still not possible to move continuously from one map scale to another so that objects change their dimension smoothly; i.e., as a viewer zooms in on a map the zero-dimensional location of a city should gradually become a 2-dimensional polygon, then a network of 1-dimensional streets, then 3-dimensional buildings, etc. 2. Spatial autocorrelation continues to be regarded more as an econometric challenge than as a problem of scaling. Similarities of values among closely-spaced observations are not so much a problem to be overcome as a source of information about spatial structure. 3. Although the fractal paradigm is a powerful model for data analysis, its ideas and techniques need to be brought to bear on the problems of understanding such hierarchies as ecosystems (the flow networks of energy and matter), taxonomies (biological classification), and knowledge (hierarchies of bureaucratic information, networks of linked data, etc.).
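
    Box counting is the standard way to estimate the fractal dimension of map features such as coastlines or urban boundaries; a minimal sketch:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary 2-D mask by
    counting occupied boxes at several scales and fitting a line to
    log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # tile the mask into s x s boxes and count boxes containing any pixel
        boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    For a filled square the estimated slope is 2, the Euclidean dimension; genuinely fractal structures yield non-integer values between 1 and 2.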

  19. Fault-tolerant control of large space structures using the stable factorization approach

    NASA Technical Reports Server (NTRS)

    Razavi, H. C.; Mehra, R. K.; Vidyasagar, M.

    1986-01-01

    Large space structures are characterized by the following features: they are in general infinite-dimensional systems, and have large numbers of undamped or lightly damped poles. Any attempt to apply linear control theory to large space structures must therefore take into account these features. Phase I consisted of an attempt to apply the recently developed Stable Factorization (SF) design philosophy to problems of large space structures, with particular attention to the aspects of robustness and fault tolerance. The final report on the Phase I effort consists of four sections, each devoted to one task. The first three sections report theoretical results, while the last consists of a design example. Significant results were obtained in all four tasks of the project. More specifically, an innovative approach to order reduction was obtained, stabilizing controller structures for plants with an infinite number of unstable poles were determined under some conditions, conditions for simultaneous stabilizability of an infinite number of plants were explored, and a fault tolerance controller design that stabilizes a flexible structure model was obtained which is robust against one failure condition.

  20. Machine-learned cluster identification in high-dimensional data.

    PubMed

    Ultsch, Alfred; Lötsch, Jörn

    2017-02-01

    High-dimensional biomedical data are frequently clustered to identify subgroup structures pointing at distinct disease subtypes. It is crucial that the cluster algorithm used works correctly. However, by imposing a predefined shape on the clusters, classical algorithms occasionally suggest a cluster structure in homogeneously distributed data or assign data points to incorrect clusters. We analyzed whether this can be avoided by using emergent self-organizing feature maps (ESOM). Data sets with different degrees of complexity were submitted to ESOM analysis with large numbers of neurons, using an interactive R-based bioinformatics tool. On top of the trained ESOM the distance structure in the high dimensional feature space was visualized in the form of a so-called U-matrix. Clustering results were compared with those provided by classical common cluster algorithms including single linkage, Ward and k-means. Ward clustering imposed cluster structures on cluster-less "golf ball", "cuboid" and "S-shaped" data sets that contained no structure at all (random data). Ward clustering also imposed structures on permuted real world data sets. By contrast, the ESOM/U-matrix approach correctly found that these data contain no cluster structure. However, ESOM/U-matrix was correct in identifying clusters in biomedical data truly containing subgroups. It was always correct in cluster structure identification in further canonical artificial data. Using intentionally simple data sets, it is shown that popular clustering algorithms typically used for biomedical data sets may fail to cluster data correctly, suggesting that they are also likely to perform erroneously on high dimensional biomedical data. The present analyses emphasized that generally established classical hierarchical clustering algorithms carry a considerable tendency to produce erroneous results. 
By contrast, unsupervised machine-learned analysis of cluster structures, applied using the ESOM/U-matrix method, is a viable, unbiased method to identify true clusters in the high-dimensional space of complex data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
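
    The abstract's point about imposed structure can be reproduced with any partitioning algorithm: asked for k clusters, it reports k clusters even in structureless data. A minimal Lloyd's-algorithm sketch (not the ESOM/U-matrix tool itself):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 3))   # structureless "golf ball"-like data
labels = kmeans(X, 4)            # k-means still partitions it into "clusters"
```

    The algorithm has no mechanism for reporting "no structure"; validating cluster existence is a separate step, which is exactly the gap the U-matrix visualization addresses.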

  1. Signatures of extra dimensions in gravitational waves from black hole quasinormal modes

    NASA Astrophysics Data System (ADS)

    Chakraborty, Sumanta; Chakravarti, Kabir; Bose, Sukanta; SenGupta, Soumitra

    2018-05-01

    In this work, we have derived the evolution equation for gravitational perturbations in four-dimensional spacetime in the presence of a spatial extra dimension. The evolution equation is derived by perturbing the effective gravitational field equations on the four-dimensional spacetime, which inherits nontrivial higher-dimensional effects. Note that this is different from the perturbation of the five-dimensional gravitational field equations that exists in the literature, and it possesses quantitatively new features. The gravitational perturbation has further been decomposed into a purely four-dimensional part and another piece that depends on the extra dimension. The four-dimensional gravitational perturbation now admits massive propagating degrees of freedom, owing to the existence of higher dimensions. We have also studied the influence of these massive propagating modes on the quasinormal mode frequencies, signaling the higher-dimensional nature of the spacetime, and have contrasted these massive modes with the massless modes in general relativity. Surprisingly, it turns out that the massive modes experience damping much smaller than that of the massless modes in general relativity and may even dominate over and above the general relativity contribution if one observes the ringdown phase of a black hole merger event at sufficiently late times. Furthermore, the whole analytical framework has been supplemented by a fully numerical Cauchy evolution as well. In this context, we have shown that, except for minute details, the overall features of the gravitational perturbations are captured both in the Cauchy evolution and in the analysis of quasinormal modes. The implications for observations of black holes with LIGO and proposed space missions such as LISA are also discussed.

  2. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. 
Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low-dimensional manifold. The fixed-point iterative approach turns out to work well in practice for the pre-image recovery. Our approach is particularly suitable for managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
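
    The fixed-point pre-image step mentioned above can be sketched for a Gaussian kernel (in the style of the classic kernel-PCA pre-image iteration of Mika et al.; in practice the weights would come from the low-dimensional prediction, here a one-hot vector for illustration):

```python
import numpy as np

def preimage(X, weights, z0, gamma=0.5, iters=50):
    """Fixed-point iteration for the Gaussian-kernel pre-image of a
    feature-space point expressed by kernel weights over training samples X:
        z <- sum_i w_i k(z, x_i) x_i / sum_i w_i k(z, x_i)
    """
    z = z0.astype(float).copy()
    for _ in range(iters):
        k = weights * np.exp(-gamma * ((X - z) ** 2).sum(axis=1))
        z = (k[:, None] * X).sum(axis=0) / k.sum()
    return z

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
w = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # feature-space point = image of X[2]
z = preimage(X, w, X.mean(axis=0))        # iteration recovers X[2]
```

    When the weights are a one-hot vector the iteration converges to the corresponding training sample; for a general predicted feature-space point it converges to an approximate pre-image in the original state space.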

  3. Global Interior Robot Localisation by a Colour Content Image Retrieval System

    NASA Astrophysics Data System (ADS)

    Chaari, A.; Lelandais, S.; Montagne, C.; Ahmed, M. Ben

    2007-12-01

    We propose a new global localisation approach to determine a coarse position of a mobile robot in structured indoor space using colour-based image retrieval techniques. We use an original method of colour quantisation based on the baker's transformation to extract a two-dimensional colour palette combining spatial and vicinity-related information as well as the colourimetric aspect of the original image. We conceive several retrieval approaches leading to a specific similarity measure integrating the spatial organisation of colours in the palette. The baker's transformation provides a quantisation of the image into a space where colours that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image, whereas the similarity measure provides partial invariance to translation, small changes in viewpoint, and scale factor. In addition to this study, we developed a hierarchical search module based on the logical classification of images by room. This hierarchical module reduces the indoor search space and improves our system's performance. Results are then compared with those obtained using colour histograms with several similarity measures. In this paper, we focus on colour-based features to describe indoor images. A finalised system must obviously integrate other types of signatures such as shape and texture.
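
    The precise discretisation used by the authors is not given in the abstract; one common discrete variant of the baker's transformation, a pixel permutation that folds neighbourhood information across the image, looks like this:

```python
import numpy as np

def baker(img):
    """One common discretisation of the baker's map for an N x N image
    (N even): the left half is compressed vertically by 2 and stretched
    horizontally by 2 into the top half, the right half into the bottom
    half. The map is a bijection on pixels, so no colour is lost."""
    n = img.shape[0]
    out = np.empty_like(img)
    for r in range(n):
        for c in range(n):
            if c < n // 2:
                out[r // 2, 2 * c + r % 2] = img[r, c]
            else:
                out[n // 2 + r // 2, 2 * (c - n // 2) + r % 2] = img[r, c]
    return out

img = np.arange(64).reshape(8, 8)
scrambled = baker(img)
```

    Iterating the map mixes pixels that were originally close, which is how spatial vicinity information ends up encoded in the resulting colour palette.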

  4. Correction of Cardy–Verlinde formula for Fermions and Bosons with modified dispersion relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadatian, S. Davood, E-mail: sd-sadatian@um.ac.ir; Dareyni, H.

    The Cardy–Verlinde formula links the entropy of a conformal field theory to its total energy and its Casimir energy in a D-dimensional space. To correct black hole thermodynamics, a modified dispersion relation can be used, which has been proposed as a general feature of quantum gravity approaches. In this paper, the thermodynamics of the four-dimensional Schwarzschild black hole is corrected using the modified dispersion relation for Fermions and Bosons. Finally, using the modified thermodynamics of the four-dimensional Schwarzschild black hole, a generalization of the Cardy–Verlinde formula is obtained. - Highlights: • The modified Cardy–Verlinde formula is obtained using the MDR for Fermions and Bosons. • The modified entropy of the black hole is used to correct the Cardy–Verlinde formula. • The modified entropy of the CFT has been obtained.

  5. A new clustering algorithm applicable to multispectral and polarimetric SAR images

    NASA Technical Reports Server (NTRS)

    Wong, Yiu-Fai; Posner, Edward C.

    1993-01-01

    We describe an application of a scale-space clustering algorithm to the classification of a multispectral and polarimetric SAR image of an agricultural site. After the initial polarimetric and radiometric calibration and noise cancellation, we extracted a 12-dimensional feature vector for each pixel from the scattering matrix. The clustering algorithm was able to partition a set of unlabeled feature vectors from 13 selected sites, each site corresponding to a distinct crop, into 13 clusters without any supervision. The cluster parameters were then used to classify the whole image. The classification map is much less noisy and more accurate than those obtained by hierarchical rules. Starting with every point as a cluster, the algorithm works by melting the system to produce a tree of clusters in the scale space. It can cluster data in any multidimensional space and is insensitive to variability in cluster densities, sizes and ellipsoidal shapes. This algorithm, more powerful than existing ones, may be useful for remote sensing for land use.

  6. Geometric features of workspace and joint-space paths of 3D reaching movements.

    PubMed

    Klein Breteler, M D; Meulenbroek, R G; Gielen, S C

    1998-11-01

    The present study focuses on geometric features of workspace and joint-space paths of three-dimensional reaching movements. Twelve subjects repeatedly performed a three-segment, triangular-shaped movement pattern in an approximately 60 degrees tilted horizontal plane. Task variables elicited movement patterns that varied in position, rotational direction and speed. Trunk, arm, hand and finger-tip movements were recorded by means of a 3D motion-tracking system. Angular excursions of the shoulder and elbow joints were extracted from position data. Analyses of the shape of 3D workspace and joint-space paths focused on the extent to which the submovements were produced in a plane, and on the curvature of the central parts of the submovements. A systematic tendency to produce movements in a plane was found in addition to an increase of finger-tip path curvature with increasing speed. The findings are discussed in relation to the role of optimization principles in trajectory-formation models.
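
    Planarity and curvature of sampled 3-D paths can be quantified with simple geometric measures; a sketch of two such measures (not necessarily the authors' exact definitions):

```python
import numpy as np

def menger_curvature(a, b, c):
    """Curvature of the circle through three 3-D points:
    4 * triangle area / product of the three side lengths."""
    ab, ac, bc = b - a, c - a, c - b
    area = 0.5 * np.linalg.norm(np.cross(ab, ac))
    return 4.0 * area / (np.linalg.norm(ab) * np.linalg.norm(ac) * np.linalg.norm(bc))

def planarity_residual(path):
    """RMS distance of path points from their best-fit plane, i.e. the
    smallest singular value of the centred point cloud (scaled)."""
    X = path - path.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    return s[-1] / np.sqrt(len(path))

# a radius-2 circular path lying exactly in a plane
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
path = np.stack([2.0 * np.cos(t), 2.0 * np.sin(t), np.zeros_like(t)], axis=1)
```

    For the circular path the planarity residual vanishes and the three-point (Menger) curvature equals 1/radius, which is the kind of quantity compared across movement speeds in the study.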

  7. Three-dimensional modeling of tea-shoots using images and models.

    PubMed

    Wang, Jian; Zeng, Xianyin; Liu, Jianbing

    2011-01-01

    In this paper, a method for three-dimensional modeling of tea shoots using images and calculation models is introduced. The process is as follows: the tea shoots are photographed with a camera; color space conversion is conducted; an improved algorithm based on color and regional growth is used to segment the tea shoots in the images, and their edges are extracted with edge detection. After that, the three-dimensional coordinates of the tea shoots are computed from the segmented images and the feature parameters are extracted; matching and calculation are conducted against the model database; and finally the three-dimensional model of the tea shoots is completed. According to the experimental results, this method avoids a large amount of calculation, gives better visual effects, and performs better in recovering the three-dimensional information of the tea shoots, thereby providing a new method for monitoring the growth of tea shoots and for their non-destructive testing.

  8. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

    We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of a traffic model, the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
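
    The diffusion-map step described above can be written in a few lines of numpy (the traffic-model specifics are omitted; the kernel bandwidth eps and the number of coarse variables are illustrative):

```python
import numpy as np

def diffusion_map(X, eps, k=2):
    """Minimal diffusion-map embedding: Gaussian affinities, symmetric
    normalisation of the random walk, leading non-trivial eigenvectors
    as the coarse-grained variables."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / eps)
    dinv = 1.0 / np.sqrt(W.sum(axis=1))
    S = dinv[:, None] * W * dinv[None, :]      # symmetric form of D^-1 W
    vals, vecs = np.linalg.eigh(S)             # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # the top eigenvalue is 1 (stationary mode); coordinates use the next k
    return vals, (dinv[:, None] * vecs)[:, 1:k + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                   # stand-in for system snapshots
vals, Y = diffusion_map(X, eps=10.0)
```

    Each row of Y gives the coarse coordinates of one snapshot; plotting Y over time traces the macroscopic trajectory in which bifurcations can be read off.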

  9. Higher Dimensional Spacetimes for Visualizing and Modeling Subluminal, Luminal and Superluminal Flight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froning, H. David; Meholic, Gregory V.

    2010-01-28

    This paper briefly explores higher dimensional spacetimes that extend Meholic's visualizable, fluidic views of: subluminal-luminal-superluminal flight; gravity, inertia, light quanta, and electromagnetism from 2-D to 3-D representations. Although 3-D representations have the potential to better model features of Meholic's most fundamental entities (Transluminal Energy Quantum) and of the zero-point quantum vacuum that pervades all space, the more complex 3-D representations lose some of the clarity of Meholic's 2-D representations of subluminal and superluminal realms. So, much new work would be needed to replace Meholic's 2-D views of reality with 3-D ones.

  10. Sublimation-Condensation of Multiscale Tellurium Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Brian J.; Johnson, Bradley R.; Schaef, Herbert T.

    This paper presents a simple technique for making tellurium (Te) nano- and microtubes of widely varying dimensions with Multi-Scale Processing (MSP). In this process, the Te metal is placed in a reaction vessel (e.g., borosilicate or fused quartz), the vessel is evacuated, and then sealed under vacuum with a torch. The vessel is heat-treated in a temperature gradient where a portion of the tube, which can also contain an additional substrate, is under a decreasing temperature gradient. Scanning and transmission electron microscopies have shown that multifaceted crystalline tubes have been formed extending from the nano- up to the micron-scale, with diameters ranging from 51.2 ± 5.9 to 1042 ± 134 nm between temperatures of 157 and 224 °C, respectively. One-dimensional tubular features are seen at lower temperatures, while three-dimensional features are seen at the higher temperatures. These features have been characterized with X-ray diffraction and found to be trigonal Te with space group P3121. Our results show that the MSP can adequately be described using a simple Arrhenius equation.

  11. Neural representations of emotion are organized around abstract event features.

    PubMed

    Skerry, Amy E; Saxe, Rebecca

    2015-08-03

    Research on emotion attribution has tended to focus on the perception of overt expressions of at most five or six basic emotions. However, our ability to identify others' emotional states is not limited to perception of these canonical expressions. Instead, we make fine-grained inferences about what others feel based on the situations they encounter, relying on knowledge of the eliciting conditions for different emotions. In the present research, we provide convergent behavioral and neural evidence concerning the representations underlying these concepts. First, we find that patterns of activity in mentalizing regions contain information about subtle emotional distinctions conveyed through verbal descriptions of eliciting situations. Second, we identify a space of abstract situation features that well captures the emotion discriminations subjects make behaviorally and show that this feature space outperforms competing models in capturing the similarity space of neural patterns in these regions. Together, the data suggest that our knowledge of others' emotions is abstract and high dimensional, that brain regions selective for mental state reasoning support relatively subtle distinctions between emotion concepts, and that the neural representations in these regions are not reducible to more primitive affective dimensions such as valence and arousal. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Neural Representations of Emotion Are Organized around Abstract Event Features

    PubMed Central

    Skerry, Amy E.; Saxe, Rebecca

    2016-01-01

    Research on emotion attribution has tended to focus on the perception of overt expressions of at most five or six basic emotions. However, our ability to identify others' emotional states is not limited to perception of these canonical expressions. Instead, we make fine-grained inferences about what others feel based on the situations they encounter, relying on knowledge of the eliciting conditions for different emotions. In the present research, we provide convergent behavioral and neural evidence concerning the representations underlying these concepts. First, we find that patterns of activity in mentalizing regions contain information about subtle emotional distinctions conveyed through verbal descriptions of eliciting situations. Second, we identify a space of abstract situation features that well captures the emotion discriminations subjects make behaviorally and show that this feature space outperforms competing models in capturing the similarity space of neural patterns in these regions. Together, the data suggest that our knowledge of others' emotions is abstract and high dimensional, that brain regions selective for mental state reasoning support relatively subtle distinctions between emotion concepts, and that the neural representations in these regions are not reducible to more primitive affective dimensions such as valence and arousal. PMID:26212878

  13. High-dimensional structured light coding/decoding for free-space optical communications free of obstructions.

    PubMed

    Du, Jing; Wang, Jian

    2015-11-01

    Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ)(l=0;±1;±2;…), where φ is the azimuthal angle and l corresponds to the topological number, are orthogonal with each other. This feature of Bessel beams provides a new dimension to code/decode data information on the OAM state of light, and the theoretical infinity of topological number enables possible high-dimensional structured light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams having the ability to recover by themselves in the face of obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate 12 m distance obstruction-free optical m-ary coding/decoding using visible Bessel beams in a free-space optical communication system. We also study the bit error rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER of hexadecimal coding/decoding is observed when the obstruction is placed along the propagation path of light.
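
    The abstract does not state the experiment's charge-to-symbol assignment; one natural hexadecimal mapping of data symbols onto OAM topological charges would be:

```python
def encode(symbols, m=16):
    """Map m-ary symbols 0..m-1 onto OAM topological charges, e.g.
    l in {-8, ..., 7} for hexadecimal coding (illustrative assignment)."""
    return [s - m // 2 for s in symbols]

def decode(charges, m=16):
    """Recover the m-ary symbols from the detected topological charges."""
    return [l + m // 2 for l in charges]
```

    Because Bessel beams with different topological charges are mutually orthogonal, each transmitted charge can in principle be identified unambiguously at the receiver, making the round trip lossless.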

  14. Fermion localization on a split brane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chumbes, A. E. R.; Vasquez, A. E. O.; Hott, M. B.

    2011-05-15

    In this work we analyze the localization of fermions on a brane embedded in five-dimensional, warped and nonwarped, space-time. In both cases we use the same nonlinear theoretical model with a nonpolynomial potential featuring a self-interacting scalar field whose minimum energy solution is a soliton (a kink) which can be continuously deformed into a two-kink. Thus a single brane splits into two branes. The behavior of spin 1/2 fermions wave functions on the split brane depends on the coupling of fermions to the scalar field and on the geometry of the space-time.

  15. A holographic model of the Kondo effect

    NASA Astrophysics Data System (ADS)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Wu, Jackson

    2013-12-01

    We propose a model of the Kondo effect based on the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence, also known as holography. The Kondo effect is the screening of a magnetic impurity coupled anti-ferromagnetically to a bath of conduction electrons at low temperatures. In a (1+1)-dimensional CFT description, the Kondo effect is a renormalization group flow triggered by a marginally relevant (0+1)-dimensional operator between two fixed points with the same Kac-Moody current algebra. In the large-N limit, with spin SU(N) and charge U(1) symmetries, the Kondo effect appears as a (0+1)-dimensional second-order mean-field transition in which the U(1) charge symmetry is spontaneously broken. Our holographic model, which combines the CFT and large-N descriptions, is a Chern-Simons gauge field in (2+1)-dimensional AdS space, AdS3, dual to the Kac-Moody current, coupled to a holographic superconductor along an AdS2 subspace. Our model exhibits several characteristic features of the Kondo effect, including a dynamically generated scale, a resistivity with power-law behavior in temperature at low temperatures, and a spectral flow producing a phase shift. Our holographic Kondo model may be useful for studying many open problems involving impurities, including for example the Kondo lattice problem.

  16. A series of three-dimensional lanthanide coordination polymers with rutile and unprecedented rutile-related topologies.

    PubMed

    Qin, Chao; Wang, Xin-Long; Wang, En-Bo; Su, Zhong-Min

    2005-10-03

    The complexes of formulas Ln(pydc)(Hpydc) (Ln = Sm (1), Eu (2), Gd (3); H2pydc = pyridine-2,5-dicarboxylic acid) and Ln(pydc)(bc)(H2O) (Ln = Sm (4), Gd (5); Hbc = benzenecarboxylic acid) have been synthesized under hydrothermal conditions and characterized by elemental analysis, IR, TG analysis, and single-crystal X-ray diffraction. Compounds 1-3 are isomorphous and crystallize in the orthorhombic system, space group Pbcn. Their final three-dimensional racemic frameworks can be considered as being constructed by helix-linked scalelike sheets. Compounds 4 and 5 are isostructural and crystallize in the monoclinic system, space group P2(1)/c. pydc ligands bridge dinuclear lanthanide centers to form the three-dimensional frameworks featuring hexagonal channels along the a-axis that are occupied by one-end-coordinated bc ligands. From the topological point of view, the five three-dimensional nets are binodal with six- and three-connected nodes, the former of which exhibit a rutile-related (4.6(2))(2)(4(2).6(9).8(4)) topology that is unprecedented within coordination frames, and the latter two species display a distorted rutile (4.6(2))(2)(4(2).6(10).8(3)) topology. Furthermore, the luminescent properties of 2 were studied.

  17. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
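
    The Poisson log-likelihood at the heart of the claimed equivalence can be written down directly; a toy sketch with simulated data, assuming an exponential nonlinearity:

```python
import numpy as np

def poisson_loglik(X, y, w, dt=0.01):
    """Log-likelihood of binned spike counts y under an LNP model with
    linear filter w and exponential nonlinearity, rate = exp(X @ w);
    the log(y!) term is dropped since it does not depend on w."""
    rate = np.exp(X @ w) * dt
    return float((y * np.log(rate) - rate).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 4))              # white-noise stimulus
w_true = np.array([1.0, 0.0, 0.0, 0.0])      # true stimulus filter
y = rng.poisson(np.exp(X @ w_true) * 0.01)   # simulated spike counts
w_other = np.array([0.0, 1.0, 0.0, 0.0])     # a mismatched filter
```

    Maximizing this log-likelihood over the filter is, by the paper's argument, equivalent to maximizing empirical single-spike information, which is why the true filter scores higher than a mismatched one.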

  18. Research on the development of space target detecting system and three-dimensional reconstruction technology

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wei, Zhen; Song, Dawei; Sun, Wenfeng; Fan, Xiaoyan

    2016-11-01

    With the development of space technology, the number of spacecraft and debris is increasing year by year. The demand for detection and identification of spacecraft is growing strongly, which supports the cataloguing, crash warning and protection of aerospace vehicles. The majority of existing approaches to three-dimensional reconstruction are based on scattering-centre correlation using the radar high resolution range profile (HRRP). This paper proposes a novel method to reconstruct the three-dimensional scattering-centre structure of a target from a sequence of radar ISAR images, which mainly consists of three steps. The first is the azimuth scaling of consecutive ISAR images based on the fractional Fourier transform (FrFT). The second is the extraction of scattering centres and matching between adjacent ISAR images using a grid method. Finally, according to the coordinate matrix of the scattering centres, the three-dimensional scattering-centre structure is reconstructed using an improved factorization method. The three-dimensional structure has stable and intuitive characteristics, which provides a new way to improve the identification probability and reduce the complexity of the model matching library. A satellite model is reconstructed using the proposed method from four consecutive ISAR images. The simulation results prove that the method achieves satisfactory consistency and accuracy.
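
    The factorization step relies on a classical property (in the style of Tomasi-Kanade): under orthographic projection, the row-centred measurement matrix built from tracked scattering centres has rank at most three. A synthetic check:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 12))                # 12 scattering centres in 3-D
frames = []
for _ in range(5):                          # 5 viewing geometries (ISAR images)
    R = np.linalg.qr(rng.normal(size=(3, 3)))[0][:2]  # orthographic 2x3 camera
    frames.append(R @ S)
W = np.vstack(frames)                       # 2F x P measurement matrix
W -= W.mean(axis=1, keepdims=True)          # centre each row
sv = np.linalg.svd(W, compute_uv=False)     # rank <= 3: structure is recoverable
```

    The rank-3 SVD of the centred matrix separates, up to an ambiguity resolved by metric constraints, the camera geometry from the 3-D scattering-centre coordinates.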

  19. High-level intuitive features (HLIFs) for intuitive skin lesion description.

    PubMed

    Amelard, Robert; Glaister, Jeffrey; Wong, Alexander; Clausi, David A

    2015-03-01

    A set of high-level intuitive features (HLIFs) is proposed to quantitatively describe melanoma in standard camera images. Melanoma is the deadliest form of skin cancer. With rising incidence rates and subjectivity in current clinical detection methods, there is a need for melanoma decision support systems. Feature extraction is a critical step in melanoma decision support systems. Existing feature sets for analyzing standard camera images are comprised of low-level features, which exist in high-dimensional feature spaces and limit the system's ability to convey intuitive diagnostic rationale. The proposed HLIFs were designed to model the ABCD criteria commonly used by dermatologists such that each HLIF represents a human-observable characteristic. As such, intuitive diagnostic rationale can be conveyed to the user. Experimental results show that concatenating the proposed HLIFs with a full low-level feature set increased classification accuracy, and that HLIFs were able to separate the data better than low-level features with statistical significance. An example of a graphical interface for providing intuitive rationale is given.

  20. Three-dimensional spectral analysis of compositional heterogeneity at Arruntia crater on (4) Vesta using Dawn FC

    NASA Astrophysics Data System (ADS)

    Thangjam, Guneshwar; Nathues, Andreas; Mengel, Kurt; Schäfer, Michael; Hoffmann, Martin; Cloutis, Edward A.; Mann, Paul; Müller, Christian; Platz, Thomas; Schäfer, Tanja

    2016-03-01

    We introduce an innovative three-dimensional spectral approach (three-band parameter space with polyhedrons) that can be used for both qualitative and quantitative analyses, improving the characterization of the surface compositional heterogeneity of (4) Vesta. It is an advanced and more robust methodology compared to the standard two-dimensional spectral approach (two-band parameter space). The Dawn Framing Camera (FC) color data obtained during the High Altitude Mapping Orbit (resolution ∼60 m/pixel) are used. The main focus is on the howardite-eucrite-diogenite (HED) lithologies containing carbonaceous chondritic material, olivine, and impact melt. Archived spectra of HEDs and their mixtures from the RELAB, HOSERLab and USGS databases, as well as our laboratory-measured spectra, are used for this study. Three-dimensional convex polyhedrons are defined using computed band parameter values of the laboratory spectra. Polyhedrons based on the parameters of Band Tilt (R0.92μm/R0.96μm), Mid Ratio ((R0.75μm/R0.83μm)/(R0.83μm/R0.92μm)) and reflectance at 0.55 μm (R0.55μm) are chosen for the present analysis. An algorithm in the IDL programming language is employed to assign FC data points to the respective polyhedrons. The Arruntia region in the northern hemisphere of Vesta is selected as a case study because of its geological and mineralogical importance. We observe that this region is eucrite-dominated howarditic in composition. The extent of olivine-rich exposures within an area of 2.5 crater radii is ∼12% larger than the previous finding (Thangjam, G. et al. [2014]. Meteorit. Planet. Sci. 49, 1831-1850). Lithologies of nearly pure CM2-chondrite, olivine, glass, and diogenite are not found in this region. Although there are no unambiguous spectral features of impact melt, the investigation of morphological features using FC clear filter data from the Low Altitude Mapping Orbit (resolution ∼18 m/pixel) suggests potential impact-melt features inside and outside of the crater.
Our spectral approach can be extended to the entire Vestan surface to study the heterogeneous surface composition and its geology.
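
    The polyhedron-assignment step can be sketched with a generic convex-hull membership test. The band-parameter ranges below are hypothetical, not the paper's calibrated laboratory polyhedrons; scipy's Delaunay triangulation provides the point-in-convex-polyhedron query:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(2)

    # Hypothetical laboratory band-parameter values (Band Tilt, Mid Ratio,
    # R0.55um) for one lithology class; in the paper these come from
    # RELAB/HOSERLab/USGS and laboratory spectra.
    lab_params = rng.uniform([0.95, 0.98, 0.10], [1.05, 1.02, 0.30], size=(40, 3))

    # Convex polyhedron of the class in the three-band parameter space.
    hull = Delaunay(lab_params)

    def assign(points):
        """True where a pixel's three band parameters fall inside the polyhedron
        (find_simplex returns -1 for points outside the triangulated hull)."""
        return hull.find_simplex(points) >= 0
    ```

    Per-pixel FC band parameters would then be tested against one such polyhedron per lithology class, which is the role the IDL algorithm plays in the paper.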

  1. Electrostatic protection of the Solar Power Satellite and rectenna

    NASA Technical Reports Server (NTRS)

    Freeman, J. W.; Few, A. A., Jr.; Reiff, P. H.; Cooke, D.; Bohannon, J.; Haymes, B.

    1979-01-01

    Several features of the interactions of the solar power satellite (SPS) with its space environment were examined theoretically. The voltages produced at various surfaces due to space plasmas and the plasma leakage currents through the kapton and sapphire solar cell blankets were calculated. At geosynchronous orbit, this parasitic power loss is only 0.7%, and is easily compensated by oversizing. At low-Earth orbit, the power loss is potentially much larger (3%), and anomalous arcing is expected for the EOTV high voltage negative surfaces. Preliminary results of a three dimensional self-consistent plasma and electric field computer program are presented, confirming the validity of the predictions made from the one dimensional models. Magnetic shielding of the satellite, to reduce the power drain and to protect the solar cells from energetic electron and plasma ion bombardment is considered. It is concluded that minor modifications can allow the SPS to operate safely and efficiently in its space environment. The SPS design employed in this study is the 1978 MSFC baseline design utilizing GaAs solar cells at CR-2 and an aluminum structure.

  2. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrast. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  3. Multi-perspective analysis and spatiotemporal mapping of air pollution monitoring data.

    PubMed

    Kolovos, Alexander; Skupin, André; Jerrett, Michael; Christakos, George

    2010-09-01

    Space-time data analysis and assimilation techniques in atmospheric sciences typically consider input from monitoring measurements. The input is often processed in a manner that acknowledges characteristics of the measurements (e.g., underlying patterns, fluctuation features) under conditions of uncertainty; it also leads to the derivation of secondary information that serves study-oriented goals, and provides input to space-time prediction techniques. We present a novel approach that blends a rigorous space-time prediction model (Bayesian maximum entropy, BME) with a cognitively informed visualization of high-dimensional data (spatialization). The combined BME and spatialization approach (BME-S) is used to study monthly averaged NO2 and mean annual SO4 measurements in California over the 15-year period 1988-2002. Using the original scattered measurements of these two pollutants BME generates spatiotemporal predictions on a regular grid across the state. Subsequently, the prediction network undergoes the spatialization transformation into a lower-dimensional geometric representation, aimed at revealing patterns and relationships that exist within the input data. The proposed BME-S provides a powerful spatiotemporal framework to study a variety of air pollution data sources.

  4. Detection of relationships among multi-modal brain imaging meta-features via information flow.

    PubMed

    Miller, Robyn L; Vergara, Victor M; Calhoun, Vince D

    2018-01-15

    Neuroscientists and clinical researchers are awash in data from an ever-growing number of imaging and other bio-behavioral modalities. This flow of brain imaging data, taken under resting and various task conditions, combines with available cognitive measures, behavioral information, genetic data plus other potentially salient biomedical and environmental information to create a rich but diffuse data landscape. The conditions being studied with brain imaging data are often extremely complex and it is common for researchers to employ more than one imaging, behavioral or biological data modality (e.g., genetics) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features or modalities. We propose an intuitive framework based on conditional probabilities for understanding information exchange between features in what we are calling a feature meta-space; that is, a space consisting of many individual feature spaces. Features can have any dimension and can be drawn from any data source or modality. No a priori assumptions are made about the functional form (e.g., linear, polynomial, exponential) of captured inter-feature relationships. We demonstrate the framework's ability to identify relationships between disparate features of varying dimensionality by applying it to a large multi-site, multi-modal clinical dataset balanced between schizophrenia patients and controls. In our application it exposes both expected (previously observed) relationships and novel relationships rarely investigated by clinical researchers. To the best of our knowledge there is not presently a comparably efficient way to capture relationships of indeterminate functional form between features of arbitrary dimension and type. We are introducing this method as an initial foray into a space that remains relatively underpopulated.
The framework we propose is powerful, intuitive and very efficiently provides a high-level overview of a massive data space. In our application it exposes both expected relationships and relationships very rarely considered worth investigating by clinical researchers. Copyright © 2017 Elsevier B.V. All rights reserved.
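
    A minimal illustration of capturing a relationship of unspecified functional form from probabilities alone: a histogram-based mutual-information estimate built from the joint and marginal distributions of two features. The "features" here are synthetic scalars, not the study's imaging data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def mutual_information(x, y, bins=8):
        """Histogram estimate of I(X;Y) in bits. No linearity assumption
        enters anywhere, so dependence of any functional form is captured."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal P(x)
        py = pxy.sum(axis=0, keepdims=True)   # marginal P(y)
        mask = pxy > 0
        return float(np.sum(pxy[mask] * np.log2((pxy / (px * py))[mask])))

    # One feature nonlinearly related to x, one independent of it.
    x = rng.normal(size=20000)
    related = np.cos(x) + 0.1 * rng.normal(size=x.size)
    unrelated = rng.normal(size=x.size)
    ```

    A correlation coefficient would score the cosine relationship near zero; the probabilistic estimate ranks it far above the independent feature, which is the property the meta-space framework relies on.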

  5. Machine Learning for Big Data: A Study to Understand Limits at Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Del-Castillo-Negrete, Carlos Emilio

    This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and to train on larger samples worth it? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when it is trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results.
We conclude with the following observations: (i) the accuracy of the learning algorithm increases with sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning; (ii) the richness of the feature space dictates performance, both accuracy and training time; (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the effort of feature computation and training; (iv) the accuracy of the learning algorithms drops significantly for multi-class learners training on the same feature matrix; and (v) learning algorithms perform well when the categories in the labeled data are independent (i.e., no relationship or hierarchy exists among categories).
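
    The saturation effect in observation (i) is easy to reproduce with a toy learner. The sketch below uses a nearest-centroid classifier on synthetic Gaussian classes; it is purely illustrative and not the study's algorithms or data:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Two Gaussian classes in a 10-dimensional feature space.
    D = 10
    mu0, mu1 = np.zeros(D), np.full(D, 0.8)
    X_test = np.vstack([rng.normal(mu0, 1.0, (500, D)),
                        rng.normal(mu1, 1.0, (500, D))])
    y_test = np.repeat([0, 1], 500)

    def accuracy(n_train):
        """Train a nearest-centroid classifier on n_train samples per class."""
        c0 = rng.normal(mu0, 1.0, (n_train, D)).mean(axis=0)
        c1 = rng.normal(mu1, 1.0, (n_train, D)).mean(axis=0)
        pred = (np.linalg.norm(X_test - c1, axis=1)
                < np.linalg.norm(X_test - c0, axis=1)).astype(int)
        return (pred == y_test).mean()

    # Accuracy rises with sample size but saturates: averaged over repeats,
    # the jump from 50 to 5000 samples is far smaller than from 3 to 50.
    accs = [np.mean([accuracy(n) for _ in range(20)]) for n in (3, 50, 5000)]
    ```

    The centroid estimates converge at rate 1/sqrt(n), so once they are accurate relative to the class separation, additional samples cannot move the decision boundary meaningfully, which is the saturation the report describes.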

  6. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.

    PubMed

    Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin

    2016-05-01

    Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow, from diagnosis and patient stratification to therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency of scanning high-dimensional parametric spaces, and the need for representative image features, which otherwise require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number (billions) of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, our system learns sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary.
Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state of the art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.

  7. Multiparticle dynamics in the E-phi tracking code ESME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James A. MacLachlan

    2002-06-21

    ESME has developed over a twenty year period from its origins as a program for modeling rf gymnastics to a rather general facility for that fraction of beam dynamics of synchrotrons and storage rings which can be properly treated in the two dimensional longitudinal phase space. The features of this program which serve particularly for multiparticle calculations are described, some underlying principles are noted, and illustrative results are given.

  8. Multiparticle Dynamics in the E-φ Tracking Code ESME

    NASA Astrophysics Data System (ADS)

    MacLachlan, James A.

    2002-12-01

    ESME has developed over a twenty year period from its origins as a program for modeling rf gymnastics to a rather general facility for that fraction of beam dynamics of synchrotrons and storage rings which can be properly treated in the two dimensional longitudinal phase space. The features of this program which serve particularly for multiparticle calculations are described, some underlying principles are noted, and illustrative results are given.

  9. Thermal Non-Equilibrium Flows in Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Zeng, Yanni

    2016-01-01

    We study the equations describing the motion of a thermal non-equilibrium gas in three space dimensions. It is a hyperbolic system of six equations with a relaxation term. The dissipation mechanism induced by the relaxation is weak in the sense that the Shizuta-Kawashima criterion is violated. This implies that a perturbation of a constant equilibrium state consists of two parts: one decays in time while the other stays. In fact, the entropy wave grows weakly along the particle path as the process is irreversible. We study thermal properties related to the well-posedness of the nonlinear system. We also obtain a detailed pointwise estimate on the Green's function for the Cauchy problem when the system is linearized around an equilibrium constant state. The Green's function provides a complete picture of the wave pattern, with an exact and explicit leading term. Comparing with existing results for one dimensional flows, our results reveal a new feature of three dimensional flows: not only does the entropy wave not decay, but the velocity also contains a non-decaying part, strongly coupled with its decaying one. The new feature is supported by the second order approximation via the Chapman-Enskog expansions, which are the Navier-Stokes equations with vanished shear viscosity and heat conductivity.

  10. Regional three-dimensional seismic velocity model of the crust and uppermost mantle of northern California

    USGS Publications Warehouse

    Thurber, C.; Zhang, H.; Brocher, T.; Langenheim, V.

    2009-01-01

    We present a three-dimensional (3D) tomographic model of the P wave velocity (Vp) structure of northern California. We employed a regional-scale double-difference tomography algorithm that incorporates a finite-difference travel time calculator and spatial smoothing constraints. Arrival times from earthquakes and travel times from controlled-source explosions, recorded at network and/or temporary stations, were inverted for Vp on a 3D grid with horizontal node spacing of 10 to 20 km and vertical node spacing of 3 to 8 km. Our model provides an unprecedented, comprehensive view of the regional-scale structure of northern California, putting many previously identified features into a broader regional context and improving the resolution of a number of them and revealing a number of new features, especially in the middle and lower crust, that have never before been reported. Examples of the former include the complex subducting Gorda slab, a steep, deeply penetrating fault beneath the Sacramento River Delta, crustal low-velocity zones beneath Geysers-Clear Lake and Long Valley, and the high-velocity ophiolite body underlying the Great Valley. Examples of the latter include mid-crustal low-velocity zones beneath Mount Shasta and north of Lake Tahoe. Copyright 2009 by the American Geophysical Union.

  11. 3D Object Classification Based on Thermal and Visible Imagery in Urban Area

    NASA Astrophysics Data System (ADS)

    Hasani, H.; Samadzadegan, F.

    2015-12-01

    The spatial distribution of land cover in urban areas, especially 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Owing to recent advances in sensor technologies, several types of remotely sensed data are available for the same area. Data fusion has been widely investigated for integrating different sources of data in the classification of urban areas. Thermal infrared imagery (TIR) contains information on emitted radiation and has unique radiometric properties. However, due to the coarse spatial resolution of thermal data, its application has been restricted in urban areas. On the other hand, visible imagery (VIS) has high spatial resolution and information in the visible spectrum. Consequently, there is a complementary relation between thermal and visible imagery in the classification of urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery for classification of an urban area. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space including the visible bands, thermal hyperspectral bands, and spatial and texture features; moreover, a Principal Component Analysis (PCA) transformation is applied to extract principal components. Due to the high dimensionality of the feature space, a dimension reduction method is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The obtained results show that using thermal imagery along with visible imagery improves the classification accuracy by up to 8% with respect to classification of the visible image alone.
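
    The fusion-then-reduction pipeline can be sketched generically: stack the bands into one hybrid feature matrix, then keep the principal components covering most of the variance. The band counts and data below are made up for illustration; the paper's actual hybrid space also includes spatial and texture features:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical per-pixel features: 3 visible bands plus 40 resampled
    # thermal hyperspectral bands that are largely redundant with them.
    n_pix = 1000
    visible = rng.normal(size=(n_pix, 3))
    thermal = visible @ rng.normal(size=(3, 40)) + 0.05 * rng.normal(size=(n_pix, 40))
    X = np.hstack([visible, thermal])             # 43-D hybrid feature space

    # PCA via SVD on the centred matrix; keep components covering 99% variance.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(var), 0.99)) + 1
    X_reduced = Xc @ Vt[:k].T                     # reduced space fed to the SVM
    ```

    Because the simulated thermal bands are near-linear functions of the visible bands, a handful of components suffices, which is exactly why a dimension reduction step precedes the SVM in the paper.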

  12. Efficient and robust computation of PDF features from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2009-10-01

    We present a method for the estimation of various features of the tissue micro-architecture using diffusion magnetic resonance imaging. The considered features are designed from the displacement probability density function (PDF). The estimation is based on two steps: first, the approximation of the signal by a series expansion of Gaussian-Laguerre and Spherical Harmonic functions, followed by a projection onto a finite-dimensional space. In addition, we tackle the problem of robustness to the Rician noise corrupting in-vivo acquisitions. Our feature estimation is expressed as a variational minimization process, leading to a variational framework which is robust to noise. This approach is very flexible regarding the number of samples and enables the computation of a large set of features of the local tissue structure. We demonstrate the effectiveness of the method with results on both a synthetic phantom and real MR datasets acquired in a clinical time-frame.

  13. CAFÉ-Map: Context Aware Feature Mapping for mining high dimensional biomedical data.

    PubMed

    Minhas, Fayyaz Ul Amir Afsar; Asif, Amina; Arif, Muhammad

    2016-12-01

    Feature selection and ranking is of great importance in the analysis of biomedical data. In addition to reducing the number of features used in classification or other machine learning tasks, it allows us to extract meaningful biological and medical information from a machine learning model. Most existing approaches in this domain do not directly model the fact that the relative importance of features can be different in different regions of the feature space. In this work, we present a context aware feature ranking algorithm called CAFÉ-Map. CAFÉ-Map is a locally linear feature ranking framework that allows recognition of important features in any given region of the feature space or for any individual example. This allows for simultaneous classification and feature ranking in an interpretable manner. We have benchmarked CAFÉ-Map on a number of toy and real world biomedical data sets. Our comparative study with a number of published methods shows that CAFÉ-Map achieves better accuracies on these data sets. The top ranking features obtained through CAFÉ-Map in a gene profiling study correlate very well with the importance of different genes reported in the literature. Furthermore, CAFÉ-Map provides a more in-depth analysis of feature ranking at the level of individual examples. CAFÉ-Map Python code is available at: http://faculty.pieas.edu.pk/fayyaz/software.html#cafemap . The CAFÉ-Map package supports parallelization and sparse data and provides example scripts for classification. This code can be used to reconstruct the results given in this paper. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. A novel weld seam detection method for space weld seam of narrow butt joint in laser welding

    NASA Astrophysics Data System (ADS)

    Shao, Wen Jun; Huang, Yu; Zhang, Yong

    2018-02-01

    Structured-light measurement is widely used for weld seam detection owing to its high measurement precision and robustness. However, there is nearly no geometrical deformation of a stripe projected onto a weld face whose seam width is less than 0.1 mm and which has no misalignment, so it is very difficult to ensure exact retrieval of the seam feature. This issue has become pressing as laser welding of butt joints of thin metal plates is widely applied. Moreover, measuring the seam width, the seam centre and the normal vector of the weld face at the same time during the welding process is of great importance to welding quality, but is rarely reported. Consequently, a seam measurement method based on a vision sensor for the space weld seam of a narrow butt joint is proposed in this article. Three laser stripes with different wavelengths are projected onto the weldment: two red laser stripes are designed and used to measure the three-dimensional profile of the weld face by the principle of optical triangulation, and the third, green, laser stripe is used as the light source to measure the edge and the centerline of the seam by the principle of passive vision sensing. A corresponding image processing algorithm is proposed to extract the centerlines of the red laser stripes as well as the seam feature. All three laser stripes are captured and processed in a single image so that the three-dimensional position of the space weld seam can be obtained simultaneously. Finally, experimental results reveal that the proposed method can meet the precision demands of space narrow butt joints.
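
    The optical-triangulation principle behind the red stripes reduces to intersecting a pixel ray with the known laser plane. The geometry below (focal length, baseline, sheet angle) is assumed for illustration and is not the authors' calibration:

    ```python
    import numpy as np

    # Assumed camera/laser geometry: camera at the origin looking along +z,
    # laser sheet given by the plane x = b - z*tan(alpha).
    f = 1000.0                   # focal length in pixels (assumed)
    b = 0.1                      # camera-laser baseline in metres (assumed)
    alpha = np.deg2rad(30.0)     # laser sheet angle (assumed)

    def triangulate(u, v):
        """3-D point of the stripe imaged at pixel offset (u, v) from the
        principal point: the pixel ray (x, y, z) = z*(u/f, v/f, 1) meets
        the laser plane where z*u/f = b - z*tan(alpha)."""
        z = b / (u / f + np.tan(alpha))
        return np.array([z * u / f, z * v / f, z])

    p = triangulate(120.0, -40.0)   # one stripe pixel -> one weld-face point
    ```

    Repeating this for every pixel along a stripe yields the 3-D profile of the weld face, from which the surface normal reported in the paper can be fitted.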

  15. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.

  16. Entanglement by Path Identity.

    PubMed

    Krenn, Mario; Hochrainer, Armin; Lahiri, Mayukh; Zeilinger, Anton

    2017-02-24

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional systems. The two ingredients are (i) superposition of photon pairs with different origins and (ii) aligning photons such that their paths are identical. We explain the experimentally feasible creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces-starting only from nonentangled photon pairs. For two photons, arbitrary high-dimensional entanglement can be created. The idea of generating entanglement by path identity could also apply to quantum entities other than photons. We discovered the technique by analyzing the output of a computer algorithm. This shows that computer designed quantum experiments can be inspirations for new techniques.

  17. A face and palmprint recognition approach based on discriminant DCT feature extraction.

    PubMed

    Jing, Xiao-Yuan; Zhang, David

    2004-12-01

    In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. From the selected bands, it then extracts linear discriminative features by an improved Fisherface method and performs classification with the nearest neighbor classifier. We analyze the theoretical advantages of our approach to feature extraction in detail. Experiments on face databases and a palmprint database demonstrate that, compared to state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.
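
    The band-selection idea rests on the 2-D DCT being an orthonormal transform: any subset of coefficients is an energy-preserving, lower-dimensional description. A minimal sketch (synthetic stand-in image, fixed low-frequency band rather than the paper's separability-judged bands):

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(6)

    # A stand-in 32x32 "face" image: smooth gradient plus noise.
    img = rng.normal(size=(32, 32)) + 5.0 * np.outer(np.linspace(0, 1, 32),
                                                     np.linspace(0, 1, 32))

    C = dctn(img, norm='ortho')          # 2-D DCT coefficients

    # Keep one low-frequency band (the kind of band a separability judgment
    # would select) and discard the rest.
    band = np.zeros_like(C)
    band[:8, :8] = C[:8, :8]
    approx = idctn(band, norm='ortho')   # image reconstructed from that band
    ```

    With `norm='ortho'` the transform is orthonormal, so the energy of the kept band equals the energy of the reconstruction; discrimination can then be learned in the 64-dimensional band instead of the 1024-dimensional image.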

  18. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.

    PubMed

    Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
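
    One member of such a KNN-based ensemble can be sketched in a few lines: score each query by its mean distance to the k nearest points of a nominal training subset (synthetic data; the paper's full model adds the DAE embedding and averages several such detectors):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Nominal sample near the origin, plus two queries: one nominal-looking
    # point and one obvious anomaly.
    X_train = rng.normal(size=(300, 5))
    queries = np.vstack([rng.normal(size=(1, 5)),
                         np.full((1, 5), 8.0)])

    def knn_score(q, X, k=5):
        """Anomaly score = mean distance to the k nearest training points."""
        d = np.linalg.norm(X - q, axis=1)
        return np.sort(d)[:k].mean()

    scores = [knn_score(q, X_train) for q in queries]
    ```

    Running the same scorer on several random subsets and combining the scores gives the ensemble; the DAE's role is to shrink the space first so these distances remain meaningful in high dimension.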

  19. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    PubMed Central

    Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity. PMID:29270197

  20. Multidimensional brain activity dictated by winner-take-all mechanisms.

    PubMed

    Tozzi, Arturo; Peters, James F

    2018-06-21

    A novel demon-based architecture is introduced to elucidate brain functions such as pattern recognition during human perception and mental interpretation of visual scenes. Starting from the topological concepts of invariance and persistence, we introduce a Selfridge pandemonium variant of brain activity that takes into account a novel feature, namely, demons that recognize short straight-line segments, curved lines and scene shapes, such as shape interior, density and texture. Low-level representations of objects can be mapped to higher-level views (our mental interpretations): a series of transformations can be gradually applied to a pattern in a visual scene, without affecting its invariant properties. This makes it possible to construct a symbolic multi-dimensional representation of the environment. These representations can be projected continuously to an object that we have seen and continue to see, thanks to the mapping from shapes in our memory to shapes in Euclidean space. Although perceived shapes are 3-dimensional (plus time), the evaluation of shape features (volume, color, contour, closeness, texture, and so on) leads to n-dimensional brain landscapes. Here we discuss the advantages of our parallel, hierarchical model in pattern recognition, computer vision and biological nervous system's evolution. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Quasiclassical analysis of Bloch oscillations in non-Hermitian tight-binding lattices

    NASA Astrophysics Data System (ADS)

    Graefe, E. M.; Korsch, H. J.; Rush, A.

    2016-07-01

    Many features of Bloch oscillations in one-dimensional quantum lattices with a static force can be described by quasiclassical considerations, for example by means of the acceleration theorem, at least for Hermitian systems. Here the quasiclassical approach is extended to non-Hermitian lattices, which are of increasing interest. The analysis is based on a generalised non-Hermitian phase-space dynamics developed recently. Applications to a single-band tight-binding system demonstrate that many features of the quantum dynamics can be understood from this classical description qualitatively and even quantitatively. Two non-Hermitian and PT-symmetric examples are studied, a Hatano-Nelson lattice with real coupling constants and a system with purely imaginary couplings, both for initially localised states in space or in momentum. It is shown that the time evolution of the norm of the wave packet and the expectation values of position and momentum can be described in a classical picture.

  2. Data driven analysis of rain events: feature extraction, clustering, microphysical /macro physical relationship

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    The study of rain time series records is mainly carried out using rainfall rate or rain accumulation parameters estimated over a fixed integration time (typically 1 min, 1 hour or 1 day). In this study we use the concept of a rain event. Indeed, the discrete and intermittent nature of rain processes makes some features inadequate when defined over a fixed duration: long integration times (hour, day) mix rainy and clear-air periods in the same sample, while short integration times (seconds, minutes) lead to noisy data that are highly sensitive to detector characteristics. Analysing whole rain events instead of individual short samples of fixed duration clarifies relationships between features, in particular between macrophysical and microphysical ones. This approach suppresses the intra-event variability partly due to measurement uncertainties and allows focusing on physical processes. An algorithm based on a Genetic Algorithm (GA) and Self-Organising Maps (SOM) is developed to obtain a parsimonious characterisation of rain events using a minimal set of variables. The use of a self-organizing map is justified by the fact that it maps a high-dimensional data space onto a two-dimensional space, in an unsupervised way, while preserving the topology of the initial space as much as possible. The obtained SOM reveals the dependencies between variables and consequently allows redundant variables to be removed, leading to a minimal subset of only five features (the event duration, the rain rate peak, the rain event depth, the event rain rate standard deviation and the absolute rain rate variation of order 0.5). To confirm the relevance of the five selected features, the corresponding SOM is analyzed. This analysis clearly shows the existence of relationships between features. It also shows the independence of the inter-event time (IETp) feature and the weak dependence of the dry percentage in event (Dd%e) feature. This confirms that a rain time series can be considered as an alternation of independent rain events and no-rain periods. The five selected features are used to perform a hierarchical clustering of the events. The well-known division between stratiform and convective events appears clearly. This two-class classification is then refined into 5 fairly homogeneous subclasses. The data-driven analysis performed on whole rain events instead of fixed-length samples identifies strong relationships between macrophysical features (based on rain rate) and microphysical features (based on raindrops). We show that some of the 5 identified subclasses have specific microphysical characteristics. Obtaining information on the microphysical characteristics of rainfall events from rain gauge measurements has many implications for quantitative precipitation estimation (QPE) and for improving rain rate retrieval algorithms in the remote sensing context.
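    A minimal self-organizing map of the kind used above, mapping 5-D event features onto a 2-D grid while approximately preserving topology, can be sketched as follows (the data, feature names, and all parameters are illustrative toys, not the authors' configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

# 300 synthetic "rain events", each described by 5 standardized features
# (stand-ins for duration, rain-rate peak, depth, rain-rate standard
# deviation, and absolute rain-rate variation).
X = rng.normal(size=(300, 5))

# A 6x6 SOM: one 5-D prototype vector per map node.
grid = np.array([(i, j) for i in range(6) for j in range(6)], dtype=float)
W = 0.1 * rng.normal(size=(36, 5))

for t in range(2000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
    sigma = 3.0 * np.exp(-t / 1000)                 # shrinking neighborhood
    lr = 0.5 * np.exp(-t / 1000)                    # decaying learning rate
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)                  # pull neighborhood toward x

# Each event is summarized by the 2-D coordinate of its best-matching unit,
# the topology-preserving projection described in the abstract.
coords = grid[[int(np.argmin(((W - x) ** 2).sum(axis=1))) for x in X]]
quant_err = np.mean(np.min(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1))
```

    On the trained map, dependencies between features show up as smooth gradients of prototype components across the grid, which is what makes redundant variables visible.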

  3. Interacting vector fields in relativity without relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Edward; Barbour, Julian

    2002-06-01

    Barbour, Foster and Ó Murchadha have recently developed a new framework, called here the 3-space approach, for the formulation of classical bosonic dynamics. Neither time nor a locally Minkowskian structure of spacetime is presupposed. Both arise as emergent features of the world from geodesic-type dynamics on a space of three-dimensional metric-matter configurations. In fact gravity, the universal light-cone and Abelian gauge theory minimally coupled to gravity all arise naturally through a single common mechanism. It yields relativity - and more - without presupposing relativity. This paper completes the recovery of the presently known bosonic sector within the 3-space approach. We show, for a rather general ansatz, that 3-vector fields can interact among themselves only as Yang-Mills fields minimally coupled to gravity.

  4. The Application of Support Vector Machine (svm) Using Cielab Color Model, Color Intensity and Color Constancy as Features for Ortho Image Classification of Benthic Habitats in Hinatuan, Surigao del Sur, Philippines

    NASA Astrophysics Data System (ADS)

    Cubillas, J. E.; Japitana, M.

    2016-06-01

    This study demonstrates the application of CIELAB, color intensity, and one-dimensional scalar constancy as features for image recognition, classifying benthic habitats in orthoimages of the coastal areas of Hinatuan, Surigao del Sur, Philippines. The study area comprises four datasets, namely: (a) Blk66L005, (b) Blk66L021, (c) Blk66L024, and (d) Blk66L0114. SVM optimization was performed in Matlab® with the Parallel Computing Toolbox to speed up the SVM computation. The image used for collecting samples for the SVM procedure was Blk66L0114, from which a total of 134,516 sample objects of mangrove, possible coral existence with rocks, sand, sea, fish pens and sea grasses were collected and processed. The collected samples were then used as training sets for the supervised learning algorithm and for the creation of class definitions. The learned hyperplanes separating one class from another in the multi-dimensional feature space can be thought of as a super feature, which was then used to develop the C (classifier) rule set in eCognition® software. The classification of the sampling site yielded an accuracy of 98.85%, which confirms the reliability of the remote sensing techniques and analysis applied to orthophotos (CIELAB, color intensity, and one-dimensional scalar constancy) together with the SVM classification algorithm for classifying benthic habitats.
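    The SVM step can be illustrated with a generic linear SVM trained by sub-gradient descent on the hinge loss; this is a simplified stand-in for the Matlab toolbox optimization used in the study, on synthetic color-like features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the benthic-habitat problem: two well-separated classes
# in a 3-D "color feature" space (features and data are synthetic).
X = np.vstack([rng.normal(-1.0, 0.3, size=(100, 3)),
               rng.normal(+1.0, 0.3, size=(100, 3))])
y = np.array([-1] * 100 + [+1] * 100)

# Linear SVM trained by sub-gradient descent on the regularized hinge loss.
w, b, lam, lr = np.zeros(3), 0.0, 1e-2, 0.01
for epoch in range(200):
    for i in rng.permutation(len(X)):
        if y[i] * (X[i] @ w + b) < 1:            # point inside the margin
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:
            w -= lr * lam * w

train_acc = np.mean(np.sign(X @ w + b) == y)     # high on this separable toy set
```

    The learned hyperplane (w, b) is the "super feature" in the sense of the abstract: its signed distance summarizes all input features in one discriminative scalar per class pair.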

  5. Pattern recognition invariant under changes of scale and orientation

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1997-08-01

    We have used a modified method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance; each target, over out-of-plane orientations spanning 360 degrees around a vertical axis, corresponds to a line in a multi-dimensional feature space. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.

  6. Magnetospheric Multiscale Observation of Plasma Velocity-Space Cascade: Hermite Representation and Theory.

    PubMed

    Servidio, S; Chasapis, A; Matthaeus, W H; Perrone, D; Valentini, F; Parashar, T N; Veltri, P; Gershman, D; Russell, C T; Giles, B; Fuselier, S A; Phan, T D; Burch, J

    2017-11-17

    Plasma turbulence is investigated using unprecedented high-resolution ion velocity distribution measurements by the Magnetospheric Multiscale mission (MMS) in the Earth's magnetosheath. This novel observation of a highly structured particle distribution suggests a cascadelike process in velocity space. Complex velocity space structure is investigated using a three-dimensional Hermite transform, revealing, for the first time in observational data, a power-law distribution of moments. In analogy to hydrodynamics, a Kolmogorov approach leads directly to a range of predictions for this phase-space transport. The scaling theory is found to be in agreement with observations. The combined use of state-of-the-art MMS data sets, novel implementation of a Hermite transform method, and scaling theory of the velocity cascade opens new pathways to the understanding of plasma turbulence and the crucial velocity space features that lead to dissipation in plasmas.
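    A one-dimensional version of the Hermite decomposition used above can be sketched as follows: the velocity distribution is projected onto probabilists' Hermite polynomials, and the coefficient spectrum quantifies departures from a Maxwellian. The beam parameters below are illustrative, not MMS data:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Toy 1-D velocity distribution: a Maxwellian plus a weak beam, standing
# in for the structured distributions measured in the magnetosheath.
v = np.linspace(-8, 8, 4001)
dv = v[1] - v[0]
f = np.exp(-v**2 / 2) + 0.05 * np.exp(-(v - 2.0)**2 / (2 * 0.3**2))
f /= f.sum() * dv                                # normalize to unit density

def hermite_coeff(m):
    """Projection of f onto the probabilists' Hermite polynomial He_m."""
    He_m = hermeval(v, [0] * m + [1])
    return (f * He_m).sum() * dv / math.factorial(m)

p = np.array([hermite_coeff(m) for m in range(11)])
# p[0] is the density (= 1 after normalization); the beam feeds free energy
# into higher orders, whose odd coefficients would vanish for a pure
# symmetric Maxwellian.
```

    In the 3-D observational analysis the same projection is taken along each velocity axis, and the power-law behaviour appears in the decay of the squared coefficients with Hermite order.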

  7. Quantitative analysis of eyes and other optical systems in linear optics.

    PubMed

    Harris, William F; Evans, Tanya; van Gool, Radboud D

    2017-05-01

    To show that 14-dimensional spaces of augmented point P and angle Q characteristics, matrices obtained from the ray transference, are suitable for quantitative analysis, although only the latter define an inner-product space and only on it can one define distances and angles. The paper examines the nature of the spaces and their relationships to other spaces including symmetric dioptric power space. The paper makes use of linear optics, a three-dimensional generalization of Gaussian optics. Symmetric 2 × 2 dioptric power matrices F define a three-dimensional inner-product space which provides a sound basis for quantitative analysis (calculation of changes, arithmetic means, etc.) of refractive errors and thin systems. For general systems the optical character is defined by the dimensionally-heterogeneous 4 × 4 symplectic matrix S, the transference, or, if explicit allowance is made for heterocentricity, the 5 × 5 augmented symplectic matrix T. Ordinary quantitative analysis cannot be performed on them because matrices of neither of these types constitute vector spaces. Suitable transformations have been proposed, but because the transforms are dimensionally heterogeneous the spaces are not naturally inner-product spaces. The paper obtains 14-dimensional spaces of augmented point P and angle Q characteristics. The 14-dimensional space defined by the augmented angle characteristics Q is dimensionally homogeneous and an inner-product space. A 10-dimensional subspace of the space of augmented point characteristics P is also an inner-product space. The spaces are suitable for quantitative analysis of the optical character of eyes and many other systems. Distances and angles can be defined in the inner-product spaces. The optical systems may have multiple separated astigmatic and decentred refracting elements. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.

  8. Anisotropic fractal media by vector calculus in non-integer dimensional space

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus to anisotropic fractal materials and media.

  9. Aesthetics by Numbers: Links between Perceived Texture Qualities and Computed Visual Texture Properties.

    PubMed

    Jacobs, Richard H A H; Haak, Koen V; Thumfart, Stefan; Renken, Remco; Henson, Brian; Cornelissen, Frans W

    2016-01-01

    Our world is filled with texture. For the human visual system, this is an important source of information for assessing environmental and material properties. Indeed-and presumably for this reason-the human visual system has regions dedicated to processing textures. Despite their abundance and apparent relevance, and despite long-standing indications of such relationships, only recently have the relationships between texture features and high-level judgments captured the interest of mainstream science. In this study, we explore such relationships, as these might be used to predict perceived texture qualities. This is relevant not only from a psychological/neuroscience perspective, but also for more applied fields such as design, architecture, and the visual arts. In two separate experiments, observers judged various qualities of visual textures such as beauty, roughness, naturalness, elegance, and complexity. Based on factor analysis, we find that in both experiments ~75% of the variability in the judgments could be explained by a two-dimensional space, with axes closely aligned to the beauty and roughness judgments. That a two-dimensional judgment space suffices to capture most of the variability in the perceived texture qualities suggests that observers use a relatively limited set of internal scales on which to base various judgments, including aesthetic ones. Finally, for both of these judgments, we determined the relationship with a large number of texture features computed for each of the texture stimuli. We find that the presence of lower spatial frequencies, oblique orientations, higher intensity variation, higher saturation, and redness correlates with higher beauty ratings. Features that captured image intensity and uniformity correlated with roughness ratings. Therefore, a number of computational texture features are predictive of these judgments. This suggests that perceived texture qualities, including aesthetic appreciation, are sufficiently universal to be predicted with reasonable accuracy from the computed feature content of the textures.

  10. Aesthetics by Numbers: Links between Perceived Texture Qualities and Computed Visual Texture Properties

    PubMed Central

    Jacobs, Richard H. A. H.; Haak, Koen V.; Thumfart, Stefan; Renken, Remco; Henson, Brian; Cornelissen, Frans W.

    2016-01-01

    Our world is filled with texture. For the human visual system, this is an important source of information for assessing environmental and material properties. Indeed—and presumably for this reason—the human visual system has regions dedicated to processing textures. Despite their abundance and apparent relevance, and despite long-standing indications of such relationships, only recently have the relationships between texture features and high-level judgments captured the interest of mainstream science. In this study, we explore such relationships, as these might be used to predict perceived texture qualities. This is relevant not only from a psychological/neuroscience perspective, but also for more applied fields such as design, architecture, and the visual arts. In two separate experiments, observers judged various qualities of visual textures such as beauty, roughness, naturalness, elegance, and complexity. Based on factor analysis, we find that in both experiments ~75% of the variability in the judgments could be explained by a two-dimensional space, with axes closely aligned to the beauty and roughness judgments. That a two-dimensional judgment space suffices to capture most of the variability in the perceived texture qualities suggests that observers use a relatively limited set of internal scales on which to base various judgments, including aesthetic ones. Finally, for both of these judgments, we determined the relationship with a large number of texture features computed for each of the texture stimuli. We find that the presence of lower spatial frequencies, oblique orientations, higher intensity variation, higher saturation, and redness correlates with higher beauty ratings. Features that captured image intensity and uniformity correlated with roughness ratings. Therefore, a number of computational texture features are predictive of these judgments. This suggests that perceived texture qualities—including aesthetic appreciation—are sufficiently universal to be predicted with reasonable accuracy from the computed feature content of the textures. PMID:27493628

  11. Dialect Distance Assessment Based on 2-Dimensional Pitch Slope Features and Kullback Leibler Divergence

    DTIC Science & Technology

    2009-04-08

    ...to changes on input data is quantified. It is also shown in a perceptual evaluation that the presented objective approach of dialect distance... of Arabic dialects are discussed. We also show the repeatability of the presented measure, and its correlation with human perception. Conclusions are... in the strict sense of metric spaces. Human perception tests indicate that prosodic cues, including pitch movements...

  12. Detector Design Considerations in High-Dimensional Artificial Immune Systems

    DTIC Science & Technology

    2012-03-22

    ...a method known as randomized RNS [15]. In this approach, Monte Carlo integration is used to determine the size of self and non-self within the given... feature space, then a number of randomly placed detectors are chosen according to Monte Carlo integration calculations. Simulated annealing is then... detector is only counted once). This value is termed 'actual content' because it does not include overlapping content, but only that content that is...

  13. Multiview Locally Linear Embedding for Effective Medical Image Retrieval

    PubMed Central

    Shen, Hualei; Tao, Dacheng; Ma, Dianfu

    2013-01-01

    Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to mitigate the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
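    The LLE criterion that MLLE preserves per feature space can be illustrated by its first step: computing the weights that best reconstruct each point from its nearest neighbors. The sketch below is a generic illustration on synthetic data, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# 50 points near a smooth 1-D curve embedded in 5-D (synthetic stand-ins
# for concatenated image features).
t = np.sort(rng.uniform(0, 3, 50))
X = np.stack([np.cos(t), np.sin(t), t, t ** 2, 0.1 * rng.normal(size=50)], axis=1)

def lle_weights(X, i, k=5, reg=1e-3):
    """LLE step 1: weights that best reconstruct X[i] from its k neighbors."""
    d = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(d)[1:k + 1]               # skip the point itself
    Z = X[nbrs] - X[i]                          # neighbors centered on X[i]
    G = Z @ Z.T                                 # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)          # regularize (G is near-singular)
    w = np.linalg.solve(G, np.ones(k))
    return nbrs, w / w.sum()                    # constrain weights to sum to 1

nbrs, w = lle_weights(X, 10)
err = np.linalg.norm(w @ X[nbrs] - X[10])       # small: data are locally linear
```

    MLLE computes such weights separately in each feature space and then learns one shared low-dimensional embedding consistent with all of them.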

  14. Analytic study of solutions for a (3 + 1) -dimensional generalized KP equation

    NASA Astrophysics Data System (ADS)

    Gao, Hui; Cheng, Wenguang; Xu, Tianzhou; Wang, Gangwei

    2018-03-01

    The (3 + 1)-dimensional generalized KP (gKP) equation is an important nonlinear partial differential equation in theoretical and mathematical physics which can be used to describe nonlinear wave motion. Through the Hirota bilinear method, one-soliton, two-soliton and N-soliton solutions are derived via symbolic computation. Two classes of lump solutions, rationally localized in all directions in space, to the dimensionally reduced cases in (2 + 1) dimensions, are constructed by using a direct method based on the Hirota bilinear form of the equation. This implies that we can derive the lump solutions of the reduced gKP equation from positive quadratic function solutions to the aforementioned bilinear equation. Meanwhile, we obtain interaction solutions between a lump and a kink of the gKP equation. The lump emerges from the kink and is swallowed by it as time evolves. This work offers a possibility to enrich the variety of dynamical features of solutions of higher-dimensional nonlinear evolution equations.

  15. Dust Storm Feature Identification and Tracking from 4D Simulation Data

    NASA Astrophysics Data System (ADS)

    Yu, M.; Yang, C. P.

    2016-12-01

    Dust storms cause significant damage to health, property and the environment worldwide every year. To help mitigate the damage, dust forecasting models simulate and predict upcoming dust events, providing valuable information to scientists, decision makers, and the public. Normally, the model simulations are conducted in four dimensions (i.e., latitude, longitude, elevation and time) and represent the three-dimensional (3D), spatially heterogeneous features of a storm and its evolution over space and time. This research investigates and proposes an automatic multi-threshold, region-growing based identification algorithm to identify critical dust storm features and track the evolution of dust storm events through space and time. In addition, a spatiotemporal data model is proposed which can support the characterization and representation of dust storm events and their dynamic patterns. Quantitative and qualitative evaluations of the algorithm are conducted to test its sensitivity and its capability to identify and track dust storm events. This study has the potential to support better early warning for decision-makers and the public, thus making hazard mitigation plans more effective.
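    For the identification step, a single threshold level of a region-growing pass can be sketched as a flood fill that collects connected cells above the threshold. This is a toy 2-D stand-in for the 4-D multi-threshold algorithm described above; the grid values are invented:

```python
from collections import deque

# Toy "dust concentration" grid; region growing from a seed cell keeps all
# connected cells at or above a threshold (one level of the multi-threshold
# scheme).
grid = [
    [0, 1, 5, 6, 0],
    [0, 4, 7, 5, 0],
    [0, 0, 3, 0, 0],
    [9, 0, 0, 0, 8],
]

def grow(grid, seed, thresh):
    rows, cols = len(grid), len(grid[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region and grid[nr][nc] >= thresh):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

storm = grow(grid, seed=(1, 2), thresh=3)
# Connected high-concentration cells only; the isolated 9 and 8 are excluded.
```

    Repeating the fill at several thresholds yields nested regions (core vs. fringe of a storm), and matching regions across time steps gives the tracking described in the abstract.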

  16. AdS3 to dS3 transition in the near horizon of asymptotically de Sitter solutions

    NASA Astrophysics Data System (ADS)

    Sadeghian, S.; Vahidinia, M. H.

    2017-08-01

    We consider two solutions of Einstein-Λ theory which admit the extremal vanishing horizon (EVH) limit, odd-dimensional multispinning Kerr black hole (in the presence of cosmological constant) and cosmological soliton. We show that the near horizon EVH geometry of Kerr has a three-dimensional maximally symmetric subspace whose curvature depends on rotational parameters and the cosmological constant. In the Kerr-dS case, this subspace interpolates between AdS3 , three-dimensional flat and dS3 by varying rotational parameters, while the near horizon of the EVH cosmological soliton always has a dS3 . The feature of the EVH cosmological soliton is that it is regular everywhere on the horizon. In the near EVH case, these three-dimensional parts turn into the corresponding locally maximally symmetric spacetimes with a horizon: Kerr-dS3 , flat space cosmology or BTZ black hole. We show that their thermodynamics match with the thermodynamics of the original near EVH black holes. We also briefly discuss the holographic two-dimensional CFT dual to the near horizon of EVH solutions.

  17. Comparison of the effectiveness of alternative feature sets in shape retrieval of multicomponent images

    NASA Astrophysics Data System (ADS)

    Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.

    2001-01-01

    Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.
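    One of the simplest global shape features of the kind compared here can be computed directly from image moments. The sketch below (a generic illustration, not the ARTISAN implementation) contrasts Hu's first moment invariant for a disc and a square:

```python
import numpy as np

# Two binary shapes on a 100x100 grid: a filled disc and a filled square.
yy, xx = np.mgrid[0:100, 0:100]
disc = ((xx - 50) ** 2 + (yy - 50) ** 2 <= 30 ** 2).astype(float)
square = ((np.abs(xx - 50) <= 26) & (np.abs(yy - 50) <= 26)).astype(float)

def hu1(img):
    """Hu's first moment invariant: scale- and rotation-invariant."""
    m00 = img.sum()                              # area (zeroth moment)
    cx = (img * xx).sum() / m00                  # centroid
    cy = (img * yy).sum() / m00
    mu20 = (img * (xx - cx) ** 2).sum()          # central second moments
    mu02 = (img * (yy - cy) ** 2).sum()
    return (mu20 + mu02) / m00 ** 2              # normalized: scale-free

# A disc minimizes hu1 (analytically 1/(2*pi), about 0.159); a square is
# slightly larger (1/6, about 0.167), so even this one number separates
# the two shapes regardless of their size or orientation.
```

    Higher-order combinations of the same normalized central moments yield the full set of moment invariants evaluated in the paper.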

  18. Comparison of the effectiveness of alternative feature sets in shape retrieval of multicomponent images

    NASA Astrophysics Data System (ADS)

    Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.

    2000-12-01

    Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.

  19. An advanced scanning method for space-borne hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Yue-ming; Lang, Jun-Wei; Wang, Jian-Yu; Jiang, Zi-Qing

    2011-08-01

    Space-borne hyper-spectral imagery is an important means for studies and applications in earth science. High cost efficiency can be achieved through optimized system design. In this paper, an advanced scanning method is proposed which helps implement an imaging system with both high temporal and high spatial resolution. The revisit frequency and effective working time of space-borne hyper-spectral imagers can be greatly improved by adopting a two-axis scanning system if spatial resolution and radiometric accuracy are not harshly demanded. In order to avoid the quality degradation caused by image rotation, an idea of two-axis rotation is presented based on the analysis and simulation of the two-dimensional scanning motion path and its features. Further improvement of the imagers' detection ability under conditions of small solar altitude angle and low surface reflectance can be realized by ground motion compensation on the pitch axis. The structure and control performance are also described. An intelligent integration of two-dimensional scanning and image motion compensation is elaborated in this paper. With this technology, sun-synchronous hyper-spectral imagers are able to pay quick visits to hot spots, acquiring hyper-spectral images with both high spatial and high temporal resolution, which enables rapid response to emergencies. The result has reference value for developing operational space-borne hyper-spectral imagers.

  20. Microstructured block copolymer surfaces for control of microbe capture and aggregation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Ryan R; Shubert, Katherine R; Morrell, Jennifer L.

    2014-01-01

    The capture and arrangement of surface-associated microbes is influenced by biochemical and physical properties of the substrate. In this report, we develop lectin-functionalized substrates containing patterned, three-dimensional polymeric structures of varied shapes and densities and use these to investigate the effects of topology and spatial confinement on lectin-mediated microbe capture. Films of poly(glycidyl methacrylate)-block-4,4-dimethyl-2-vinylazlactone (PGMA-b-PVDMA) were patterned on silicon surfaces into line or square grid patterns with 5 μm wide features and varied edge spacing. The patterned films had three-dimensional geometries with 900 nm film thickness. After surface functionalization with wheat germ agglutinin, the size of the Pseudomonas fluorescens aggregates captured was dependent on the pattern dimensions. Line patterns with edge spacing of 5 μm or less led to the capture of individual microbes with minimal formation of aggregates, while grid patterns with the same spacing also captured individual microbes with a further reduction in aggregation. In both geometries, aggregate size distributions increased with increased edge spacing. These engineered surfaces combine spatial confinement with affinity-based microbe capture based on exopolysaccharide content to control the degree of microbe aggregation, and can also be used as a platform to investigate intercellular interactions and biofilm formation in microbial populations of controlled sizes.

  1. Trading spaces: building three-dimensional nets from two-dimensional tilings

    PubMed Central

    Castle, Toen; Evans, Myfanwy E.; Hyde, Stephen T.; Ramsden, Stuart; Robins, Vanessa

    2012-01-01

    We construct some examples of finite and infinite crystalline three-dimensional nets derived from symmetric reticulations of homogeneous two-dimensional spaces: elliptic (S2), Euclidean (E2) and hyperbolic (H2) space. Those reticulations are edges and vertices of simple spherical, planar and hyperbolic tilings. We show that various projections of the simplest symmetric tilings of those spaces into three-dimensional Euclidean space lead to topologically and geometrically complex patterns, including multiple interwoven nets and tangled nets that are otherwise difficult to generate ab initio in three dimensions. PMID:24098839

  2. A kinetic model of plasma turbulence

    NASA Astrophysics Data System (ADS)

    Servidio, S.; Valentini, F.; Perrone, D.; Greco, A.; Califano, F.; Matthaeus, W. H.; Veltri, P.

    2015-01-01

    A Hybrid Vlasov-Maxwell (HVM) model is presented and recent results about the link between kinetic effects and turbulence are reviewed. Using five-dimensional (2D in physical space and 3D in velocity space) simulations of plasma turbulence, it is found that kinetic (non-fluid) effects manifest through the deformation of the proton velocity distribution function (DF), with patterns of non-Maxwellian features concentrated near regions of strong magnetic gradients. The direction of the proper temperature anisotropy, calculated in the main reference frame of the distribution itself, has a finite probability of being along or across the ambient magnetic field, in general agreement with the classical definition of anisotropy T⊥/T∥ (where the subscripts refer to the magnetic field direction). Adopting the latter conventional definition, by varying the global plasma beta (β) and fluctuation level, the simulations explore distinct regions of the space given by T⊥/T∥ and β∥, recovering solar wind observations. Moreover, as in the solar wind, HVM simulations suggest that proton anisotropy is associated not only with magnetic intermittent events but also with gradient-type structures in the flow and in the density. The role of alpha particles is reviewed using multi-ion kinetic simulations, revealing a similarity between proton and helium non-Maxwellian effects. The techniques presented here are applied to 1D spacecraft-like analysis, establishing a link between non-fluid phenomena and solar wind magnetic discontinuities. Finally, the dimensionality of turbulence is investigated, for the first time, via 6D HVM simulations (3D in both spaces). These preliminary results provide support for several previously reported studies based on 2.5D simulations, confirming several basic conclusions. 
This connection between kinetic features and turbulence opens a new path for the study of processes such as heating, particle acceleration, and temperature anisotropy, commonly observed in space plasmas.
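For reference, the proton kinetics advanced by hybrid Vlasov-Maxwell codes of this type can be summarized by the Vlasov equation below (a standard sketch in normalized units; in the hybrid approach the electrons enter only as a fluid through a generalized Ohm's law, which is not shown):

```latex
% Proton Vlasov equation (normalized units); f = f(x, v, t) is the
% proton velocity distribution function evolved by the HVM model.
\frac{\partial f}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
  + \left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f = 0
```

The anisotropy T⊥/T∥ discussed above is then obtained from the second velocity moments of f, evaluated perpendicular and parallel to the local magnetic field B.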

  3. S2PLOT: Three-dimensional (3D) Plotting Library

    NASA Astrophysics Data System (ADS)

    Barnes, D. G.; Fluke, C. J.; Bourke, P. D.; Parry, O. T.

    2011-03-01

    We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - S2PLOT - is written in C and can be used by C, C++ and FORTRAN programs on GNU/Linux and Apple/OSX systems. S2PLOT draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a PGPLOT inspired interface, S2PLOT provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The S2PLOT architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce S2PLOT to the astronomical community, describe its potential applications, and present some example uses of the library.

  4. An Advanced, Three-Dimensional Plotting Library for Astronomy

    NASA Astrophysics Data System (ADS)

    Barnes, David G.; Fluke, Christopher J.; Bourke, Paul D.; Parry, Owen T.

    2006-07-01

    We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - s2plot - is written in c and can be used by c, c++, and fortran programs on GNU/Linux and Apple/OSX systems. s2plot draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a pgplot-inspired interface, s2plot provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The s2plot architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce s2plot to the astronomical community, describe its potential applications, and present some example uses of the library.

  5. Probing RNA Native Conformational Ensembles with Structural Constraints.

    PubMed

    Fonseca, Rasmus; van den Bedem, Henry; Bernauer, Julie

    2016-05-01

    Noncoding ribonucleic acids (RNA) play a critical role in a wide variety of cellular processes, ranging from regulating gene expression to post-translational modification and protein synthesis. Their activity is modulated by highly dynamic exchanges between three-dimensional conformational substates, which are difficult to characterize experimentally and computationally. Here, we present an innovative, entirely kinematic computational procedure to efficiently explore the native ensemble of RNA molecules. Our procedure projects degrees of freedom onto a subspace of conformation space defined by distance constraints in the tertiary structure. The dimensionality reduction enables efficient exploration of conformational space. We show that the conformational distributions obtained with our method broadly sample the conformational landscape observed in NMR experiments. Compared to normal mode analysis-based exploration, our procedure diffuses faster through the experimental ensemble while also accessing conformational substates to greater precision. Our results suggest that conformational sampling with a highly reduced but fully atomistic representation of noncoding RNA expresses key features of their dynamic nature.
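The projection onto a constraint-defined subspace described above can be illustrated with a generic null-space projection (a minimal sketch, not the authors' exact kinematic procedure; the function and variable names are ours): a proposed change of the degrees of freedom is projected onto the null space of the constraint Jacobian, so the distance constraints are preserved to first order.

```python
import numpy as np

def nullspace_step(J, g, tol=1e-10):
    """Project a proposed change g of the degrees of freedom onto the
    null space of the constraint Jacobian J, so that the constraints
    are preserved to first order (J @ step ~ 0)."""
    U, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > tol))
    N = Vt[rank:].T                 # columns span null(J)
    return N @ (N.T @ g)

# Toy example: one squared-distance constraint between two 2-D points,
# c(x) = |p1 - p0|^2; the single row of J is dc/dx.
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
J = np.array([np.concatenate([2 * (p0 - p1), 2 * (p1 - p0)])])  # shape (1, 4)
step = nullspace_step(J, np.array([1.0, 1.0, 1.0, 1.0]))
# J @ step is ~0: the step does not change the constrained distance
# to first order.
```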

  6. Investigation of growth features in several hydraulic fractures

    NASA Astrophysics Data System (ADS)

    Bykov, Alexander; Galybin, Alexander; Evdokimov, Alexander; Zavialova, Natalia; Zavialov, Ivan; Negodiaev, Sergey; Perepechkin, Ilia

    2017-04-01

    In this paper we simulate the growth of three or more interacting hydraulic fractures in a horizontal well with a cross flow of fluid between them. The dynamics of the cracks is calculated in three-dimensional space, while the motion of the fracturing fluid with proppant is computed in two-dimensional space (the flow is averaged across the crack aperture). To determine the hydraulic pipe resistance coefficient we used a generalization of the Reynolds number for fluids with power-law rheology and the generalization of the von Kármán equation due to Dodge and Metzner. The calculations showed that in a homogeneous medium the first crack develops faster than the rest. During steady loading the outer cracks pinch the inner cracks, and it was shown that in the extreme case only the first and last fractures develop. It is also possible to choose parameters at which the two developing outer cracks pinch the central one in the horizontal direction; in this case, the central crack may grow in the vertical direction.
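For context, the generalized Reynolds number for a power-law fluid (consistency K, flow index n) and the Dodge-Metzner generalization of the von Kármán friction-factor equation are commonly written as follows (standard textbook forms, shown here for orientation; the paper's exact formulation may differ):

```latex
% Metzner-Reed generalized Reynolds number for power-law rheology
\mathrm{Re}' = \frac{\rho\, V^{2-n} D^{n}}
                    {8^{\,n-1} K \left(\dfrac{3n+1}{4n}\right)^{n}},
\qquad
% Dodge-Metzner friction-factor correlation (turbulent pipe flow)
\frac{1}{\sqrt{f}} = \frac{4.0}{n^{0.75}}
  \log_{10}\!\left[\mathrm{Re}'\, f^{\,1-n/2}\right] - \frac{0.4}{n^{1.2}}
```

For n = 1 these reduce to the ordinary Reynolds number and the classical von Kármán equation for Newtonian fluids.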

  7. Dissipative N-point-vortex Models in the Plane

    NASA Astrophysics Data System (ADS)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE (2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ3. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.
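For reference, the planar N-point-vortex system alluded to above has the standard Hamiltonian structure and SE(2)-related invariants below (a textbook sketch, not the paper's dissipative construction):

```latex
% Hamiltonian of N point vortices with circulations \Gamma_i in the plane
H = -\frac{1}{4\pi} \sum_{i<j} \Gamma_i \Gamma_j
      \ln \lvert \mathbf{x}_i - \mathbf{x}_j \rvert,
\qquad
\Gamma_i \dot{x}_i = \frac{\partial H}{\partial y_i}, \quad
\Gamma_i \dot{y}_i = -\frac{\partial H}{\partial x_i}
```

```latex
% Invariants from SE(2) symmetry: linear impulse (translations)
% and angular impulse (rotations)
Q = \sum_i \Gamma_i x_i, \qquad
P = \sum_i \Gamma_i y_i, \qquad
I = \sum_i \Gamma_i \left(x_i^2 + y_i^2\right)
```

The method described above constructs dissipative fields that decrease H while conserving the level sets of Q, P and I.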

  8. Dental Space Deficiency Syndrome: An Anthropological Perspective.

    PubMed

    Richman, Colin S

    2017-03-01

    A new syndrome in dentistry, the dental space deficiency syndrome is proposed in this article. Signs and symptoms of this entity may include one or more of the following clinical dental features: tooth crowding, gingival recession, tooth impactions, rapid resorption of facial alveolar bony plates following premature tooth loss, dentally oriented sleep disorders, extended orthodontic treatment time, and malocclusion relapse following orthodontic therapy. These oral conditions, individually or collectively, seem to be associated with both genetic and functional factors. From an anthropological-functional perspective, the human jaws (basal bone and/or alveolar bone) have been shrinking. This results in a three-dimensional discrepancy between jawbone and tooth volumes, which are genetically determined. Consequently, the reduced volume of alveolar bone is not adequately able to accommodate the associated genetically determined dentition in functional and esthetic harmony. This paper describes the common etiology for the conditions listed above, namely the discrepancy between alveolar bone volume (essentially determined by functionality), and associated tooth volume (essentially determined by genetics), when considered in a three-dimensional perspective.

  9. A pepper-pot emittance meter for low-energy heavy-ion beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kremers, H. R.; Beijers, J. P. M.; Brandenburg, S.

    2013-02-15

    A novel emittance meter has been developed to measure the four-dimensional, transverse phase-space distribution of a low-energy ion beam using the pepper-pot technique. A characteristic feature of this instrument is that the pepper-pot plate, which has a linear array of holes in the vertical direction, is scanned horizontally through the ion beam. This has the advantage that the emittance can also be measured at locations along the beam line where the beam has a large horizontal divergence. A set of multi-channel plates, a scintillation screen, and a CCD camera is used as a position-sensitive ion detector, allowing a large range of beam intensities to be handled. This paper describes the design, construction, and operation of the instrument as well as the data analysis used to reconstruct the four-dimensional phase-space distribution of an ion beam. Measurements on a 15 keV He+ beam are used as an example.

  10. Optical filter for highlighting spectral features part I: design and development of the filter for discrimination of human skin with and without an application of cosmetic foundation.

    PubMed

    Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki

    2011-03-28

    Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object are often barely detectable in spectra. Conventional trichromatic systems, moreover, cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace that highlights an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundation on the human face. Results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.

  11. Median filtering detection using variation of neighboring line pairs for image forensics

    NASA Astrophysics Data System (ADS)

    Rhee, Kang Hyeon

    2016-09-01

    Attention to tampering by median filtering (MF) has recently increased in digital image forensics. For MF detection (MFD), this paper presents a feature vector extracted from two kinds of variation between neighboring line pairs in the row and column directions. One variation is defined by the gradient difference of the intensity values between neighboring line pairs, and the other by the coefficient difference of the Fourier transform (FT) between neighboring line pairs. The resulting 19-dimensional feature vector is composed of two parts: 9 dimensions extracted from the spatial domain of an image and 10 from its frequency domain. The feature vector is used to train a support vector machine classifier for MFD in altered images. In the measured performance, the area under the receiver operating characteristic curve (AUC) formed by the sensitivity (PTP: true positive rate) and 1-specificity (PFP: false positive rate) is above 0.985 and the classification ratios are above 0.979. Pe (the minimal average decision error) ranges from 0 to 0.024, and PTP at PFP = 0.01 ranges from 0.965 to 0.996. Since the AUC is above 0.9, the proposed variation-based MF detection method is graded as "Excellent (A)".
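As a minimal illustration of the kind of neighboring-line-pair statistics described above (not the paper's exact 19-dimensional construction; the function name and the three summary statistics are ours), one can measure spatial- and frequency-domain variation between adjacent rows and columns of an image:

```python
import numpy as np

def neighboring_line_variation(img):
    """Toy feature vector from variation between neighboring line pairs:
    two spatial-domain statistics (intensity differences between adjacent
    rows and columns) and one frequency-domain statistic (FFT-coefficient
    differences between adjacent rows). Median filtering tends to smooth
    an image, reducing these variations."""
    img = np.asarray(img, dtype=float)
    row_var = np.abs(np.diff(img, axis=0)).mean()   # adjacent row pairs
    col_var = np.abs(np.diff(img, axis=1)).mean()   # adjacent column pairs
    F = np.abs(np.fft.fft(img, axis=1))             # per-row spectra
    freq_var = np.abs(np.diff(F, axis=0)).mean()    # adjacent spectral rows
    return np.array([row_var, col_var, freq_var])

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(32, 32))
feat = neighboring_line_variation(noisy)
```

In the paper's setting, many such statistics are concatenated into a 19-dimensional vector and fed to an SVM classifier.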

  12. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    PubMed

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, many difficulties remain because sufficient protein structural and functional information is lacking. It is therefore highly desirable to develop methods for predicting PPIs from amino acid sequences alone. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with redundancy in the sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict PPIs in the yeast Saccharomyces cerevisiae from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space by exploiting the sparsity of the original signal. What makes compressed sensing attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than is usually considered necessary under traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional discrete protein models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.
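A generic compressed-sensing sketch (illustrative only, not the authors' algorithm; the dimensions and the simple orthogonal-matching-pursuit recovery routine are ours): a sparse high-dimensional feature vector is compressed by a random measurement matrix into far fewer coordinates, from which a sparse approximation can be recovered.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k = 400, 60, 5            # ambient dim, measurements, sparsity
x = np.zeros(d)                 # stand-in sparse "sequence feature" vector
x[rng.choice(d, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, d)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                  # compressed representation

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)   # sparse recovery from m << d measurements
```

With enough incoherent random measurements relative to the sparsity, recovery of the support is typically exact, which is the property the abstract appeals to.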

  13. Fractional-dimensional Child-Langmuir law for a rough cathode

    NASA Astrophysics Data System (ADS)

    Zubair, M.; Ang, L. K.

    2016-07-01

    This work presents a self-consistent model of space charge limited current transport in a gap combined of free-space and fractional-dimensional space (Fα), where α is the fractional dimension in the range 0 < α ≤ 1. In this approach, a closed-form fractional-dimensional generalization of Child-Langmuir (CL) law is derived in classical regime which is then used to model the effect of cathode surface roughness in a vacuum diode by replacing the rough cathode with a smooth cathode placed in a layer of effective fractional-dimensional space. Smooth transition of CL law from the fractional-dimensional to integer-dimensional space is also demonstrated. The model has been validated by comparing results with an experiment.
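For orientation, the classical integer-dimensional Child-Langmuir law that the paper generalizes gives the space-charge-limited current density in a planar vacuum gap of spacing D at voltage V (standard form; the fractional-dimensional result reduces to it at α = 1):

```latex
% Classical Child-Langmuir law for a planar vacuum diode:
% e, m are the electron charge and mass, \varepsilon_0 the permittivity
J_{\mathrm{CL}} = \frac{4\,\varepsilon_0}{9}
  \sqrt{\frac{2e}{m}}\;\frac{V^{3/2}}{D^{2}}
```

In the paper's model, cathode roughness is absorbed into an effective layer of fractional dimension α, modifying this scaling.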

  14. Vibration and stress analysis of soft-bonded shuttle insulation tiles. Modal analysis with compact widely spaced stringers

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Austin, F.; Levy, A.

    1974-01-01

    An efficient iterative procedure is described for the vibration and modal stress analysis of the reusable surface insulation (RSI) of multi-tiled space shuttle panels. The method, which is quite general, is rapidly convergent and highly useful for this application. A user-oriented computer program based upon this procedure, titled RESIST (REusable Surface Insulation STresses), has been prepared for the analysis of compact, widely spaced, stringer-stiffened panels. RESIST, which uses finite element methods, obtains three-dimensional tile stresses in the isolator, arrestor (if any) and RSI materials. Two-dimensional stresses are obtained in the tile coating and the stringer-stiffened primary-structure plate. A special feature of the program is that all of the usual detailed finite element grid data are generated internally from a minimum of input data. The program can accommodate tile idealizations with up to 850 nodes (2550 degrees of freedom) and primary structure idealizations with a maximum of 10,000 degrees of freedom. The primary structure vibration capability is achieved through the development of a new rapid eigenvalue program named ALARM (Automatic LArge Reduction of Matrices to tridiagonal form).

  15. Disordered topological wires in a momentum-space lattice

    NASA Astrophysics Data System (ADS)

    Meier, Eric; An, Fangzhao; Gadway, Bryce

    2017-04-01

    One of the most interesting aspects of topological systems is the presence of boundary modes which remain robust in the presence of weak disorder. We explore this feature in the context of one-dimensional (1D) topological wires where staggered tunneling strengths lead to the creation of a mid-gap state in the lattice band structure. Using Bose-condensed 87Rb atoms in a 1D momentum-space lattice, we probe the robust topological character of this model when subjected to both site energy and tunneling disorder. We observe a transition to a topologically trivial phase when tailored disorder is applied, which we detect through both charge-pumping and Hamiltonian-quenching protocols. In addition, we report on efforts to probe the influence of interactions in topological momentum-space lattices.

  16. Structural optimization via a design space hierarchy

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.

  17. Dynamical Systems Theory and Lagrangian Data Assimilation in 4D Geophysical Fluid Dynamics

    DTIC Science & Technology

    The long-term goal of our project (known as OCEAN 3D+1) was to better understand and predict ocean circulation features that are fundamentally three-dimensional in space and that vary in time. In particular, we sought to quantify the dynamical processes that govern the formation, evolution, and predictability of 3D+1 transport pathways in the ocean. Our approach was to develop algorithms to thoroughly analyze a hierarchy of model and

  18. Crossing the dividing surface of transition state theory. IV. Dynamical regularity and dimensionality reduction as key features of reactive trajectories

    NASA Astrophysics Data System (ADS)

    Lorquet, J. C.

    2017-04-01

    The atom-diatom interaction is studied by classical mechanics using Jacobi coordinates (R, r, θ). Reactivity criteria that go beyond the simple requirement of transition state theory (i.e., PR* > 0) are derived in terms of specific initial conditions. Trajectories that exactly fulfill these conditions cross the conventional dividing surface used in transition state theory (i.e., the plane in configuration space passing through a saddle point of the potential energy surface and perpendicular to the reaction coordinate) only once. Furthermore, they are observed to be strikingly similar and to form a tightly packed bundle of perfectly collimated trajectories in the two-dimensional (R, r) configuration space, although their angular motion is highly specific for each one. Particular attention is paid to symmetrical transition states (i.e., either collinear or T-shaped with C2v symmetry) for which decoupling between angular and radial coordinates is observed, as a result of selection rules that reduce to zero Coriolis couplings between modes that belong to different irreducible representations. Liapunov exponents are equal to zero and Hamilton's characteristic function is planar in that part of configuration space that is visited by reactive trajectories. Detailed consideration is given to the concept of average reactive trajectory, which starts right from the saddle point and which is shown to be free of curvature-induced Coriolis coupling. The reaction path Hamiltonian model, together with a symmetry-based separation of the angular degree of freedom, provides an appropriate framework that leads to the formulation of an effective two-dimensional Hamiltonian. The success of the adiabatic approximation in this model is due to the symmetry of the transition state, not to a separation of time scales. Adjacent trajectories, i.e., those that do not exactly fulfill the reactivity conditions have similar characteristics, but the quality of the approximation is lower. 
At higher energies, these characteristics persist, but to a lesser degree. Recrossings of the dividing surface then become much more frequent and the phase space volumes of initial conditions that generate recrossing-free trajectories decrease. Altogether, one ends up with an additional illustration of the concept of reactive cylinder (or conduit) in phase space that reactive trajectories must follow. Reactivity is associated with dynamical regularity and dimensionality reduction, whatever the shape of the potential energy surface, no matter how strong its anharmonicity, and whatever the curvature of its reaction path. Both simplifying features persist during the entire reactive process, up to complete separation of fragments. The ergodicity assumption commonly assumed in statistical theories is inappropriate for reactive trajectories.

  19. Arbitrarily high-order time-stepping schemes based on the operator spectrum theory for high-dimensional nonlinear Klein-Gordon equations

    NASA Astrophysics Data System (ADS)

    Liu, Changying; Wu, Xinyuan

    2017-07-01

    In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under suitable smoothness assumptions. For two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with the discrete fast sine/cosine transform can be applied to simulate two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one- and two-dimensional cases, and the schemes extend equally to the higher-dimensional case. The numerical simulation is implemented and the results clearly demonstrate the advantages and effectiveness of our new schemes in comparison with existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
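An operator-variation-of-constants formula of the kind referred to above can be written, for an abstract second-order ODE u'' + 𝒜u = f(u) with Ω = 𝒜^{1/2}, as the standard Duhamel-type representation (a sketch in the usual notation; the paper's precise function-space setting may differ):

```latex
% Variation-of-constants formula for u'' + \mathcal{A} u = f(u),
% with \Omega = \mathcal{A}^{1/2} defined by the operator spectrum
u(t) = \cos(t\Omega)\,u(0) + \Omega^{-1}\sin(t\Omega)\,u'(0)
     + \int_{0}^{t} \Omega^{-1}\sin\bigl((t-s)\Omega\bigr)\,
       f\bigl(u(s)\bigr)\,ds
```

Collocation-type schemes of arbitrary order follow by approximating the integral term with Lagrange interpolation of f(u(s)) at collocation nodes.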

  20. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.

  1. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of the various populations in the data was defined as the primary performance index. Because the multispectral data are multiclass in nature, a Bayes error estimation procedure that depends on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N is the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system in which the N spectral bands comprise the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.
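The N-dimensional integral mentioned above has, in the standard multiclass Bayes setting with decision regions R_i, priors P(ω_i) and class-conditional densities p(x | ω_i) over the N-dimensional feature space, the textbook form (shown for orientation, not the report's exact expression):

```latex
% Probability of correct classification and Bayes error over
% an N-dimensional feature space partitioned into regions R_i
P_c = \sum_{i} P(\omega_i) \int_{R_i} p(\mathbf{x} \mid \omega_i)\,
      d\mathbf{x},
\qquad
P_e = 1 - P_c
```

Each term is an integral over an N-dimensional region, which is why the estimator's cost grows with the dimensionality of the feature space.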

  2. High-order continuum kinetic method for modeling plasma dynamics in phase space

    DOE PAGES

    Vogman, G. V.; Colella, P.; Shumlak, U.

    2014-12-15

    Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher-dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, vx, vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, vr, vz) phase space are presented.

  3. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements that craft navigation places on a matching algorithm, such as speed, accuracy and adaptability, a fast key-point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved fast key-point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This approach to detecting key points requires little computation and offers high positioning accuracy and strong anti-noise ability. (2) PCA-SIFT is utilized to describe the key points: a 128-dimensional vector is formed by the SIFT method for each extracted key point. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this space to form a low-dimensional eigenvector; the key points were then re-described by these dimension-reduced eigenvectors. After the PCA reduction, the descriptor shrinks from the original 128 dimensions to 20, which reduces the dimensionality of the approximate nearest-neighbour search and thereby increases overall speed. (3) The distance ratio between the nearest and second-nearest neighbour is used as the measurement criterion for initial matching, from which the original matched point pairs are obtained. Based on an analysis of the common methods used to eliminate false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard falsely matched point pairs. (4) An affine transformation model is introduced to correct the coordinate difference between the real-time image and the reference image, resulting in the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and ability to overcome rotation. Results show the effectiveness of the approach.
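The PCA reduction and distance-ratio matching steps described above can be sketched as follows (illustrative only; the helper names and the synthetic stand-in descriptors are ours, not the paper's implementation):

```python
import numpy as np

def pca_project(descriptors, n_dims=20):
    """Reduce 128-D SIFT-like descriptors to n_dims via PCA:
    center the data, take the top principal directions from an SVD,
    and project onto them (the PCA-SIFT idea)."""
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_dims]                    # top principal directions
    return centered @ basis.T, mean, basis

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Nearest / second-nearest distance-ratio criterion for
    accepting initial matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, j2 = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[j2]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
keypoints = rng.normal(size=(50, 128))     # stand-in SIFT descriptors
low, mean, basis = pca_project(keypoints, 20)
# matching a slightly perturbed copy should recover the identity pairing
matches = ratio_test_match(low, low + 0.01 * rng.normal(size=low.shape))
```

Searching nearest neighbours in the 20-D projected space rather than the original 128-D space is what yields the speed-up described in the abstract.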

  4. (3 + 1)-dimensional topological phases and self-dual quantum geometries encoded on Heegaard surfaces

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca

    2017-05-01

    We apply the recently suggested strategy to lift state spaces and operators for (2 + 1)-dimensional topological quantum field theories to state spaces and operators for a (3 + 1)-dimensional TQFT with defects. We start from the (2 + 1)-dimensional Turaev-Viro theory and obtain a state space consistent with the state space expected from the Crane-Yetter model with line defects.

  5. Feature Matching of Historical Images Based on Geometry of Quadrilaterals

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2018-05-01

    This contribution shows an approach to match historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. In comparison to recent images, historical photography presents diverse factors which make automatic image analysis (feature detection, feature matching and relative orientation of images) difficult. Due to e.g. film grain, dust particles or the digitization process, historical images are often covered by noise interfering with the image signal needed for robust feature matching. The presented approach uses quadrilaterals in image space as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how to generally detect quadrilaterals in images. Subsequently, the properties of the quadrilaterals as well as the relationships to neighbouring quadrilaterals are used for the description and matching of feature points. The results show that most of the matches are robust and correct but still small in number.

  6. Unbiased feature selection in learning random forests for high-dimensional data.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi

    2015-01-01

    Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This degrades the accuracy of RFs on high-dimensional data. In addition, RFs exhibit bias in the feature selection process, favoring multivalued features. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while reducing dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results show that RFs with the proposed approach outperform existing random forest methods in both accuracy and AUC.
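The screening-then-weighted-sampling idea can be sketched as follows. This is a loose sketch, not xRF itself: the p-value assessment is replaced here by a simple correlation screen, and the 80/20 sampling split between the strong and weak subsets is an illustrative assumption.

```python
import numpy as np

def screen_and_sample(X, y, n_sample=5, top_frac=0.5, seed=0):
    """Score features by |correlation with the class label|, split them into a
    strong and a weak subset, then draw tree features favoring the strong subset."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    order = np.argsort(scores)[::-1]
    cut = max(1, int(top_frac * X.shape[1]))
    strong, weak = order[:cut], order[cut:]
    # Weighted sampling: most of each tree's candidate features come from the strong subset.
    n_strong = max(1, int(0.8 * n_sample))
    picked = np.concatenate([
        rng.choice(strong, size=min(n_strong, len(strong)), replace=False),
        rng.choice(weak, size=min(n_sample - n_strong, len(weak)), replace=False),
    ])
    return scores, picked
```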

  7. ProteinShader: illustrative rendering of macromolecules

    PubMed Central

    Weber, Joseph R

    2009-01-01

    Background Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space-filling or ball-and-stick models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free platform-independent open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion By programming to the graphics processing unit, ProteinShader is able to produce high quality images and illustrative rendering effects in real-time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
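The spherical linear interpolation used for the gradually rotating tube frames can be illustrated with a standalone slerp routine. This is a generic sketch of the standard formula, not code from ProteinShader:

```python
import math

def slerp(p, q, t):
    """Spherical linear interpolation between unit vectors p and q at parameter t in [0, 1]."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(dot)
    if theta < 1e-9:               # nearly parallel: fall back to a linear blend
        return [(1 - t) * a + t * b for a, b in zip(p, q)]
    s = math.sin(theta)
    w1, w2 = math.sin((1 - t) * theta) / s, math.sin(t * theta) / s
    return [w1 * a + w2 * b for a, b in zip(p, q)]
```

Unlike plain linear interpolation, the result stays on the unit sphere, which is what keeps the ribbon's rotation visually smooth.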

  8. Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data

    NASA Astrophysics Data System (ADS)

    Hung, C. H.; Chang, W. C.; Chen, L. C.

    2016-06-01

    With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update the database. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of cameras. However, precise orientation parameters are not always available for light amateur cameras, due to the cost and weight of precision GPS and IMU units. To automate data updating, a correspondence between object vector data and the image may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part then utilized line features in orientation modeling. The line-based orientation modeling was performed by the integration of line parametric equations into collinearity condition equations. The experiment data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that an accuracy of 2.1 pixels may be reached, which is equivalent to 0.12 m in the object space.

  9. Velocity Field of the McMurdo Shear Zone from Annual Three-Dimensional Ground Penetrating Radar Imaging and Crevasse Matching

    NASA Astrophysics Data System (ADS)

    Ray, L.; Jordan, M.; Arcone, S. A.; Kaluzienski, L. M.; Koons, P. O.; Lever, J.; Walker, B.; Hamilton, G. S.

    2017-12-01

    The McMurdo Shear Zone (MSZ) is a narrow, intensely crevassed strip tens of km long separating the Ross and McMurdo ice shelves (RIS and MIS) and an important pinning feature for the RIS. We derive local velocity fields within the MSZ from two consecutive annual ground penetrating radar (GPR) datasets that reveal complex firn and marine ice crevassing; no englacial features are evident. The datasets were acquired in 2014 and 2015 using robot-towed 400 MHz and 200 MHz GPR over a 5 km x 5.7 km grid. 100 west-to-east transects at 50 m spacing provide three-dimensional maps that reveal the length of many firn crevasses, and their year-to-year structural evolution. Hand labeling of crevasse cross sections near the MSZ western and eastern boundaries reveals matching firn and marine ice crevasses, and more complex and chaotic features between these boundaries. By matching crevasse features from year to year both on the eastern and western boundaries and within the chaotic region, marine ice crevasses along the western and eastern boundaries are shown to align directly with firn crevasses, and the local velocity field is estimated and compared with data from strain rate surveys and remote sensing. While remote sensing provides global velocity fields, crevasse matching indicates greater local complexity, attributed to faulting, folding, and rotation.

  10. Oligo kernels for datamining on biological sequences: a case study on prokaryotic translation initiation sites

    PubMed Central

    Meinicke, Peter; Tech, Maike; Morgenstern, Burkhard; Merkl, Rainer

    2004-01-01

    Background Kernel-based learning algorithms are among the most advanced machine learning methods and have been successfully applied to a variety of sequence classification tasks within the field of bioinformatics. Conventional kernels utilized so far do not provide an easy interpretation of the learnt representations in terms of positional and compositional variability of the underlying biological signals. Results We propose a kernel-based approach to datamining on biological sequences. With our method it is possible to model and analyze positional variability of oligomers of any length in a natural way. On one hand this is achieved by mapping the sequences to an intuitive but high-dimensional feature space, well-suited for interpretation of the learnt models. On the other hand, by means of the kernel trick we can provide a general learning algorithm for that high-dimensional representation because all required statistics can be computed without performing an explicit feature space mapping of the sequences. By introducing a kernel parameter that controls the degree of position-dependency, our feature space representation can be tailored to the characteristics of the biological problem at hand. A regularized learning scheme enables application even to biological problems for which only small sets of example sequences are available. Our approach includes a visualization method for transparent representation of characteristic sequence features. Thereby, the importance of features can be measured in terms of discriminative strength with respect to classification of the underlying sequences. To demonstrate and validate our concept on a biochemically well-defined case, we analyze E. coli translation initiation sites in order to show that we can find biologically relevant signals. For that case, our results clearly show that the Shine-Dalgarno sequence is the most important signal upstream of a start codon.
The variability in position and composition we found for that signal is in accordance with previous biological knowledge. We also find evidence for signals downstream of the start codon, previously introduced as transcriptional enhancers. These signals are mainly characterized by occurrences of adenine in a region of about 4 nucleotides next to the start codon. Conclusions We showed that the oligo kernel can provide a valuable tool for the analysis of relevant signals in biological sequences. In the case of translation initiation sites we could clearly deduce the most discriminative motifs and their positional variation from example sequences. Attractive features of our approach are its flexibility with respect to oligomer length and position conservation. By means of these two parameters oligo kernels can easily be adapted to different biological problems. PMID:15511290
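The position-sensitive oligomer comparison can be sketched as follows: every pair of matching k-mers contributes a Gaussian factor in its positional offset, and the smoothing width sigma plays the role of the position-dependency parameter. The exact normalization of the published oligo kernel may differ; this is an illustrative sketch.

```python
import math
from collections import defaultdict

def oligo_kernel(s, t, k=3, sigma=1.0):
    """Position-sensitive oligomer kernel: each pair of matching k-mers at
    positions p (in s) and q (in t) contributes exp(-(p-q)^2 / (4*sigma^2))."""
    def kmer_positions(seq):
        pos = defaultdict(list)
        for i in range(len(seq) - k + 1):
            pos[seq[i:i + k]].append(i)
        return pos
    ps, pt = kmer_positions(s), kmer_positions(t)
    return sum(
        math.exp(-((p - q) ** 2) / (4 * sigma ** 2))
        for kmer in ps.keys() & pt.keys()
        for p in ps[kmer] for q in pt[kmer]
    )
```

Small sigma rewards k-mers only when they occur at nearly the same position; large sigma approaches a position-independent spectrum kernel.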

  11. Theory of Space Charge Limited Current in Fractional Dimensional Space

    NASA Astrophysics Data System (ADS)

    Zubair, Muhammad; Ang, L. K.

    The concept of fractional dimensional space has been effectively applied in many areas of physics to describe fractional effects on physical systems. We will present some recent developments of space charge limited (SCL) current in free space and solids in the framework of fractional dimensional space, which may account for the effect of imperfectness or roughness of the electrode surface. For SCL current in free space, the governing law is known as the Child-Langmuir (CL) law. Its analog in a trap-free solid (or dielectric) is known as the Mott-Gurney (MG) law. This work extends the one-dimensional CL law and MG law to the case of a D-dimensional fractional space with 0 < D <= 1, where the parameter D defines the degree of roughness of the electrode surface. Such a fractional dimensional space generalization of SCL current theory can be used to characterize charge injection affected by the imperfectness or roughness of the surface in applications related to high-current cathodes (CL law) and organic electronics (MG law). In terms of operating regime, the model includes quantum effects when the spacing between the electrodes is small.
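For reference, the integer-dimensional (D = 1) Child-Langmuir law that the abstract generalizes is J = (4*eps0/9)*sqrt(2e/m)*V^(3/2)/d^2. A minimal evaluation in SI units (classical regime only, planar gap):

```python
import math

# CODATA constants: vacuum permittivity, elementary charge, electron mass (SI units).
EPS0, E_CHARGE, M_E = 8.8541878128e-12, 1.602176634e-19, 9.1093837015e-31

def child_langmuir_j(v_gap, d):
    """Classical 1-D Child-Langmuir current density (A/m^2) for gap voltage
    v_gap (V) and electrode spacing d (m)."""
    return (4 * EPS0 / 9) * math.sqrt(2 * E_CHARGE / M_E) * v_gap ** 1.5 / d ** 2
```

The characteristic scalings (J proportional to V^(3/2) and to 1/d^2) are what the fractional-dimensional generalization modifies through the parameter D.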

  12. Anisotropic fractal media by vector calculus in non-integer dimensional space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru

    2014-08-15

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  13. Structural and magnetic characterization of the one-dimensional S = 5/2 antiferromagnetic chain system SrMn(VO4)(OH)

    DOE PAGES

    Sanjeewa, Liurukara D.; Garlea, Vasile O.; McGuire, Michael A.; ...

    2016-06-06

    The descloizite-type compound SrMn(VO4)(OH) was synthesized as large single crystals (1-2 mm) using a high-temperature, high-pressure hydrothermal technique. X-ray single crystal structure analysis reveals that the material crystallizes in the acentric orthorhombic space group P2(1)2(1)2(1) (no. 19), Z = 4. The structure exhibits a one-dimensional feature, with [MnO4] chains propagating along the a-axis which are interconnected by VO4 tetrahedra. Raman and infrared spectra were obtained to identify the fundamental vanadate and hydroxide vibrational modes. Magnetization data reveal a broad maximum at approximately 80 K, arising from one-dimensional magnetic correlations with an intrachain exchange constant of J/kB = 9.97(3) K between nearest Mn neighbors, and canted antiferromagnetic behavior below TN = 30 K. Single crystal neutron diffraction at 4 K yielded a magnetic structure solution in the lower-symmetry magnetic space group P2(1), with two unique chains displaying antiferromagnetically ordered Mn moments oriented nearly perpendicular to the chain axis. Lastly, the presence of the Dzyaloshinskii-Moriya antisymmetric exchange interaction leads to a slight canting of the spins and gives rise to a weak ferromagnetic component along the chain direction.

  14. Fractional-dimensional Child-Langmuir law for a rough cathode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubair, M., E-mail: muhammad-zubair@sutd.edu.sg; Ang, L. K., E-mail: ricky-ang@sutd.edu.sg

    This work presents a self-consistent model of space charge limited current transport in a gap composed of free space and fractional-dimensional space (F^α), where α is the fractional dimension in the range 0 < α ≤ 1. In this approach, a closed-form fractional-dimensional generalization of the Child-Langmuir (CL) law is derived in the classical regime, which is then used to model the effect of cathode surface roughness in a vacuum diode by replacing the rough cathode with a smooth cathode placed in a layer of effective fractional-dimensional space. A smooth transition of the CL law from fractional-dimensional to integer-dimensional space is also demonstrated. The model has been validated by comparing results with an experiment.

  15. Automated placement of interfaces in conformational kinetics calculations using machine learning

    NASA Astrophysics Data System (ADS)

    Grazioli, Gianmarc; Butts, Carter T.; Andricioaei, Ioan

    2017-10-01

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.
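The idea of subdividing a feature space into Voronoi cells can be illustrated with a plain k-means placement of cell centers: each center defines the Voronoi cell of points nearest to it. This is a generic sketch, not the paper's automated algorithm, which places cells without user input using a specialized machine-learning procedure.

```python
import numpy as np

def kmeans_cells(points, k=2, iters=50):
    """Place k centers by k-means; the centers induce Voronoi cells over feature space."""
    # Greedy farthest-point initialization keeps initial centers spread out.
    centers = [points[0]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each point to its nearest center (i.e., its Voronoi cell).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

In the kinetics setting, the points would be sampled configurations, and interfaces sit on the boundaries between adjacent cells.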

  16. Automated placement of interfaces in conformational kinetics calculations using machine learning.

    PubMed

    Grazioli, Gianmarc; Butts, Carter T; Andricioaei, Ioan

    2017-10-21

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.

  17. Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.

    PubMed

    Haoliang Yuan; Yuan Yan Tang

    2017-04-01

    Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high-dimensional, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, which is formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed methods outperform many SL methods.
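The ridge linear regression baseline that SSSLR builds on can be sketched as follows, using one-hot class indicators as regression targets. This is only the RLR starting point; SSSLR's spatial convex-set and shared-structure terms are not included.

```python
import numpy as np

def rlr_projection(X, y, lam=1.0):
    """Ridge linear regression for subspace learning:
    W = (X^T X + lam*I)^{-1} X^T Y, with Y the one-hot class indicator matrix.
    Columns of W span the learned subspace; project pixels with X @ W."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)   # one-hot targets
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    return W
```

The ridge term lam*I is what keeps the solve well-posed in the small-sample, high-dimensional regime the abstract describes.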

  18. Fractal electrodynamics via non-integer dimensional space approach

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levakhina, Y. M.; Mueller, J.; Buzug, T. M.

    Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets were acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains the artifact-causing features. The artifact distribution in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate a reduction of out-of-focus artifacts, lower STD (meaning fewer artifacts), and narrower ASF compared to nonweighted SART reconstruction. This is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone.
Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical tomosynthesis artifacts produced by high-attenuation features. The proposed algorithm assigns weighting coefficients automatically, and no segmentation or tissue-classification steps are required. The algorithm can be included in various iterative reconstruction algorithms with an additive updating strategy. It can also be extended to the computed tomography case with a complete set of angular data.
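The additive SART update with per-(ray, voxel) weights can be sketched generically on a toy dense system. The weights here are supplied directly and an all-ones weight matrix recovers plain SART; in the paper the coefficients are instead derived automatically from a dissimilarity measure in the four-dimensional backprojected space.

```python
import numpy as np

def weighted_sart(A, b, W, n_iter=50, relax=0.5):
    """SART-style additive update where W[i, j] down-weights the contribution
    of ray i to voxel j in the backprojection (W of ones = plain SART)."""
    x = np.zeros(A.shape[1])
    row_sum = A.sum(axis=1)
    row_sum[row_sum == 0] = 1.0
    for _ in range(n_iter):
        r = (b - A @ x) / row_sum              # row-normalized residual per ray
        num = (W * A).T @ r                    # weighted backprojection
        den = (W * A).sum(axis=0)
        den[den == 0] = 1.0
        x += relax * num / den
    return x
```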

  20. A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis

    PubMed Central

    Rahman, M. M.; Antani, S. K.; Thoma, G. R.

    2011-01-01

    We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
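The global-analysis expansion step via a similarity thesaurus can be sketched as follows: each concept present in the query pulls in its most similar concepts from the thesaurus, weighted by similarity. The concept vectors and the mixing weight beta are illustrative assumptions; the paper's weighting scheme is more elaborate.

```python
import numpy as np

def expand_query(q, sim, top_k=2, beta=0.5):
    """Expand a concept-weight query vector q using a concept similarity
    thesaurus `sim` (square matrix): add each query concept's top_k most
    similar concepts, scaled by beta times their similarity."""
    expanded = q.astype(float).copy()
    for c in np.flatnonzero(q):                 # every concept present in the query
        order = np.argsort(sim[c])[::-1]
        neighbors = [j for j in order if j != c][:top_k]
        for j in neighbors:
            expanded[j] += beta * sim[c, j] * q[c]
    return expanded
```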

  1. Online 3D Ear Recognition by Combining Global and Local Features.

    PubMed

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.
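The equal error rate reported above (2.2%) is the operating point where the false accept rate equals the false reject rate. A minimal way to estimate it from genuine and impostor match scores (a generic sketch, unrelated to the authors' matcher; higher score = more likely genuine):

```python
def equal_error_rate(genuine, impostor):
    """Scan candidate thresholds and return the error rate at the point where
    false accept rate (FAR) and false reject rate (FRR) are closest."""
    best_gap, eer = 1.0, None
    for thr in sorted(genuine + impostor):
        far = sum(s >= thr for s in impostor) / len(impostor)
        frr = sum(s < thr for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```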

  2. Online 3D Ear Recognition by Combining Global and Local Features

    PubMed Central

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%. PMID:27935955

  3. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation, and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed, referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is also shown experimentally that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates, as expected.
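The discrete Karhunen-Loeve expansion referenced above can be sketched as an eigendecomposition of the sample covariance: project onto the eigenvectors with the largest eigenvalues and track the fraction of variance retained. The synthetic six-band data below is an illustrative stand-in, not TM imagery.

```python
import numpy as np

def kl_transform(X, m):
    """Discrete Karhunen-Loeve expansion: project centered data onto the m
    covariance eigenvectors with the largest eigenvalues; also return the
    fraction of total variance (cumulative eigenvalues) retained."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:m]
    explained = vals[order].sum() / vals.sum()
    return Xc @ vecs[:, order], explained
```

With six strongly correlated bands, three components capture nearly all of the cumulative eigenvalue mass, mirroring the abstract's finding that a three-dimensional feature space suffices.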

  4. Is time enough in order to know where you are?

    NASA Astrophysics Data System (ADS)

    Tartaglia, Angelo

    2013-09-01

    This talk discusses various aspects of the structure of space-time, presenting mechanisms leading to the explanation of the "rigidity" of the manifold and to the emergence of time, i.e. of the Lorentzian signature. The proposed ingredient is the analog, in four dimensions, of the deformation energy associated with the common three-dimensional elasticity theory. The inclusion of this additional term in the Lagrangian of empty space-time accounts for gravity as an emergent feature from the microscopic structure of space-time. Once time has legitimately been introduced, a global positioning method based on local measurements of proper times between the arrivals of electromagnetic pulses from independent distant sources is presented. The method considers both pulsars and artificial emitters located on celestial bodies of the solar system as pulsating beacons to be used for navigation and positioning.

  5. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
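The degenerate-point criterion (two coinciding eigenvalues of the symmetric tensor) can be checked directly at a sample point. This is a minimal sketch; locating degenerate points in a continuous field, as the report describes, additionally requires solving for where this condition holds over space.

```python
import numpy as np

def is_degenerate(T, tol=1e-8):
    """A point is degenerate when at least two eigenvalues of the (symmetric)
    3x3 tensor T coincide, up to the tolerance tol."""
    w = np.linalg.eigvalsh(T)                  # real eigenvalues, ascending
    return bool(w[1] - w[0] < tol or w[2] - w[1] < tol)
```

At such points the corresponding eigenvector directions are not uniquely defined, which is why they organize the topology of the field.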

  6. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project is a pattern recognition problem in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
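    The Hough-transform technique mentioned in (2) can be sketched in a few lines: each foreground pixel votes for every (rho, theta) line parametrization passing through it, and peaks in the accumulator indicate candidate lines. This is a generic textbook sketch, not the thesis code:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Accumulate votes in (rho, theta) space for each foreground pixel,
    using the normal form rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_theta))           # 0..179 degrees
    diag = int(np.ceil(np.hypot(*binary.shape)))      # max possible |rho|
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1        # one vote per theta
    return acc, rhos, thetas
```

The accumulator peak then gives the dominant line's (rho, theta); for noisy images one would threshold and cluster peaks instead of taking a single argmax.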

  7. Multiscale statistics of trajectories with applications to fluid particles in turbulence and football players

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Kadoch, Benjamin; Bos, Wouter

    2017-11-01

    The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales and multiscale statistics can be performed. Flow-dependent and geometry-dependent features can be distinguished. The mean angle satisfies scaling behaviors for short time lags based on the smoothness of the trajectories. For intermediate time lags a power-law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long-time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change can be well described by a Fisher distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence, with and without confinement, illustrate the properties of the proposed multiscale statistics. The presented Monte Carlo simulations allow disentangling geometry-dependent and flow-independent features. Finally, we also analyze trajectories of football players, which are, in general, not randomly spaced on a field.
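    The basic quantity described above, the angle between subsequent displacement increments at a given time lag, can be sketched generically (names and conventions are my own choices, not the authors'):

```python
import numpy as np

def directional_change(traj, lag):
    """Angles between successive displacement increments X(t+lag) - X(t).

    traj: (n_samples, n_dims) array of particle positions.
    Returns one angle per pair of non-overlapping subsequent increments.
    """
    inc = traj[lag:] - traj[:-lag]                  # increments at this lag
    a, b = inc[:-lag], inc[lag:]                    # subsequent increment pairs
    cosang = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.arccos(np.clip(cosang, -1.0, 1.0))    # clip guards rounding
```

A straight trajectory gives zero directional change at every lag, while a right-angle turn gives pi/2 at the turning point.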

  8. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space

  9. Learning Efficient Spatial-Temporal Gait Features with Deep Learning for Human Identification.

    PubMed

    Liu, Wu; Zhang, Cheng; Ma, Huadong; Li, Shuangqun

    2018-02-06

    The integration of the latest breakthroughs in bioinformatics technology on one side and artificial intelligence on the other enables remarkable advances in fields such as intelligent security, computational biology, and healthcare. Among them, biometrics-based automatic human identification is one of the most fundamental and significant research topics. Human gait, a biometric feature that can be acquired remotely and is robust and secure, has gained significant attention in biometrics-based human identification. However, existing methods cannot handle well the indistinctive inter-class differences and large intra-class variations of human gait in real-world situations. In this paper, we develop efficient spatial-temporal gait features with deep learning for human identification. First, we propose a gait energy image (GEI) based Siamese neural network to automatically extract robust and discriminative spatial gait features. Furthermore, we exploit deep 3-dimensional convolutional networks to learn convolutional 3D (C3D) representations of human gait as the temporal gait features. Finally, the GEI and C3D gait features are embedded into the null space by the Null Foley-Sammon Transform (NFST). In the new space, the spatial-temporal features are combined with distance metric learning to drive the similarity metric to be small for pairs of gaits from the same person and large for pairs from different persons. Experiments on the world's largest gait database show that our framework impressively outperforms state-of-the-art methods.

  10. Form drag in rivers due to small-scale natural topographic features: 2. Irregular sequences

    USGS Publications Warehouse

    Kean, J.W.; Smith, J.D.

    2006-01-01

    The size, shape, and spacing of small-scale topographic features found on the boundaries of natural streams, rivers, and floodplains can be quite variable. Consequently, a procedure for determining the form drag on irregular sequences of different-sized topographic features is essential for calculating near-boundary flows and sediment transport. A method for carrying out such calculations is developed in this paper. This method builds on the work of Kean and Smith (2006), which describes the flow field for the simpler case of a regular sequence of identical topographic features. Both approaches model topographic features as two-dimensional elements with Gaussian-shaped cross sections defined in terms of three parameters. Field measurements of bank topography are used to show that (1) the magnitude of these shape parameters can vary greatly between adjacent topographic features and (2) the variability of these shape parameters follows a lognormal distribution. Simulations using an irregular set of topographic roughness elements show that the drag on an individual element is primarily controlled by the size and shape of the feature immediately upstream, and that the spatial average of the boundary shear stress over a large set of randomly ordered elements is relatively insensitive to the sequence of the elements. In addition, a method is given to transform the topography of irregular surfaces into an equivalently rough surface of regularly spaced, identical topographic elements. The methods described in this paper can be used to improve predictions of flow resistance in rivers as well as to quantify bank roughness.
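    A hedged sketch of the kind of irregular boundary described above: a sequence of Gaussian-shaped roughness elements whose heights follow a lognormal distribution. The parameter values and function names are illustrative assumptions, not taken from Kean and Smith:

```python
import numpy as np

def gaussian_element(x, height, width, x0):
    """Roughness element with a Gaussian-shaped cross section."""
    return height * np.exp(-((x - x0) ** 2) / (2.0 * width ** 2))

def irregular_profile(n, mean_h=0.1, sigma_h=0.5, spacing=1.0, seed=None):
    """Boundary profile built from n elements whose heights are
    lognormally distributed (illustrative parameter choices)."""
    rng = np.random.default_rng(seed)
    heights = rng.lognormal(np.log(mean_h), sigma_h, n)
    x = np.linspace(0.0, n * spacing, 50 * n)
    z = sum(gaussian_element(x, h, spacing / 6.0, (i + 0.5) * spacing)
            for i, h in enumerate(heights))
    return x, z
```

Sampling many such profiles is one way to study how element-to-element variability affects spatially averaged quantities.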

  11. The study on spatial distribution features of radiological plume discharged from Nuclear Power Plant based on C4ISRE

    NASA Astrophysics Data System (ADS)

    Ma, Yunfeng; Shen, Yue; Feng, Bairun; Yang, Fan; Li, Qiangqiang; Du, Boying; Bian, Yushan; Hu, Qiongqong; Wang, Qi; Hu, Xiaomin; Yin, Hang

    2018-02-01

    When a nuclear emergency accident occurs, estimating the three-dimensional spatial features of the radioactive plume discharged from the source term is very important for the emergency organization, as well as for a better understanding of atmospheric dispersion processes. Taking the Hongyanhe Nuclear Power Plant as an example, the three-dimensional structure of the radioactive plume is studied by applying a coupled WRF-HYSPLIT atmospheric transport model driven by NCEP FNL meteorological data (04/01/2014-04/02/2014), within the framework of C4ISRE (Command, Control, Communications, Computers, Intelligence, Surveillance, Reconnaissance, and Environmental Impact Assessment). The results show that the overall shape of the three-dimensional plume was irregular and governed by the wind. Above 16000 m, the plume, which resembled a horseshoe, formed irregular polygons with a total boundary length of 2258.7 km and covered an area of 39151 km2. Between 4000 m and 16000 m, the plume covered an area of 116269 km2 and had a triangular shape with a perimeter of 2280.4 km. At lower altitudes, the plume was closer to an irregular quadrilateral, with a perimeter of 2941.8 km and a coverage area of 131534 km2. The overall distribution of the wind field showed a rectangular shape. Within 400 m east of the origin and below 2000 m, plume particles became denser toward the coordinate origin (0,0). At horizontal distances of 500-1000 m and heights of 4000-16000 m, particle density was relatively sparse; the plume spread widely from west to east, with particles mainly in suspension and no obvious dry deposition. At horizontal distances of 800-1100 m and heights above 16000 m, horizontal diffusion of plume particles was relatively gentle, with particles drifting upward in local areas.

  12. Homogeneous, anisotropic three-manifolds of topologically massive gravity

    NASA Astrophysics Data System (ADS)

    Nutku, Y.; Baekler, P.

    1989-10-01

    We present a new class of exact solutions of Deser, Jackiw, and Templeton's theory (DJT) of topologically massive gravity which consists of homogeneous, anisotropic manifolds. In these solutions the coframe is given by the left-invariant 1-forms of 3-dimensional Lie algebras up to constant scale factors. These factors are fixed in terms of the DJT coupling constant μ which is the constant of proportionality between the Einstein and Cotton tensors in 3-dimensions. Differences between the scale factors result in anisotropy which is a common feature of topologically massive 3-manifolds. We have found that only Bianchi Types VI, VIII, and IX lead to nontrivial solutions. Among these, a Bianchi Type IX, squashed 3-sphere solution of the Euclideanized DJT theory has finite action. Bianchi Type VIII, IX solutions can variously be embedded in the de Sitter/anti-de Sitter space. That is, some DJT 3-manifolds that we shall present here can be regarded as the basic constituent of anti-de Sitter space which is the ground state solution in higher dimensional generalization of Einstein's general relativity.

  13. Homogeneous, anisotropic three-manifolds of topologically massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutku, Y.; Baekler, P.

    1989-10-01

    We present a new class of exact solutions of Deser, Jackiw, and Templeton's theory (DJT) of topologically massive gravity which consists of homogeneous, anisotropic manifolds. In these solutions the coframe is given by the left-invariant 1-forms of 3-dimensional Lie algebras up to constant scale factors. These factors are fixed in terms of the DJT coupling constant μ which is the constant of proportionality between the Einstein and Cotton tensors in 3 dimensions. Differences between the scale factors result in anisotropy which is a common feature of topologically massive 3-manifolds. We have found that only Bianchi Types VI, VIII, and IX lead to nontrivial solutions. Among these, a Bianchi Type IX, squashed 3-sphere solution of the Euclideanized DJT theory has finite action. Bianchi Type VIII, IX solutions can variously be embedded in the de Sitter/anti-de Sitter space. That is, some DJT 3-manifolds that we shall present here can be regarded as the basic constituent of anti-de Sitter space which is the ground state solution in higher dimensional generalizations of Einstein's general relativity. © 1989 Academic Press, Inc.

  14. Discriminant analysis for fast multiclass data classification through regularized kernel function approximation.

    PubMed

    Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K

    2010-06-01

    In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace. A test pattern is classified according to its proximity to the class centroids. The effectiveness of the proposed method is experimentally verified and compared with the multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also compare the performance of VVRKFA with stratified random sampling and sub-sampling.
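    The final step above, assigning a test pattern to the class of the nearest centroid in the reduced space, can be sketched generically. This covers only the nearest-centroid stage, not the VVRKFA kernel mapping itself, and the names are my own:

```python
import numpy as np

def fit_centroids(Z, y):
    """Class centroids of patterns Z (already mapped to the reduced space)."""
    classes = np.unique(y)
    centroids = np.array([Z[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(Z, classes, centroids):
    """Assign each pattern to the class of its nearest centroid."""
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

In the paper's setting, `Z` would be the low-dimensional label-space representation produced by the VVRKFA mapping; here any reduced representation works.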

  15. Dynamic of consumer groups and response of commodity markets by principal component analysis

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo

    2017-09-01

    This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in the 6-month timeframe display similar values during 2010-2011, but decline after 2012. A sharp drop in the eigenvalue implies a significant change of the market state. Three commodity sectors, energy, metals and agriculture, are projected into a two-dimensional space spanned by the first two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled with one another and scattered during severe crises, such as the European sovereign debt crisis. We observe a notable shift in the positions of the groups in this two-dimensional space during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and the change of states of one group can be used as a warning for the other groups.
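    The projection onto the leading principal components of the cross-correlation matrix of daily returns can be sketched as a standard eigen-decomposition (illustrative names, not the authors' code):

```python
import numpy as np

def pc_projection(returns, k=2):
    """Leading-k PCs of the cross-correlation matrix of asset returns.

    returns: (n_assets, n_days) array of daily returns.
    Returns the k largest eigenvalues and the corresponding loadings,
    whose rows place each asset in the k-dimensional PC space.
    """
    C = np.corrcoef(returns)               # n_assets x n_assets correlations
    w, V = np.linalg.eigh(C)               # eigh: ascending eigenvalues
    order = np.argsort(w)[::-1][:k]        # take the k largest
    return w[order], V[:, order]
```

Tracking the largest eigenvalue over a moving window is the standard way to detect the sharp drops that the abstract associates with market-state changes.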

  16. Non-reciprocal elastic wave propagation in 2D phononic membranes with spatiotemporally varying material properties

    NASA Astrophysics Data System (ADS)

    Attarzadeh, M. A.; Nouh, M.

    2018-05-01

    One-dimensional phononic materials with material fields traveling simultaneously in space and time have been shown to break elastodynamic reciprocity resulting in unique wave propagation features. In the present work, a comprehensive mathematical analysis is presented to characterize and fully predict the non-reciprocal wave dispersion in two-dimensional space. The analytical dispersion relations, in the presence of the spatiotemporal material variations, are validated numerically using finite 2D membranes with a prescribed number of cells. Using omnidirectional excitations at the membrane's center, wave propagations are shown to exhibit directional asymmetry that increases drastically in the direction of the material travel and vanishes in the direction perpendicular to it. The topological nature of the predicted dispersion in different propagation directions is evaluated using the computed Chern numbers. Finally, the degree of the 2D non-reciprocity is quantified using a non-reciprocity index (NRI) which confirms the theoretical dispersion predictions as well as the finite simulations. The presented framework can be extended to plate-type structures as well as 3D spatiotemporally modulated phononic crystals.

  17. Discrete space charge affected field emission: Flat and hemisphere emitters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L., E-mail: kevin.jensen@nrl.navy.mil; Shiffler, Donald A.; Tang, Wilkin

    Models of space-charge affected thermal-field emission from protrusions, able to incorporate the effects of both surface roughness and elongated field emitter structures in beam optics codes, are desirable but difficult. The models proposed here treat the meso-scale diode region separately from the micro-scale regions characteristic of the emission sites. The consequences of discrete emission events are given for both one-dimensional (sheets of charge) and three-dimensional (rings of charge) models: in the former, results converge to the steady-state conditions found by theory (e.g., Rokhlenko et al. [J. Appl. Phys. 107, 014904 (2010)]) but show oscillatory structure as they do. Surface roughness or geometric features are handled using a ring-of-charge model, from which the image charges are found and used to modify the apex field and emitted current. The roughness model is shown to have additional constraints related to the discrete nature of electron charge. The ability of a unit cell model to treat field emitter structures and incorporate surface roughness effects inside a beam optics code is assessed.

  18. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model from a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
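    A chirplet atom, a Gaussian-windowed tone whose frequency drifts linearly, and its projection coefficient can be sketched as below. This parametrization is one common textbook form and is an assumption of this sketch, not taken from the paper:

```python
import numpy as np

def chirplet(t, tc, fc, chirp, sigma):
    """Gaussian chirplet atom: windowed tone with linearly drifting frequency."""
    tau = t - tc
    return np.exp(-tau**2 / (2.0 * sigma**2)) * \
           np.cos(2.0 * np.pi * (fc + chirp * tau) * tau)

def projection_coefficient(signal, t, atom_params):
    """Inner product of the signal with a unit-norm chirplet atom."""
    g = chirplet(t, *atom_params)
    g /= np.linalg.norm(g)
    return float(signal @ g)
```

A matching-pursuit-style decomposition would repeatedly pick the atom with the largest coefficient and subtract it; the fitted (tc, fc, chirp, sigma) parameters are then the low-dimensional representation used for interpolation.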

  19. Numerical generation of two-dimensional grids by the use of Poisson equations with grid control at boundaries

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.; Steger, J. L.

    1980-01-01

    A method for generating boundary-fitted, curvilinear, two dimensional grids by the use of the Poisson equations is presented. Grids of C-type and O-type were made about airfoils and other shapes, with circular, rectangular, cascade-type, and other outer boundary shapes. Both viscous and inviscid spacings were used. In all cases, two important types of grid control can be exercised at both inner and outer boundaries. First is arbitrary control of the distances between the boundaries and the adjacent lines of the same coordinate family, i.e., stand-off distances. Second is arbitrary control of the angles with which lines of the opposite coordinate family intersect the boundaries. Thus, both grid cell size (or aspect ratio) and grid cell skewness are controlled at boundaries. Reasonable cell size and shape are ensured even in cases wherein extreme boundary shapes would tend to cause skewness or poorly controlled grid spacing. An inherent feature of the Poisson equations is that lines in the interior of the grid smoothly connect the boundary points (the grid mapping functions are second order differentiable).
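    The simplest elliptic grid-generation idea underlying the method, here reduced to the Laplace case (zero control functions, so no boundary control) on a uniform computational grid, can be sketched as follows; the full Poisson method adds source terms precisely to regain the boundary control described above:

```python
import numpy as np

def laplace_grid(x, y, iters=500):
    """Smooth interior grid nodes by iterating the discrete Laplace
    equations (Jacobi sweeps), keeping boundary nodes fixed."""
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```

With linear boundary distributions this converges to a uniform grid; curved boundaries produce smoothly varying interior lines, but stand-off distances and intersection angles can only be controlled once the Poisson source terms are added.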

  20. SLLE for predicting membrane protein types.

    PubMed

    Wang, Meng; Yang, Jie; Xu, Zhi-Jie; Chou, Kuo-Chen

    2005-01-07

    Introduction of the concept of pseudo amino acid composition (PROTEINS: Structure, Function, and Genetics 43 (2001) 246; Erratum: ibid. 44 (2001) 60) has made it possible to incorporate a considerable amount of sequence-order effects by representing a protein sample in terms of a set of discrete numbers, and hence can significantly enhance the prediction quality of membrane protein type. As a continuous effort along such a line, the Supervised Locally Linear Embedding (SLLE) technique for nonlinear dimensionality reduction is introduced (Science 22 (2000) 2323). The advantage of using SLLE is that it can reduce the operational space by extracting the essential features from the high-dimensional pseudo amino acid composition space, and that the cluster-tolerant capacity can be increased accordingly. As a consequence by combining these two approaches, high success rates have been observed during the tests of self-consistency, jackknife and independent data set, respectively, by using the simplest nearest neighbour classifier. The current approach represents a new strategy to deal with the problems of protein attribute prediction, and hence may become a useful vehicle in the area of bioinformatics and proteomics.

  1. First Vlasiator results on foreshock ULF wave activity

    NASA Astrophysics Data System (ADS)

    Palmroth, M.; Eastwood, J. P.; Pokhotelov, D.; Hietala, H.; Kempf, Y.; Hoilijoki, S.; von Alfthan, S.; Vainio, R. O.

    2013-12-01

    For decades, a certain type of ultra low frequency waves with a period of about 30 seconds have been observed in the Earth's quasi-parallel foreshock. These waves, with a wavelength of about an Earth radius, are compressive and propagate obliquely with respect to the interplanetary magnetic field (IMF). The latter property has caused trouble to scientists as the growth rate for the instability causing the waves is maximized along the magnetic field. So far, these waves have been characterized by single or multi-spacecraft methods and 2-dimensional hybrid-PIC simulations, which have not fully reproduced the wave properties. Vlasiator is a newly developed, global hybrid-Vlasov simulation, which solves ions in the six-dimensional phase space using the Vlasov equation and electrons using magnetohydrodynamics (MHD). The outcome of the simulation is a global reproduction of ion-scale physics in a holistic manner where the generation of physical features can be followed in time and their consequences can be quantitatively characterized. Vlasiator produces the ion distribution functions and the related kinetic physics in unprecedented detail, on the global magnetospheric scale, presently with a resolution of 0.13 RE in ordinary space and 20 km/s in velocity space. We run two simulations, using both a typical Parker-spiral and a radial IMF as input to the code. The runs are carried out in the ecliptic 2-dimensional plane in ordinary space, and with three dimensions in velocity space. We observe the generation of the 30-second ULF waves, and characterize their evolution and physical properties in time, comparing to observations by the Cluster spacecraft. We find that Vlasiator reproduces these waves in all reported observational aspects, i.e., they are of the observed size in wavelength and period, they are compressive, and they propagate obliquely to the IMF. In particular, we investigate the oblique propagation and discuss the issues related to this long-standing question.

  2. Classification and Verification of Handwritten Signatures with Time Causal Information Theory Quantifiers.

    PubMed

    Rosso, Osvaldo A; Ospina, Raydonal; Frery, Alejandro C

    2016-01-01

    We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.
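    The Bandt and Pompe symbolization with Shannon entropy mentioned above can be sketched generically: slide a window over the series, record the ordinal pattern (ranking) of each window, and compute the normalized entropy of the pattern histogram. A textbook sketch, not the authors' code:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3):
    """Normalized Shannon entropy of Bandt-Pompe ordinal patterns.

    Returns a value in [0, 1]: 0 for a monotone series (one pattern),
    close to 1 when all order! patterns are equally likely.
    """
    x = np.asarray(x)
    patterns = [tuple(np.argsort(x[i:i + order]))
                for i in range(len(x) - order + 1)]
    counts = np.array([patterns.count(p) for p in set(patterns)], float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))
```

In the paper's setting this quantity (together with statistical complexity and Fisher information) is computed separately for the x- and y-coordinate series of each signature.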

  3. Exploration of the relationship between topology and designability of conformations

    NASA Astrophysics Data System (ADS)

    Leelananda, Sumudu P.; Towfic, Fadi; Jernigan, Robert L.; Kloczkowski, Andrzej

    2011-06-01

    Protein structures are evolutionarily more conserved than sequences, and sequences with very low sequence identity frequently share the same fold. This leads to the concept of protein designability: some folds are more designable, and many sequences can assume such a fold. Elucidating the relationship between a protein sequence and the three-dimensional (3D) structure it folds into is an important problem in computational structural biology. Lattice models have been utilized in numerous studies to model protein folds and predict the designability of certain folds. In this study, all possible compact conformations within a set of two-dimensional and 3D lattice spaces are explored. Complementary interaction graphs are then generated for each conformation and are described using a set of graph features. The full HP sequence space for each lattice model is generated, and contact energies are calculated by threading each sequence onto all the possible conformations. The unique minimum-energy conformation is identified for each sequence, and the number of sequences folding to each conformation (its designability) is obtained. Machine learning algorithms are used to predict the designability of each conformation. We find that highly designable structures can be distinguished from other, non-designable conformations based on certain geometric graph features of the interactions. This finding confirms that the topology of a conformation is an important determinant of the extent of its designability and suggests that the interactions themselves are important for determining designability.
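    Threading a sequence onto a conformation and computing its contact energy can be sketched for the standard HP model, where each non-bonded pair of adjacent H residues contributes -1. A generic 2D-lattice sketch with illustrative names:

```python
def hp_energy(seq, coords):
    """HP-model contact energy of a sequence threaded onto a conformation.

    seq: string of 'H'/'P'; coords: list of (x, y) lattice sites, one per
    residue, in chain order. Each non-bonded H-H pair on adjacent lattice
    sites contributes -1.
    """
    e = 0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):          # skip chain neighbours
            adjacent = (abs(coords[i][0] - coords[j][0])
                        + abs(coords[i][1] - coords[j][1]) == 1)
            if seq[i] == 'H' and seq[j] == 'H' and adjacent:
                e -= 1
    return e
```

Designability then follows by threading every HP sequence onto every compact conformation and counting, per conformation, the sequences for which it is the unique energy minimum.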

  4. Exploring the mammalian sensory space: co-operations and trade-offs among senses.

    PubMed

    Nummela, Sirpa; Pihlström, Henry; Puolamäki, Kai; Fortelius, Mikael; Hemilä, Simo; Reuter, Tom

    2013-12-01

    The evolution of a particular sensory organ is often discussed with no consideration of the roles played by other senses. Here, we treat mammalian vision, olfaction and hearing as an interconnected whole, a three-dimensional sensory space, evolving in response to ecological challenges. Until now, there has been no quantitative method for estimating how much a particular animal invests in its different senses. We propose an anatomical measure based on sensory organ sizes. Dimensions of functional importance are defined and measured, and normalized in relation to animal mass. For 119 taxonomically and ecologically diverse species, we can define the position of the species in a three-dimensional sensory space. Thus, we can ask questions related to possible trade-off vs. co-operation among senses. More generally, our method allows morphologists to identify sensory organ combinations that are characteristic of particular ecological niches. After normalization for animal size, we note that arboreal mammals tend to have larger eyes and smaller noses than terrestrial mammals. On the other hand, we observe a strong correlation between eyes and ears, indicating that co-operation between vision and hearing is a general mammalian feature. For some groups of mammals we note a correlation, and possible co-operation between olfaction and whiskers.
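    The normalization of sensory organ dimensions relative to animal mass can be sketched as below; the isometric mass^(1/3) scaling is a simplifying assumption of this sketch, not necessarily the normalization used in the paper:

```python
import numpy as np

def sensory_space_position(eye, nose, ear, mass, exponent=1.0 / 3.0):
    """Place a species in a 3-D sensory space: functional organ dimensions
    divided by a body-size length scale (isometric scaling assumed)."""
    size_scale = mass ** exponent
    return np.array([eye, nose, ear]) / size_scale
```

With many species positioned this way, correlations between axes (e.g. eye vs. ear) can then be examined for the co-operation and trade-off patterns the study describes.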

  5. Minimum Free Energy Path of Ligand-Induced Transition in Adenylate Kinase

    PubMed Central

    Matsunaga, Yasuhiro; Fujisaki, Hiroshi; Terada, Tohru; Furuta, Tadaomi; Moritsugu, Kei; Kidera, Akinori

    2012-01-01

    Large-scale conformational changes in proteins involve barrier-crossing transitions on the complex free energy surfaces of high-dimensional space. Such rare events cannot be efficiently captured by conventional molecular dynamics simulations. Here we show that, by combining the on-the-fly string method and the multi-state Bennett acceptance ratio (MBAR) method, the free energy profile of a conformational transition pathway in Escherichia coli adenylate kinase can be characterized in a high-dimensional space. The minimum free energy paths of the conformational transitions in adenylate kinase were explored by the on-the-fly string method in 20-dimensional space spanned by the 20 largest-amplitude principal modes, and the free energy and various kinds of average physical quantities along the pathways were successfully evaluated by the MBAR method. The influence of ligand binding on the pathways was characterized in terms of rigid-body motions of the lid-shaped ATP-binding domain (LID) and the AMP-binding (AMPbd) domains. It was found that the LID domain was able to partially close without the ligand, while the closure of the AMPbd domain required the ligand binding. The transition state ensemble of the ligand bound form was identified as those structures characterized by highly specific binding of the ligand to the AMPbd domain, and was validated by unrestrained MD simulations. It was also found that complete closure of the LID domain required the dehydration of solvents around the P-loop. These findings suggest that the interplay of the two different types of domain motion is an essential feature in the conformational transition of the enzyme. PMID:22685395

  6. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1981-01-01

    Plotted transects made from south Texas daytime HCMM data show the effect of subvisible cirrus (SCI) clouds in the emissive (IR) band, but the effect is unnoticeable in the reflective (VIS) band. The depression of satellite-indicated temperatures was greatest in the center of SCI streamers and tapered off at the edges. Pixels of uncontaminated land and water features in the HCMM test area shared identical VIS and IR digital count combinations with other pixels representing similar features. A minimum of 0.015 percent repeats of identical VIS-IR combinations is characteristic of land and water features in a scene of 30 percent cloud cover. This increases to 0.021 percent or more when the scene is clear. Pixels having shared VIS-IR combinations less than these amounts are considered to be cloud contaminated in the cluster screening method. About twenty percent of SCI was machine-indistinguishable from land features in two-dimensional spectral space (VIS vs. IR).
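    The cluster screening idea described above, flagging pixels whose exact VIS-IR digital count combination is rarer than a threshold fraction of the scene, can be sketched as follows (threshold and names are illustrative):

```python
import numpy as np

def flag_cloud_contaminated(vis, ir, min_fraction=0.00015):
    """Flag pixels whose exact (VIS, IR) digital count pair occurs in
    less than min_fraction of the scene (candidate cloud contamination)."""
    pairs = np.stack([vis.ravel(), ir.ravel()], axis=1)
    uniq, inv, counts = np.unique(pairs, axis=0,
                                  return_inverse=True, return_counts=True)
    frac = counts[inv] / pairs.shape[0]          # per-pixel pair frequency
    return (frac < min_fraction).reshape(vis.shape)
```

The 0.00015 default mirrors the 0.015 percent figure quoted for a 30 percent cloud-cover scene; a clear scene would use the higher 0.021 percent cutoff.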

  7. Craters on Mars: Global Geometric Properties from Gridded MOLA Topography

    NASA Technical Reports Server (NTRS)

    Garvin, J. B.; Sakimoto, S. E. H.; Frawley, J. J.

    2003-01-01

    Impact craters serve as natural probes of the target properties of planetary crusts, and the tremendous diversity of morphological expressions of such features on Mars attests to their importance for deciphering the history of crustal assembly, modification, and erosion. This paper summarizes the key findings of a five-year survey of the three-dimensional properties of approximately 6000 martian impact craters using finely gridded MOLA topography. Previous efforts have treated representative subpopulations, but this effort treats global properties from the largest survey of impact features ever assimilated from the perspective of their topography. With the Viking missions of the mid-1970s, the most intensive and comprehensive robotic expeditions to any deep space location in the history of humanity were achieved, with scientifically stunning results associated with the morphology of impact craters. The relationships illustrated here suggest that martian impact features are remarkably sensitive to target properties and to local depositional processes.

  8. The 3D morphology of the ejecta surrounding VY Canis Majoris

    NASA Astrophysics Data System (ADS)

    Jones, Terry Jay; Humphreys, Roberta M.; Helton, L. Andrew

    2007-03-01

    We use second-epoch images taken with WFPC2 on the HST and imaging polarimetry taken with the HST/ACS/HRC to explore the three-dimensional structure of the circumstellar dust distribution around the red supergiant VY Canis Majoris. Transverse motions, combined with radial velocities, provide a picture of the kinematics of the ejecta, including the total space motions. The fractional polarization and photometric colors provide an independent method of locating the physical position of the dust along the line of sight. Most of the individual arc-like features and clumps seen in the intensity image are also features in the fractional polarization map, and must be distinct geometric objects. The locations of these features in the ejecta of VY CMa determined using kinematics and polarimetry agree well with each other, and strongly suggest they are the result of relatively massive ejections, probably associated with magnetic fields.

  9. Collaborated measurement of three-dimensional position and orientation errors of assembled miniature devices with two vision systems

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang

    2013-01-01

    In the assembly of miniature devices, the position and orientation of the parts to be assembled should be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from only one direction using a visual method, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In the modular assembly system, two machine vision systems were employed for measurement of the three-dimensionally distributed assembly errors. High-resolution CCD cameras and precision stages with high position repeatability were integrated to realize high-precision measurement in a large workspace. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational or translational stages. A set of templates was designed for calibration of the vision systems and evaluation of the system's measurement accuracy.

  10. Gene/protein name recognition based on support vector machine using dictionary as features.

    PubMed

    Mitsumori, Tomohiro; Fation, Sevrani; Murata, Masaki; Doi, Kouichi; Doi, Hirohumi

    2005-01-01

    Automated information extraction from biomedical literature is important because a vast amount of biomedical literature has been published. Recognition of the biomedical named entities is the first step in information extraction. We developed an automated recognition system based on the SVM algorithm and evaluated it in Task 1.A of BioCreAtIvE, a competition for automated gene/protein name recognition. In the work presented here, our recognition system uses the feature set of the word, the part-of-speech (POS), the orthography, the prefix, the suffix, and the preceding class. We call these features "internal resource features", i.e., features that can be found in the training data. Additionally, we consider the features of matching against dictionaries to be external resource features. We investigated and evaluated the effect of these features as well as the effect of tuning the parameters of the SVM algorithm. We found that the dictionary matching features contributed only slightly to the improvement in the f-score. We attribute this to the possibility that the dictionary matching features might overlap with other features in the current multiple feature setting. During SVM learning, each feature alone had a marginally positive effect on system performance. This supports the observation that the SVM algorithm is robust to the high dimensionality of the feature vector space and means that feature selection is not required.
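    The combination of internal resource features (orthography, prefix, suffix) with an external dictionary-matching feature described above can be sketched per token as follows; the function and field names are hypothetical, not the authors' implementation:

```python
def token_features(tokens, i, dictionary):
    """Build a feature dict for token i, combining internal resource
    features (orthography, prefix/suffix) with an external
    dictionary-matching feature."""
    w = tokens[i]
    return {
        "word": w.lower(),
        "is_capitalized": w[:1].isupper(),        # orthographic feature
        "has_digit": any(c.isdigit() for c in w),  # orthographic feature
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "in_dictionary": w.lower() in dictionary,  # external resource feature
    }

# toy sentence and a tiny gene-name dictionary
feats = token_features(["The", "p53", "protein"], 1, {"p53", "brca1"})
```

    In a real pipeline each such dict would be vectorized (e.g. one-hot encoded) before SVM training.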

  11. Stressed photoconductive detector for far-infrared space applications

    NASA Technical Reports Server (NTRS)

    Wang, J.-Q.; Richards, P. L.; Beeman, J. W.; Haller, E. E.

    1987-01-01

    An optimized leaf-spring apparatus for applying uniaxial stress to a Ge:Ga far-IR photoconductor has been designed and tested. This design has significant advantages for space applications which require high quantum efficiency and stable operation over long periods of time. The important features include adequate spring deflection with relatively small overall size, torque-free stress, easy measurement of applied stress, and a detector configuration with high responsivity. One-dimensional arrays of stressed photoconductors can be constructed using this design. A peak responsivity of 38 A/W is achieved in a detector with a cutoff wavelength of 200 microns, which was operated at a temperature of 2.0 K and a bias voltage equal to one-half of the breakdown voltage.

  12. Integrating Satellite, Radar and Surface Observation with Time and Space Matching

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2015-12-01

    The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching on the satellite, radar and surface observation datasets will automatically synchronize the display from different data sources and spatially subset them to match the display area in the view window. These features allow IDV users to effectively integrate these observations and provide three-dimensional views of the weather system to better understand the underlying dynamics and physics of weather phenomena.

  13. Cross diffusion and exponential space dependent heat source impacts in radiated three-dimensional (3D) flow of Casson fluid by heated surface

    NASA Astrophysics Data System (ADS)

    Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.

    2018-03-01

    This research elaborates the Soret-Dufour characteristics of mixed convective radiated Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented for the conversion of the partial differential frameworks into sets of ordinary differential expressions. A homotopic scheme is employed for the construction of analytic solutions. The behavior of various embedding variables on the velocity, temperature and concentration distributions is plotted graphically and analyzed in detail. Besides, skin friction coefficients and heat and mass transfer rates are also computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. Concentration bears an opposite response for the Soret and Dufour variables.

  14. Computing the scalar field couplings in 6D supergravity

    NASA Astrophysics Data System (ADS)

    Saidi, El Hassan

    2008-11-01

    Using non-chiral supersymmetry in 6D space-time, we compute the explicit expression of the metric of the scalar manifold SO(1,1)×{SO(4,20)}/{SO(4)×SO(20)} of the ten-dimensional type IIA superstring on generic K3. We consider as well the scalar field self-couplings in the general case where the non-chiral 6D supergravity multiplet is coupled to generic n vector supermultiplets with moduli space SO(1,1)×{SO(4,n)}/{SO(4)×SO(n)}. We also work out a dictionary giving a correspondence between hyper-Kähler geometry and the Kähler geometry of the Coulomb branch of 10D type IIA on Calabi-Yau threefolds. Other features are also discussed.

  15. Modeling Three-Dimensional Flow in Confined Aquifers by Superposition of Both Two- and Three-Dimensional Analytic Functions

    NASA Astrophysics Data System (ADS)

    Haitjema, Henk M.

    1985-10-01

    A technique is presented to incorporate three-dimensional flow in a Dupuit-Forchheimer model. The method is based on superposition of approximate analytic solutions to both two- and three-dimensional flow features in a confined aquifer of infinite extent. Three-dimensional solutions are used in the domain of interest, while farfield conditions are represented by two-dimensional solutions. Approximate three-dimensional solutions have been derived for a partially penetrating well and a shallow creek. Each of these solutions satisfies the condition that no flow occurs across the confining layers of the aquifer. Because of this condition, the flow at some distance from a three-dimensional feature becomes nearly horizontal. Consequently, remote from a three-dimensional feature, its three-dimensional solution is replaced by a corresponding two-dimensional one. The latter solution is trivial compared to its three-dimensional counterpart, and its use greatly enhances the computational efficiency of the model. As an example, the flow is modeled between a partially penetrating well and a shallow creek that occur in a regional aquifer system.

  16. On the geometry of the space-time and motion of the spinning bodies

    NASA Astrophysics Data System (ADS)

    Trenčevski, Kostadin

    2013-03-01

    In this paper an alternative theory of space-time is given. First, some preliminaries about 3-dimensional time and the reasons for its introduction are presented. Alongside the 3-dimensional space (S), the 3-dimensional space of spatial rotations (SR) is considered independently of the 3-dimensional space. Then a model of the universe is given, based on the Lie groups of real and complex orthogonal 3 × 3 matrices in this 3+3+3-dimensional space. Special attention is dedicated to the introduction and study of the space S × SR, which appears to be isomorphic to SO(3,ℝ) × SO(3,ℝ) or S³ × S³. The influence of gravitational acceleration on spinning bodies is considered. Some important applications of these results to spinning bodies are given, which naturally lead to a violation of Newton's third law in its classical formulation. The precession of the spinning axis is also considered.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Tzu-Ling; Yang, Chen-I., E-mail: ciyang@thu.edu.tw

    The preparations and properties of three new homochiral three-dimensional (3D) coordination polymers, [M(D-cam)(pyz)(H{sub 2}O){sub 2}]{sub n} (M=Co (1) and Ni (2); D-H{sub 2}cam=(+) D-camphoric acid; pyz=pyrazine) and [Mn{sub 2}(D-cam){sub 2}(H{sub 2}O){sub 2}] (3), under solvothermal conditions are described. Single-crystal X-ray diffraction analyses revealed that all of the compounds have homochiral 3D structures. 1 and 2 are isostructural and crystallize in the trigonal space group P3{sub 2}21, while 3 crystallizes in the monoclinic space group P2{sub 1}. The structure of 1 and 2 consists of metal-D-cam helical chains which are pillared with pyrazine ligands into a 3D framework structure, and 3 features a 3D homochiral framework involving one-dimensional manganese-carboxylate chains that are aligned parallel to the b axis. Magnetic susceptibility data of all compounds were collected. The findings indicate that μ{sub 2}-pyrazine dominates weak antiferromagnetic coupling within 1 and 2, while 3 exhibits antiferromagnetic behavior through the carboxylate groups of the D-cam ligand. Highlights: • Three homochiral 3D coordination polymers were synthesized. • 1 and 2 are 3D structures with metal-D-cam helical chains pillared by pyrazine. • 3 shows a 3D homochiral framework involving 1D manganese-carboxylate chains. • Magnetic data analysis indicates that 1–3 exhibit weak antiferromagnetic coupling.

  18. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
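    The proposed approach, z-scoring every subject's features using statistics estimated from the control group alone, can be sketched in a few lines (a minimal illustration, not the authors' code):

```python
def control_normalize(X, is_control):
    """Z-score each feature of every subject using the mean and SD
    estimated from control-group subjects only, instead of the pooled
    sample, so group separation does not inflate the scaling."""
    n_feat = len(X[0])
    controls = [x for x, c in zip(X, is_control) if c]
    means = [sum(x[j] for x in controls) / len(controls) for j in range(n_feat)]
    sds = []
    for j in range(n_feat):
        var = sum((x[j] - means[j]) ** 2 for x in controls) / (len(controls) - 1)
        sds.append(var ** 0.5)
    return [[(x[j] - means[j]) / sds[j] for j in range(n_feat)] for x in X]

# three controls and one patient with a deviant feature value
X = [[1.0], [2.0], [3.0], [10.0]]
Z = control_normalize(X, [True, True, True, False])
```

    Note how the patient's value stays far from zero after scaling; pooled standardization would have shrunk exactly this separation.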

  19. A nonlinear controlling function of geological features on magmatic–hydrothermal mineralization

    PubMed Central

    Zuo, Renguang

    2016-01-01

    This paper reports a nonlinear controlling function of geological features on magmatic–hydrothermal mineralization, and proposes an alternative method to measure the spatial relationships between geological features and mineral deposits using multifractal singularity theory. It was observed that the greater the proximity to geological controlling features, the greater the number of mineral deposits developed, indicating a nonlinear spatial relationship between these features and mineral deposits. This phenomenon can be quantified using the relationship between the number of mineral deposits N(ε) of a D-dimensional set and the scale ε. The density of mineral deposits can be expressed as ρ(ε) = Cε^(−(De−a)), where ε is the buffer width of geological controlling features, De is the Euclidean dimension of space (=2 in this case), a is the singularity index, and C is a constant. The expression can be rewritten as ρ = Cε^(a−2). When a < 2, there is a significant spatial correlation between specific geological features and mineral deposits; lower a values indicate a more significant spatial correlation. This nonlinear relationship and the advantages of this method were illustrated using a case study from Fujian Province in China and a case study from the Baguio district in the Philippines. PMID:27255794
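    Under the power-law model above, the singularity index a can be estimated as the slope of a log-log fit of deposit counts against buffer width. A minimal sketch with synthetic counts (not the paper's code or data):

```python
import math

def singularity_index(widths, counts):
    """Least-squares slope of log N(eps) versus log eps; under the
    power-law model N(eps) = C * eps**a this slope estimates a."""
    xs = [math.log(w) for w in widths]
    ys = [math.log(n) for n in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic counts generated from N = 5 * eps**1.5
widths = [1.0, 2.0, 4.0, 8.0]
counts = [5 * w ** 1.5 for w in widths]
a = singularity_index(widths, counts)
```

    Here a < 2, which under the paper's criterion would indicate a significant spatial correlation between the feature and the deposits.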

  20. A nonlinear controlling function of geological features on magmatic-hydrothermal mineralization.

    PubMed

    Zuo, Renguang

    2016-06-03

    This paper reports a nonlinear controlling function of geological features on magmatic-hydrothermal mineralization, and proposes an alternative method to measure the spatial relationships between geological features and mineral deposits using multifractal singularity theory. It was observed that the greater the proximity to geological controlling features, the greater the number of mineral deposits developed, indicating a nonlinear spatial relationship between these features and mineral deposits. This phenomenon can be quantified using the relationship between the number of mineral deposits N(ε) of a D-dimensional set and the scale ε. The density of mineral deposits can be expressed as ρ(ε) = Cε^(-(De-a)), where ε is the buffer width of geological controlling features, De is the Euclidean dimension of space (=2 in this case), a is the singularity index, and C is a constant. The expression can be rewritten as ρ = Cε^(a-2). When a < 2, there is a significant spatial correlation between specific geological features and mineral deposits; lower a values indicate a more significant spatial correlation. This nonlinear relationship and the advantages of this method were illustrated using a case study from Fujian Province in China and a case study from the Baguio district in the Philippines.

  1. GENIE: a hybrid genetic algorithm for feature classification in multispectral images

    NASA Astrophysics Data System (ADS)

    Perkins, Simon J.; Theiler, James P.; Brumby, Steven P.; Harvey, Neal R.; Porter, Reid B.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-10-01

    We consider the problem of pixel-by-pixel classification of a multispectral image using supervised learning. Conventional supervised classification techniques, such as maximum likelihood classification, and less conventional ones, such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed-dimensional space in which these classifiers work well. It is often the case, however, that spectral information alone is not sufficient to correctly classify a pixel. Maybe spatial neighborhood information is required as well. Or maybe the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either of these cases we have the problem of selecting suitable spatial, spectral or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large. How can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm, which searches a space of image processing operations for a set that can produce suitable feature planes, with a more conventional classifier, which uses those feature planes to output a final classification. In this paper we show that the use of a hybrid GA provides significant advantages over using either a GA alone or more conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.

  2. Assessing efficiency of spatial sampling using combined coverage analysis in geographical and feature spaces

    NASA Astrophysics Data System (ADS)

    Hengl, Tomislav

    2015-04-01

    Efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which were produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how can these 'representation' problems be quantified, and how can this knowledge be incorporated into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature-space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (the costs of a field survey are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs.
    The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature-space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas, e.g. areas for which no previous spatial prediction model exists. The presentation includes data processing demos with the standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
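    The core averaging idea, combining a kernel density surface (geographical spreading) with a MaxEnt suitability surface (feature-space spreading) into one inclusion probability map, can be sketched as follows; the actual implementation is the R function spsample.prob in the GSIF package, so this Python version is only an illustration of the idea:

```python
def inclusion_probability(kernel_density, maxent_prob):
    """Average a kernel-density surface with a MaxEnt suitability
    surface, each rescaled to [0, 1], to approximate per-pixel
    effective inclusion probabilities ('iprob')."""
    def rescale(vals):
        lo, hi = min(vals), max(vals)
        return [(v - lo) / (hi - lo) for v in vals]
    kd, me = rescale(kernel_density), rescale(maxent_prob)
    return [(k + m) / 2 for k, m in zip(kd, me)]

# hypothetical per-pixel values for a three-pixel study area
iprob = inclusion_probability([0.1, 0.4, 0.9], [0.2, 0.2, 0.8])
```

    Pixels with low iprob are both geographically distant from existing points and poorly represented in covariate space, flagging where the sampling plan misses.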

  3. Genome U-Plot: a whole genome visualization.

    PubMed

    Gaitatzes, Athanasios; Johnson, Sarah H; Smadbeck, James B; Vasmatzis, George

    2018-05-15

    The ability to produce and analyze whole genome sequencing (WGS) data from samples with structural variations (SV) generated the need to visualize such abnormalities in simplified plots. Conventional two-dimensional representations of WGS data frequently use either circular or linear layouts. Both of these representations have advantages, but their major disadvantage is that they do not use the two-dimensional space very efficiently. We propose a layout, termed the Genome U-Plot, which spreads the chromosomes on a two-dimensional surface and essentially quadruples the spatial resolution. We present the Genome U-Plot for producing clear and intuitive graphs that allow researchers to generate novel insights and hypotheses by visualizing SVs such as deletions, amplifications, and chromoanagenesis events. The main features of the Genome U-Plot are its layered layout, its high spatial resolution and its improved aesthetic qualities. We compare conventional visualization schemas with the Genome U-Plot using visualization metrics such as the number of line crossings and crossing angle resolution measures. Based on our metrics, we improve the readability of the resulting graph by at least 2-fold, making important features apparent and making it easy to identify important genomic changes. A whole genome visualization tool with high spatial resolution and improved aesthetic qualities. An implementation and documentation of the Genome U-Plot is publicly available at https://github.com/gaitat/GenomeUPlot. vasmatzis.george@mayo.edu. Supplementary data are available at Bioinformatics online.

  4. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city modeling tools at present, and it has been widely used for three-dimensional reconstruction of outdoor objects. However, for indoor objects there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by an advanced indoor mobile LiDAR measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain a color feature, which is extracted by fusion with CCD images. Thus, the data have both space geometric features and spectral information, which can be used for constructing objects' surfaces and restoring the color and texture of the geometric model. Based on the Autodesk CAD platform and with the help of the PointSence plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and then different types of indoor point cloud data were processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world interior was completed. Experiment results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of all indoor elements, and that the methods proposed in this paper can efficiently realize this reconstruction. Moreover, the modeling precision could be controlled within 5 cm, which was proved to be a satisfactory result.

  5. Response monitoring using quantitative ultrasound methods and supervised dictionary learning in locally advanced breast cancer

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.

    2016-03-01

    A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules including feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for the classification of patients into responders/non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on the Hilbert-Schmidt independence criterion (HSIC) was used for the design of feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken from "pre-" and "mid-treatment" with "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure used a kernel function to nonlinearly transform input vectors into a higher dimensional feature space and computed the population means in the new space, where enhanced group separability was ideally obtained. The results of the classification using the developed CAT system indicated an improvement in performance compared to a CAT system with basic features using a histogram of intensity.
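    The empirical HSIC statistic underlying this framework is straightforward to compute: with kernel matrices K and L for the two samples and the centering matrix H, HSIC is tr(KHLH)/(n-1)². A minimal NumPy sketch with a Gaussian kernel (an illustration of the criterion, not the authors' CAT pipeline):

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Empirical HSIC between paired samples X and Y; larger values
    indicate stronger statistical dependence in kernel feature space."""
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
dep = hsic(x, x + 0.1 * rng.normal(size=(50, 1)))  # strongly dependent pair
ind = hsic(x, rng.normal(size=(50, 1)))            # independent pair
```

    The dependent pair yields a much larger HSIC value than the independent one, which is the property exploited both for the SDL objective and for the dissimilarity metric.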

  6. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods of time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and is challenging. As more channels become available, a spatial filter is required to eliminate the noise and preserve the useful information. Moreover, multiple-channel EEG also adds high dimensionality to the frequency feature space. Feature selection is then required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes them with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
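    The encoding idea, one real-valued genome holding both the spatial filter weights and per-feature selection genes, can be sketched as follows; the split and the 0.5 threshold are illustrative assumptions, not the paper's exact scheme:

```python
def decode_solution(genome, n_channels, threshold=0.5):
    """Split a real-valued EA genome into spatial filter weights and a
    binary feature-selection mask (genes above threshold keep a feature)."""
    w = genome[:n_channels]
    mask = [g > threshold for g in genome[n_channels:]]
    return w, mask

def spatially_filter(channels, w):
    """Mix multi-channel samples into one virtual channel with weights w."""
    n_samples = len(channels[0])
    return [sum(wi * ch[t] for wi, ch in zip(w, channels)) for t in range(n_samples)]

# 2 filter weights followed by 3 feature-selection genes
genome = [0.5, -0.5, 0.9, 0.1, 0.7]
w, mask = decode_solution(genome, n_channels=2)
mixed = spatially_filter([[1.0, 2.0], [3.0, 4.0]], w)
```

    An EA such as CMA-ES would then perturb this genome and score each candidate by the classification error obtained with the filtered, mask-selected features.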

  7. Multimaterial magnetically assisted 3D printing of composite materials.

    PubMed

    Kokkinis, Dimitri; Schaffner, Manuel; Studart, André R

    2015-10-23

    3D printing has become commonplace for the manufacturing of objects with unusual geometries. Recent developments that enabled printing of multiple materials indicate that the technology can potentially offer a much wider design space beyond unusual shaping. Here we show that a new dimension in this design space can be exploited through the control of the orientation of anisotropic particles used as building blocks during a direct ink-writing process. Particle orientation control is demonstrated by applying low magnetic fields on deposited inks pre-loaded with magnetized stiff platelets. Multimaterial dispensers and a two-component mixing unit provide additional control over the local composition of the printed material. The five-dimensional design space covered by the proposed multimaterial magnetically assisted 3D printing platform (MM-3D printing) opens the way towards the manufacturing of functional heterogeneous materials with exquisite microstructural features thus far only accessible by biological materials grown in nature.

  8. The Verriest Lecture: Color lessons from space, time, and motion

    PubMed Central

    Shevell, Steven K.

    2012-01-01

    The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398

  9. Direct imaging of atomic-scale ripples in few-layer graphene.

    PubMed

    Wang, Wei L; Bhandari, Sagar; Yi, Wei; Bell, David C; Westervelt, Robert; Kaxiras, Efthimios

    2012-05-09

    Graphene has been touted as the prototypical two-dimensional solid of extraordinary stability and strength. However, its very existence relies on out-of-plane ripples as predicted by theory and confirmed by experiments. Evidence of the intrinsic ripples has been reported in the form of broadened diffraction spots in reciprocal space, in which all spatial information is lost. Here we show direct real-space images of the ripples in a few-layer graphene (FLG) membrane resolved at the atomic scale using monochromated aberration-corrected transmission electron microscopy (TEM). The thickness of FLG amplifies the weak local effects of the ripples, resulting in spatially varying TEM contrast that is unique up to inversion symmetry. We compare the characteristic TEM contrast with simulated images based on accurate first-principles calculations of the scattering potential. Our results characterize the ripples in real space and suggest that such features are likely common in ultrathin materials, even in the nanometer-thickness range.

  10. A complex systems analysis of stick-slip dynamics of a laboratory fault

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, David M.; Tordesillas, Antoinette, E-mail: atordesi@unimelb.edu.au; Small, Michael

    2014-03-15

We study the stick-slip behavior of a granular bed of photoelastic disks sheared by a rough slider pulled along the surface. Time series of a proxy for granular friction are examined using complex systems methods to characterize the observed stick-slip dynamics of this laboratory fault. Nonlinear surrogate time series methods show that the stick-slip behavior appears more complex than a periodic dynamics description. Phase space embedding methods show that the dynamics can be locally captured within a four to six dimensional subspace. These slider time series also provide an experimental test for recent complex network methods. Phase space networks, constructed by connecting nearby phase space points, proved useful in capturing the key features of the dynamics. In particular, network communities could be associated with slip events, and the ranking of small network subgraphs exhibited a heretofore unreported ordering.

  11. Resolving runaway electron distributions in space, time, and energy

    NASA Astrophysics Data System (ADS)

    Paz-Soldan, C.; Cooper, C. M.; Aleynikov, P.; Eidietis, N. W.; Lvovskiy, A.; Pace, D. C.; Brennan, D. P.; Hollmann, E. M.; Liu, C.; Moyer, R. A.; Shiraki, D.

    2018-05-01

    Areas of agreement and disagreement with present-day models of runaway electron (RE) evolution are revealed by measuring MeV-level bremsstrahlung radiation from runaway electrons (REs) with a pinhole camera. Spatially resolved measurements localize the RE beam, reveal energy-dependent RE transport, and can be used to perform full two-dimensional (energy and pitch-angle) inversions of the RE phase-space distribution. Energy-resolved measurements find qualitative agreement with modeling on the role of collisional and synchrotron damping in modifying the RE distribution shape. Measurements are consistent with predictions of phase-space attractors that accumulate REs, with non-monotonic features observed in the distribution. Temporally resolved measurements find qualitative agreement with modeling on the impact of collisional and synchrotron damping in varying the RE growth and decay rate. Anomalous RE loss is observed and found to be largest at low energy. Possible roles for kinetic instability or spatial transport to resolve these anomalies are discussed.

  12. Driven phase space vortices in plasmas with nonextensive velocity distribution

    NASA Astrophysics Data System (ADS)

    Trivedi, Pallavi; Ganesh, Rajaraman

    2017-03-01

The evolution of chirp-driven electrostatic waves in unmagnetized plasmas is numerically investigated using a one-dimensional (1D) Vlasov-Poisson solver with periodic boundary conditions. The initial velocity distribution of the 1D plasma is assumed to be governed by the nonextensive q distribution [C. Tsallis, J. Stat. Phys. 52, 479 (1988)]. For an infinitesimal amplitude of an external drive, we investigate the chirp-driven dynamics that leads to the formation of giant phase space vortices (PSV) for both Maxwellian (q = 1) and non-Maxwellian (q ≠ 1) plasmas. For non-Maxwellian plasmas, the formation of giant PSV with multiple extrema and phase velocities is shown to depend on the strength of "q". Novel features such as "shark"-like and transient "honeycomb"-like structures in phase space are discussed. Wherever relevant, we compare our results with previous work.
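The nonextensive q distribution referenced above can be sketched numerically. The following is a minimal illustration (function names and the thermal-width parameter are assumptions, not taken from the paper) showing how the Tsallis q-exponential recovers a Maxwellian at q = 1 and develops heavier tails for q > 1:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]_+^(1/(1-q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def q_maxwellian(v, q, vth=1.0):
    """Unnormalized 1D q-distribution of velocities; q = 1 gives the Maxwellian."""
    return q_exp(-(v / vth) ** 2 / 2.0, q)

v = np.linspace(-5, 5, 1001)
f_maxwell = q_maxwellian(v, 1.0)   # Gaussian core, thin tails
f_heavy   = q_maxwellian(v, 1.5)   # q > 1: power-law-like heavy tails
```

Loading such a profile as the initial velocity distribution is the natural starting point for a Vlasov-Poisson run of the kind described.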

  13. Cultural ethology as a new approach of interplanetary crew's behavior

    NASA Astrophysics Data System (ADS)

    Tafforin, Carole; Giner Abati, Francisco

    2017-10-01

    From an evolutionary perspective, during short-term and medium-term orbital flights, human beings developed new spatial and motor behaviors to compensate for the lack of terrestrial gravity. Past space ethological studies have shown adaptive strategies to the tri-dimensional environment, with the goal of optimizing relationships between the astronaut and unusual sensorial-motor conditions. During a long-term interplanetary journey, crewmembers will have to develop new individual and social behaviors to adapt, far from earth, to isolation and confinement and as a result to extreme conditions of living and working together. Recent space psychological studies pointed out that heterogeneity is a feature of interplanetary crews, based on personality, gender mixing, internationality and diversity of backgrounds. Intercultural issues could arise between space voyagers. As a new approach we propose to emphasize the behavioral strategies of human groups' adaptation to this new multicultural dimension of the environment.

  14. Multimaterial magnetically assisted 3D printing of composite materials

    NASA Astrophysics Data System (ADS)

    Kokkinis, Dimitri; Schaffner, Manuel; Studart, André R.

    2015-10-01

    3D printing has become commonplace for the manufacturing of objects with unusual geometries. Recent developments that enabled printing of multiple materials indicate that the technology can potentially offer a much wider design space beyond unusual shaping. Here we show that a new dimension in this design space can be exploited through the control of the orientation of anisotropic particles used as building blocks during a direct ink-writing process. Particle orientation control is demonstrated by applying low magnetic fields on deposited inks pre-loaded with magnetized stiff platelets. Multimaterial dispensers and a two-component mixing unit provide additional control over the local composition of the printed material. The five-dimensional design space covered by the proposed multimaterial magnetically assisted 3D printing platform (MM-3D printing) opens the way towards the manufacturing of functional heterogeneous materials with exquisite microstructural features thus far only accessible by biological materials grown in nature.

  15. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: discarding them via feature selection is better than compressing them together with the useful dimensions, as feature compression methods do. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
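The select-then-binarize pipeline can be sketched in a few lines. This is a hedged stand-in, not the paper's algorithm: here per-dimension variance plays the role of the importance-sorting criterion, and 1-bit quantization keeps only the sign of each retained dimension:

```python
import numpy as np

def select_and_binarize(X, k):
    """Sketch of dimension selection + 1-bit quantization for FV/VLAD-like
    vectors X (n_samples x d). Importance = per-dimension variance, an
    illustrative assumption standing in for the paper's importance sorting."""
    importance = X.var(axis=0)
    keep = np.argsort(importance)[::-1][:k]       # top-k most informative dimensions
    X_sel = X[:, keep]
    X_bin = (X_sel > 0).astype(np.uint8)          # 1-bit quantization: signs only
    return X_bin, keep

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 512)) * rng.uniform(0.01, 1.0, size=512)  # uneven dim scales
X_bin, keep = select_and_binarize(X, 64)          # 512-dim vectors -> 64 bits each
```

The storage saving is the point: 64 bits per image instead of 512 floats, with no codebook to train as product quantization would require.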

  16. Vector calculus in non-integer dimensional space and its applications to fractal media

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-02-01

We suggest a generalization of vector calculus to the case of non-integer dimensional space. The first- and second-order operations, such as the gradient, divergence, and the scalar and vector Laplace operators, are defined for non-integer dimensional space. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As example applications of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for non-integer dimensional space models.
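For angle-independent fields the scalar Laplacian in dimension D takes the radial form Δf = f''(r) + (D-1)/r · f'(r), which remains meaningful for non-integer D. A short numerical sketch (the grid and finite-difference choices are illustrative assumptions) checks that f(r) = r^(2-D) is harmonic in dimension D = 2.5, the kind of profile that arises in the steady heat problems mentioned above:

```python
import numpy as np

def radial_laplacian(f, r, D):
    """Scalar Laplacian of an angle-independent field in (possibly non-integer)
    dimension D: f''(r) + (D - 1)/r * f'(r), via finite differences on grid r."""
    df = np.gradient(f, r)
    d2f = np.gradient(df, r)
    return d2f + (D - 1) / r * df

D = 2.5                       # a fractal (non-integer) dimension
r = np.linspace(1.0, 2.0, 2001)
T = r ** (2.0 - D)            # candidate steady-state (harmonic) profile
residual = radial_laplacian(T, r, D)
# interior residual ~ 0: a(a + D - 2) vanishes for the exponent a = 2 - D
```

The analytic check is immediate: for f = r^a the radial Laplacian is a(a + D - 2) r^(a-2), which is zero exactly when a = 2 - D.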

  17. Advancing three-dimensional MEMS by complimentary laser micro manufacturing

    NASA Astrophysics Data System (ADS)

    Palmer, Jeremy A.; Williams, John D.; Lemp, Tom; Lehecka, Tom M.; Medina, Francisco; Wicker, Ryan B.

    2006-01-01

    This paper describes improvements that enable engineers to create three-dimensional MEMS in a variety of materials. It also provides a means for selectively adding three-dimensional, high aspect ratio features to pre-existing PMMA micro molds for subsequent LIGA processing. This complimentary method involves in situ construction of three-dimensional micro molds in a stand-alone configuration or directly adjacent to features formed by x-ray lithography. Three-dimensional micro molds are created by micro stereolithography (MSL), an additive rapid prototyping technology. Alternatively, three-dimensional features may be added by direct femtosecond laser micro machining. Parameters for optimal femtosecond laser micro machining of PMMA at 800 nanometers are presented. The technical discussion also includes strategies for enhancements in the context of material selection and post-process surface finish. This approach may lead to practical, cost-effective 3-D MEMS with the surface finish and throughput advantages of x-ray lithography. Accurate three-dimensional metal microstructures are demonstrated. Challenges remain in process planning for micro stereolithography and development of buried features following femtosecond laser micro machining.

  18. Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search

    ERIC Educational Resources Information Center

    Chan, Louis K. H.; Hayward, William G.

    2009-01-01

    In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed…

  19. Nuclear Potential Clustering As a New Tool to Detect Patterns in High Dimensional Datasets

    NASA Astrophysics Data System (ADS)

    Tonkova, V.; Paulus, D.; Neeb, H.

    2013-02-01

We present a new approach for the clustering of high dimensional data without prior assumptions about the structure of the underlying distribution. The proposed algorithm is based on a concept adapted from nuclear physics. To partition the data, we model the dynamic behaviour of nucleons interacting in an N-dimensional space. An adaptive nuclear potential, comprising a short-range attractive term (strong interaction) and a long-range repulsive term (Coulomb force), is assigned to each data point. By modelling the dynamics, nucleons that are densely distributed in space fuse to build nuclei (clusters), whereas single point clusters repel each other. The formation of clusters is completed when the system reaches the state of minimal potential energy. The data are then grouped according to the particles' final effective potential energy level. The performance of the algorithm is tested with several synthetic datasets, showing that the proposed method can robustly identify clusters even when complex configurations are present. Furthermore, quantitative MRI data from 43 multiple sclerosis patients were analyzed, showing a reasonable splitting into subgroups according to the individual patients' disease grade. The good performance of the algorithm on such highly correlated non-spherical datasets, which are typical for MRI derived image features, shows that Nuclear Potential Clustering is a valuable tool for automated data analysis, not only in the MRI domain.
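The attract-then-fuse idea can be illustrated with a toy implementation. This is only a sketch under stated assumptions: the potential constants, step size, and fusing threshold below are illustrative choices, not the published algorithm's parameters:

```python
import numpy as np

def nuclear_potential_clustering(X, steps=500, lr=0.005, fuse_dist=0.25):
    """Toy sketch of potential-based clustering: every pair of points feels a
    short-range attraction (exp(-d/a), the 'strong force' analogue) and a
    weaker long-range 1/d^2 repulsion (the 'Coulomb' analogue); points move
    downhill and groups that fuse become clusters. All constants here are
    illustrative assumptions."""
    P = X.astype(float).copy()
    n = len(P)
    for _ in range(steps):
        diff = P[:, None, :] - P[None, :, :]               # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n)   # dummy 1 on the diagonal
        d = np.maximum(dist, 0.1)                          # floor keeps forces bounded
        mag = np.exp(-d / 0.5) - 0.01 / d**2               # >0 attracts, <0 repels
        np.fill_diagonal(mag, 0.0)                         # no self-interaction
        P -= lr * (mag[:, :, None] * diff / d[:, :, None]).sum(axis=1)
    # single-linkage pass: points closer than fuse_dist share a cluster label
    labels = -np.ones(n, dtype=int)
    c = 0
    for i in range(n):
        if labels[i] < 0:
            stack, labels[i] = [i], c
            while stack:
                j = stack.pop()
                for k in np.where(np.linalg.norm(P - P[j], axis=1) < fuse_dist)[0]:
                    if labels[k] < 0:
                        labels[k] = c
                        stack.append(k)
            c += 1
    return labels
```

On two well-separated blobs this collapses each blob into one "nucleus" while the blobs, acting as single charged clusters, stay apart.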

  20. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
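The POMDP-style representation mentioned at the end can be sketched concretely: a value function over a continuous region is the upper envelope (pointwise max) of a set of linear functions, and dominated pieces can be pruned. The sampling-based pruning below is an illustrative simplification, not the paper's exact method:

```python
import numpy as np

def value(x, linear_pieces):
    """Value at continuous state x of a function represented, POMDP-style,
    as the pointwise max of linear pieces w·x + b."""
    return max(w @ x + b for w, b in linear_pieces)

def prune(linear_pieces, samples):
    """Keep only pieces attaining the max at some sampled state (a simple
    sampling-based stand-in for exact dominance pruning)."""
    keep = {max(range(len(linear_pieces)),
                key=lambda i: linear_pieces[i][0] @ x + linear_pieces[i][1])
            for x in samples}
    return [linear_pieces[i] for i in sorted(keep)]

# three pieces over a 1-D continuous state on [0, 2]; the flat piece is dominated
pieces = [(np.array([1.0]), 0.0), (np.array([-1.0]), 2.0), (np.array([0.0]), -5.0)]
xs = [np.array([t]) for t in np.linspace(0.0, 2.0, 21)]
pruned = prune(pieces, xs)
```

Keeping the representation small after each backup is what makes the piecewise-linear case tractable as the number of continuous dimensions grows.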

  1. Evolutionary multidimensional access architecture featuring cost-reduced components

    NASA Astrophysics Data System (ADS)

    Farjady, Farsheed; Parker, Michael C.; Walker, Stuart D.

    1998-12-01

    We describe a three-stage wavelength-routed optical access network, utilizing coarse passband-flattened arrayed- waveguide grating routers. An N-dimensional addressing strategy enables 6912 customers to be bi-directionally addressed with multi-Gb/s data using only 24 wavelengths spaced by 1.6 nm. Coarse wavelength separation allows use of increased tolerance WDM components at the exchange and customer premises. The architecture is designed to map onto standard access network topologies, allowing elegant upgradability from legacy PON infrastructures at low cost. Passband-flattening of the routers is achieved through phase apodization.

  2. Six-State Quantum Key Distribution Using Photons with Orbital Angular Momentum

    NASA Astrophysics Data System (ADS)

    Li, Jun-Lin; Wang, Chuan

    2010-11-01

A new implementation of a high-dimensional quantum key distribution (QKD) protocol is discussed. Using three mutually unbiased bases, we present a d-level six-state QKD protocol that exploits the orbital angular momentum of the spatial mode of the light beam. The protocol features high capacity, since keys are encoded using photon modes in a d-level Hilbert space. The devices for state preparation and measurement are also discussed. This protocol has high security, and alignment of shared reference frames between sender and receiver is not needed.
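The mutual-unbiasedness condition that underpins the security argument is easy to verify numerically. As a qubit-sized illustration of the d-level construction (the paper itself encodes in orbital angular momentum modes), the three MUBs for d = 2 are the eigenbases of the Pauli Z, X, and Y operators:

```python
import numpy as np

# The three mutually unbiased bases of a qubit (d = 2): eigenbases of Z, X, Y.
B_Z = [np.array([1, 0], complex), np.array([0, 1], complex)]
B_X = [np.array([1, 1], complex) / np.sqrt(2), np.array([1, -1], complex) / np.sqrt(2)]
B_Y = [np.array([1, 1j], complex) / np.sqrt(2), np.array([1, -1j], complex) / np.sqrt(2)]

def mutually_unbiased(B1, B2, d=2):
    """True iff |<a|b>|^2 = 1/d for every cross-basis pair: measuring in the
    wrong basis then yields a uniformly random, information-free outcome."""
    return all(abs(abs(np.vdot(a, b)) ** 2 - 1.0 / d) < 1e-12
               for a in B1 for b in B2)
```

For general d the same |⟨a|b⟩|² = 1/d test applies, with the bases realized as superpositions of d orbital angular momentum modes.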

  3. Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach.

    PubMed

    Camporeale, E; Sorriso-Valvo, L; Califano, F; Retinò, A

    2018-03-23

Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, in any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (both in the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.

  4. Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach

    NASA Astrophysics Data System (ADS)

    Camporeale, E.; Sorriso-Valvo, L.; Califano, F.; Retinò, A.

    2018-03-01

Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, in any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (both in the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.

  5. Stochastic simulation of spatially correlated geo-processes

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.
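The reduction of a spatial simulation to one-dimensional simulations is the essence of the turning bands idea, and can be sketched in a few lines. This is an illustrative toy, not the paper's operator: the number of lines, grid, and Gaussian smoothing kernel below are all assumptions:

```python
import numpy as np

def turning_bands_2d(points, n_lines=32, n_grid=256, length=20.0, corr=1.0, seed=0):
    """Minimal turning-bands sketch: a correlated 2D field is built by
    projecting each point onto random line directions and summing independent
    1D correlated processes simulated along those lines."""
    rng = np.random.default_rng(seed)
    dx = length / n_grid
    field = np.zeros(len(points))
    for _ in range(n_lines):
        theta = rng.uniform(0.0, np.pi)
        u = np.array([np.cos(theta), np.sin(theta)])       # band direction
        # 1D correlated process: white noise smoothed by a Gaussian kernel
        lags = np.arange(-3.0 * corr, 3.0 * corr + dx, dx)
        kernel = np.exp(-((lags / corr) ** 2))
        z = np.convolve(rng.normal(size=n_grid), kernel, mode="same")
        t = points @ u                                     # 1D coordinate on the line
        idx = np.clip(((t + length / 2) / dx).astype(int), 0, n_grid - 1)
        field += z[idx]
    return field / np.sqrt(n_lines)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
vals = turning_bands_2d(pts)   # nearby points correlate, distant ones decorrelate
```

Averaged over realizations, the squared increment between the two nearby points is much smaller than between the distant pair, which is exactly the spatial correlation structure the method is designed to reproduce.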

  6. Features in chemical kinetics. I. Signatures of self-emerging dimensional reduction from a general format of the evolution law

    NASA Astrophysics Data System (ADS)

    Nicolini, Paolo; Frezzato, Diego

    2013-06-01

Simplification of chemical kinetics description through dimensional reduction is particularly important to achieve an accurate numerical treatment of complex reacting systems, especially when stiff kinetics are considered and a comprehensive picture of the evolving system is required. To this aim several tools have been proposed in the past decades, such as sensitivity analysis, lumping approaches, and exploitation of time scales separation. In addition, there are methods based on the existence of the so-called slow manifolds, which are hyper-surfaces of lower dimension than the whole phase-space, in whose neighborhood the slow evolution occurs after an initial fast transient. On the other hand, all tools contain to some extent a degree of subjectivity which seems to be irremovable. With reference to macroscopic and spatially homogeneous reacting systems under isothermal conditions, in this work we adopt a phenomenological approach to let the dimensional reduction emerge by itself from the mathematical structure of the evolution law. By transforming the original system of polynomial differential equations, which describes the chemical evolution, into a universal quadratic format, and making a direct inspection of the high-order time-derivatives of the new dynamic variables, we then formulate a conjecture which leads to the concept of an "attractiveness" region in the phase-space where a well-defined state-dependent rate function ω has the simple evolution ω̇ = -ω² along any trajectory up to the stationary state. This constitutes, by itself, a drastic dimensional reduction from a system of N-dimensional equations (N being the number of chemical species) to a one-dimensional and universal evolution law for such a characteristic rate. Step-by-step numerical inspections on model kinetic schemes are presented. In the companion paper [P. Nicolini and D. Frezzato, J. Chem. Phys. 138, 234102 (2013), 10.1063/1.4809593] this outcome will be naturally related to the appearance (and hence, to the definition) of the slow manifolds.
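The universal law ω̇ = -ω² stated in the abstract integrates in closed form to ω(t) = ω₀/(1 + ω₀t), independent of the underlying N-species kinetics. A short sketch confirms that a direct numerical integration converges to this solution:

```python
# Check of the one-dimensional evolution law dω/dt = -ω^2,
# whose closed-form solution is ω(t) = ω0 / (1 + ω0 t).
def integrate_omega(omega0, t_end=5.0, n=100000):
    dt = t_end / n
    omega = omega0
    for _ in range(n):
        omega += dt * (-omega * omega)   # explicit Euler step
    return omega

omega0 = 2.0
numeric = integrate_omega(omega0)
exact = omega0 / (1.0 + omega0 * 5.0)    # = 2/11
```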

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Ying; Li, Hong; Bridges, Denzel

We report that the continuing miniaturization of microelectronics is pushing advanced manufacturing into nanomanufacturing. Nanojoining is a bottom-up assembly technique that enables functional nanodevice fabrication with dissimilar nanoscopic building blocks and/or molecular components. Various conventional joining techniques have been modified and re-invented for joining nanomaterials. Our review surveys recent progress in nanojoining methods, as compared to conventional joining processes. Examples of nanojoining are given and classified by the dimensionality of the joining materials. At each classification, nanojoining is reviewed and discussed according to materials specialties, low dimensional processing features, energy input mechanisms and potential applications. The preparation of new intermetallic materials by reactive nanoscale multilayer foils based on self-propagating high-temperature synthesis is highlighted. This review will provide insight into nanojoining fundamentals and innovative applications in power electronics packaging, plasmonic devices, nanosoldering for printable electronics, 3D printing and space manufacturing.

  8. Unique Zigzag-Shaped Buckling Zn2C Monolayer with Strain-Tunable Band Gap and Negative Poisson Ratio.

    PubMed

    Meng, Lingbiao; Zhang, Yingjuan; Zhou, Minjie; Zhang, Jicheng; Zhou, Xiuwen; Ni, Shuang; Wu, Weidong

    2018-02-19

Designing new materials with reduced dimensionality and distinguished properties has continuously attracted intense interest for materials innovation. Here we report a novel two-dimensional (2D) Zn2C monolayer nanomaterial with exceptional structure and properties by means of first-principles calculations. This new Zn2C monolayer is composed of quasi-tetrahedral tetracoordinate carbon and quasi-linear bicoordinate zinc, featuring a peculiar zigzag-shaped buckling configuration. The unique coordination topology endows this natural 2D semiconducting monolayer with a strongly strain-tunable band gap and unusual negative Poisson ratios. The monolayer has good dynamic and thermal stabilities and is also the lowest-energy structure in 2D space, as indicated by the particle-swarm optimization (PSO) method, implying its synthetic feasibility. With these intriguing properties, the material may find applications in nanoelectronics and micromechanics.

  9. Stern-Gerlach dynamics with quantum propagators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Bailey C.; Berrondo, Manuel; Van Huele, Jean-Francois S.

    2011-01-15

We study the quantum dynamics of a nonrelativistic neutral particle with spin in inhomogeneous external magnetic fields. We first consider fields with one-dimensional inhomogeneities, both unphysical and physical, and construct the corresponding analytic propagators. We then consider fields with two-dimensional inhomogeneities and develop an appropriate numerical propagation method. We propagate initial states exhibiting different degrees of space localization and various initial spin configurations, including both pure and mixed spin states. We study the evolution of their spin densities and identify characteristic features of spin density dynamics, such as the spatial separation of spin components, and spin localization or accumulation. We compare our approach and our results with the coverage of the Stern-Gerlach effect in the literature, and we focus on nonstandard Stern-Gerlach outcomes, such as radial separation, spin focusing, spin oscillation, and spin flipping.

  10. Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.

    PubMed

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-12-13

In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term representing an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables that are coarser than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.

  11. Neural encoding of large-scale three-dimensional space-properties and constraints.

    PubMed

    Jeffery, Kate J; Wilson, Jonathan J; Casali, Giulio; Hayman, Robin M

    2015-01-01

How the brain represents large-scale, navigable space has been the topic of intensive investigation for several decades, resulting in the discovery that neurons in a complex network of cortical and subcortical brain regions co-operatively encode distance, direction, place, movement, etc., using a variety of different sensory inputs. However, such studies have mainly been conducted in simple laboratory settings in which animals explore small, two-dimensional (i.e., flat) arenas. The real world, by contrast, is complex and three dimensional, with hills, valleys, tunnels, branches, and, for species that can swim or fly, large volumetric spaces. Adding a dimension to space adds coding challenges, a primary reason being that several basic geometric properties are different in three dimensions. This article explores the consequences of these challenges for the establishment of a functional three-dimensional metric map of space, one of which is that the brains of some species might have evolved to reduce the dimensionality of the representational space and thus sidestep some of these problems.

  12. Static stability of a three-dimensional space truss. M.S. Thesis - Case Western Reserve Univ., 1994

    NASA Technical Reports Server (NTRS)

    Shaker, John F.

    1995-01-01

    In order to deploy large flexible space structures it is necessary to develop support systems that are strong and lightweight. The most recent example of this aerospace design need is vividly evident in the space station solar array assembly. In order to accommodate both weight limitations and strength performance criteria, ABLE Engineering has developed the Folding Articulating Square Truss (FASTMast) support structure. The FASTMast is a space truss/mechanism hybrid that can provide system support while adhering to stringent packaging demands. However, due to its slender nature and anticipated loading, stability characterization is a critical part of the design process. Furthermore, the dire consequences surely to result from a catastrophic instability quickly provide the motivation for careful examination of this problem. The fundamental components of the space station solar array system are the (1) solar array blanket system, (2) FASTMast support structure, and (3) mast canister assembly. The FASTMast once fully deployed from the canister will provide support to the solar array blankets. A unique feature of this structure is that the system responds linearly within a certain range of operating loads and nonlinearly when that range is exceeded. The source of nonlinear behavior in this case is due to a changing stiffness state resulting from an inability of diagonal members to resist applied loads. The principal objective of this study was to establish the failure modes involving instability of the FASTMast structure. Also of great interest during this effort was to establish a reliable analytical approach capable of effectively predicting critical values at which the mast becomes unstable. Due to the dual nature of structural response inherent to this problem, both linear and nonlinear analyses are required to characterize the mast in terms of stability. The approach employed herein is one that can be considered systematic in nature. 
The analysis begins with one- and two-dimensional failure models of the system and its important components. From knowledge gained through preliminary analyses, a foundation is developed for three-dimensional analyses of the FASTMast structure. The three-dimensional finite element (FE) analysis presented here involves a FASTMast system one-tenth the size of the actual flight unit. Although this study does not yield failure analysis results that apply directly to the flight article, it does establish a method by which the full-scale mast can be evaluated.

  13. Static stability of a three-dimensional space truss

    NASA Astrophysics Data System (ADS)

    Shaker, John F.

    1995-05-01

    In order to deploy large flexible space structures it is necessary to develop support systems that are strong and lightweight. The most recent example of this aerospace design need is vividly evident in the space station solar array assembly. In order to accommodate both weight limitations and strength performance criteria, ABLE Engineering has developed the Folding Articulating Square Truss (FASTMast) support structure. The FASTMast is a space truss/mechanism hybrid that can provide system support while adhering to stringent packaging demands. However, due to its slender nature and anticipated loading, stability characterization is a critical part of the design process. Furthermore, the dire consequences sure to result from a catastrophic instability provide strong motivation for careful examination of this problem. The fundamental components of the space station solar array system are the (1) solar array blanket system, (2) FASTMast support structure, and (3) mast canister assembly. The FASTMast, once fully deployed from the canister, provides support to the solar array blankets. A unique feature of this structure is that the system responds linearly within a certain range of operating loads and nonlinearly when that range is exceeded. The source of nonlinear behavior in this case is a changing stiffness state resulting from an inability of diagonal members to resist applied loads. The principal objective of this study was to establish the failure modes involving instability of the FASTMast structure. Also of great interest during this effort was to establish a reliable analytical approach capable of effectively predicting critical values at which the mast becomes unstable. Due to the dual nature of structural response inherent to this problem, both linear and nonlinear analyses are required to characterize the mast in terms of stability. The approach employed herein is systematic in nature.
The analysis begins with one- and two-dimensional failure models of the system and its important components. From knowledge gained through these preliminary analyses, a foundation is developed for three-dimensional analyses of the FASTMast structure. The three-dimensional finite element (FE) analysis presented here involves a FASTMast system one-tenth the size of the actual flight unit. Although this study does not yield failure analysis results that apply directly to the flight article, it does establish a method by which the full-scale mast can be evaluated.

  14. An efficient semi-implicit method for three-dimensional non-hydrostatic flows in compliant arterial vessels.

    PubMed

    Fambri, Francesco; Dumbser, Michael; Casulli, Vincenzo

    2014-11-01

    Blood flow in arterial systems can be described by the three-dimensional Navier-Stokes equations within a time-dependent spatial domain that accounts for the elasticity of the arterial walls. In this article, blood is treated as an incompressible Newtonian fluid that flows through compliant vessels of general cross section. A three-dimensional semi-implicit finite difference and finite volume model is derived so that numerical stability is obtained at a low computational cost on a staggered grid. The key idea of the method consists in a splitting of the pressure into a hydrostatic and a non-hydrostatic part, where first a small quasi-one-dimensional nonlinear system is solved for the hydrostatic pressure and only in a second step the fully three-dimensional non-hydrostatic pressure is computed from a three-dimensional nonlinear system as a correction to the hydrostatic one. The resulting algorithm is robust, efficient, locally and globally mass conservative, and applies to hydrostatic and non-hydrostatic flows in one, two and three space dimensions. These features are illustrated on nontrivial test cases for flows in tubes with circular or elliptical cross section where the exact analytical solution is known. Test cases of steady and pulsatile flows in uniformly curved rigid and elastic tubes are presented. Wherever possible, axial velocity development and secondary flows are shown and compared with previously published results. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Automated Analysis, Classification, and Display of Waveforms

    NASA Technical Reports Server (NTRS)

    Kwan, Chiman; Xu, Roger; Mayhew, David; Zhang, Frank; Zide, Alan; Bonggren, Jeff

    2004-01-01

    A computer program partly automates the analysis, classification, and display of waveforms represented by digital samples. In the original application for which the program was developed, the raw waveform data to be analyzed by the program are acquired from space-shuttle auxiliary power units (APUs) at a sampling rate of 100 Hz. The program could also be modified for application to other waveforms -- for example, electrocardiograms. The program begins by performing principal-component analysis (PCA) of 50 normal-mode APU waveforms. Each waveform is segmented. A covariance matrix is formed by use of the segmented waveforms. Three eigenvectors corresponding to three principal components are calculated. To generate features, each waveform is then projected onto the eigenvectors. These features are displayed on a three-dimensional diagram, facilitating the visualization of the trend of APU operations.
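
A minimal sketch of the pipeline this abstract describes (principal-component analysis of segmented waveforms via a covariance matrix, three eigenvectors, projection to 3-D features), assuming synthetic stand-in waveforms rather than real APU telemetry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 50 normal-mode APU waveforms sampled at 100 Hz:
# each row is one segmented waveform of 200 samples (shared trend plus noise).
waveforms = rng.standard_normal((50, 200)) + np.sin(np.linspace(0, 8 * np.pi, 200))

# Principal-component analysis: centre the data, form the covariance matrix,
# and keep the eigenvectors of the three largest eigenvalues.
centered = waveforms - waveforms.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
top3 = eigvecs[:, -3:][:, ::-1]             # three principal components

# Project each waveform onto the eigenvectors to generate features; the result
# is one 3-D point per waveform, ready for a three-dimensional diagram.
features = centered @ top3
print(features.shape)
```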

  16. Classification and Verification of Handwritten Signatures with Time Causal Information Theory Quantifiers

    PubMed Central

    Ospina, Raydonal; Frery, Alejandro C.

    2016-01-01

    We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces and often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups. PMID:27907014
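
The Bandt and Pompe symbolization underlying these descriptors can be sketched as follows. Only the normalized Shannon entropy over ordinal patterns is shown (the statistical complexity and Fisher information are omitted), and the input series are invented toy sequences:

```python
import math
from itertools import permutations

def permutation_entropy(series, order=3):
    """Shannon entropy of the Bandt-Pompe ordinal-pattern distribution.

    Each window of `order` consecutive values is mapped to the permutation
    that sorts it; the entropy of the resulting histogram, normalized by
    log(order!), is a fast, robust descriptor of the series' dynamics.
    """
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = len(series) - order + 1
    probs = [c / total for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))   # normalized to [0, 1]

# A monotone ramp visits a single ordinal pattern, so its entropy is zero.
print(permutation_entropy(list(range(100))))   # 0.0
```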

  17. Stability of Internal Space in Kaluza-Klein Theory

    NASA Astrophysics Data System (ADS)

    Maeda, K.; Soda, J.

    1998-12-01

    We extend a model studied by Li and Gott III to investigate the stability of internal space in Kaluza-Klein theory. Our model is a four-dimensional de Sitter space plus an n-dimensional compactified internal space. We introduce a solution of the semi-classical Einstein equation which shows that an n-dimensional compactified internal space can be stabilized by the Casimir effect. The self-consistency of this solution is checked. One may apply this solution to study the issue of the black hole singularity.

  18. Three-dimensional marginal separation

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1988-01-01

    The three-dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space and partly in physical space. Qualitatively, the results are similar to previously reported two-dimensional results (which are also computed to test the accuracy of the numerical scheme); quantitatively, however, the three-dimensional results differ substantially.

  19. Exploring space-time structure of human mobility in urban space

    NASA Astrophysics Data System (ADS)

    Sun, J. B.; Yuan, J.; Wang, Y.; Si, H. B.; Shan, X. M.

    2011-03-01

    Understanding of human mobility in urban space benefits the planning and provision of municipal facilities and services. Due to the high penetration of cell phones, mobile cellular networks provide information for urban dynamics with a large spatial extent and continuous temporal coverage in comparison with traditional approaches. The original data investigated in this paper were collected by cellular networks in a southern city of China, recording the population distribution by dividing the city into thousands of pixels. The space-time structure of urban dynamics is explored by applying Principal Component Analysis (PCA) to the original data, from temporal and spatial perspectives between which there is a dual relation. Based on the results of the analysis, we have discovered four underlying rules of urban dynamics: low intrinsic dimensionality, three categories of common patterns, dominance of periodic trends, and temporal stability. It implies that the space-time structure can be captured well by remarkably few temporal or spatial predictable periodic patterns, and the structure unearthed by PCA evolves stably over time. All these features play a critical role in the applications of forecasting and anomaly detection.
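
The low intrinsic dimensionality described here can be illustrated with a hedged toy example: a synthetic pixel-by-time population matrix built from two periodic patterns plus noise, analysed with PCA via the SVD (the cellular-network data themselves are of course not reproduced, and all names below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population counts: 200 pixels observed hourly over 7 days,
# built from two periodic temporal patterns (a daily rhythm and a 12-hour
# rhythm) plus noise, i.e. deliberately low intrinsic dimensionality.
t = np.arange(24 * 7)
daily = np.sin(2 * np.pi * t / 24)
half_daily = np.cos(4 * np.pi * t / 24)
loadings = rng.standard_normal((200, 2))
data = loadings @ np.vstack([daily, half_daily]) \
       + 0.05 * rng.standard_normal((200, t.size))

# PCA via SVD of the centred matrix: rows of vt are the common temporal
# patterns, and the explained-variance ratio reveals the intrinsic dimension.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(explained[:3])   # almost all variance lies in the first two components
```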

  20. a Probabilistic Embedding Clustering Method for Urban Structure Detection

    NASA Astrophysics Data System (ADS)

    Lin, X.; Li, H.; Zhang, Y.; Gao, L.; Zhao, L.; Deng, M.

    2017-09-01

    Urban structure detection is a basic task in urban geography. Clustering is a core technology for detecting patterns in urban spatial structure, urban functional regions, and so on. In the big-data era, diverse urban sensing datasets recording information such as human behaviour and social activity suffer from both high dimensionality and high noise, and state-of-the-art clustering methods do not handle these two issues concurrently. In this paper, a probabilistic embedding clustering method is proposed. First, we introduce a Probabilistic Embedding Model (PEM) that learns latent features from high-dimensional urban sensing data. The latent features capture essential patterns hidden in the high-dimensional data, while the probabilistic model reduces the uncertainty caused by high noise. Second, by tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, that is, communities with intensive interaction and communities playing the same roles in the urban structure. We evaluated the performance of our model through experiments on real-world data from Shanghai (China), which confirmed that the method discovers both kinds of structure.

  1. Dimension- and space-based intertrial effects in visual pop-out search: modulation by task demands for focal-attentional processing.

    PubMed

    Krummenacher, Joseph; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2009-03-01

    Two experiments compared reaction times (RTs) in visual search for singleton feature targets defined, variably across trials, in either the color or the orientation dimension. Experiment 1 required observers to simply discern target presence versus absence (simple-detection task); Experiment 2 required them to respond to a detection-irrelevant form attribute of the target (compound-search task). Experiment 1 revealed a marked dimensional intertrial effect of 34 ms for a target defined in a changed versus a repeated dimension, and an intertrial target distance effect, with a 4-ms increase in RTs (per unit of distance) as the separation of the current relative to the preceding target increased. Conversely, in Experiment 2, the dimension change effect was markedly reduced (11 ms), while the intertrial target distance effect was markedly increased (11 ms per unit of distance). The results suggest that dimension change/repetition effects are modulated by the amount of attentional focusing required by the task, with space-based attention altering the integration of dimension-specific feature contrast signals at the level of the overall-saliency map.

  2. The Gamma-Ray Burst ToolSHED is Open for Business

    NASA Astrophysics Data System (ADS)

    Giblin, Timothy W.; Hakkila, Jon; Haglin, David J.; Roiger, Richard J.

    2004-09-01

    The GRB ToolSHED, a Gamma-Ray Burst SHell for Expeditions in Data-Mining, is now online and available via a web browser to all in the scientific community. The ToolSHED is an online web utility that contains pre-processed burst attributes of the BATSE catalog and a suite of induction-based machine learning and statistical tools for classification and cluster analysis. Users create their own login account and study burst properties within user-defined multi-dimensional parameter spaces. Although new GRB attributes are periodically added to the database for user selection, the ToolSHED has a feature that allows users to upload their own burst attributes (e.g. spectral parameters, etc.) so that additional parameter spaces can be explored. A data visualization feature using GNUplot and web-based IDL has also been implemented to provide interactive plotting of user-selected session output. In an era in which GRB observations and attributes are becoming increasingly more complex, a utility such as the GRB ToolSHED may play an important role in deciphering GRB classes and understanding intrinsic burst properties.

  3. Online Condition Monitoring of Bearings to Support Total Productive Maintenance in the Packaging Materials Industry.

    PubMed

    Gligorijevic, Jovan; Gajic, Dragoljub; Brkovic, Aleksandar; Savic-Gajic, Ivana; Georgieva, Olga; Di Gennaro, Stefano

    2016-03-01

    The packaging materials industry has already recognized the importance of Total Productive Maintenance as a system of proactive techniques for improving equipment reliability. Bearing faults, which often develop gradually, represent one of the foremost causes of failures in the industry. Therefore, detecting these faults at an early stage is important to ensure reliable and efficient operation. We present a new automated technique for early fault detection and diagnosis in rolling-element bearings based on vibration signal analysis. Following the wavelet decomposition of vibration signals into a few sub-bands of interest, the standard deviation of the obtained wavelet coefficients is extracted as a representative feature. Then, the feature space dimension is optimally reduced to two using scatter matrices. In the reduced two-dimensional feature space, fault detection and diagnosis are carried out by quadratic classifiers. Accuracy of the technique has been tested on four classes of recorded vibration signals, i.e., normal operation and operation with a fault of the inner race, the outer race, or a ball. The overall accuracy of 98.9% has been achieved. The new technique can be used to support maintenance decision-making processes and, thus, to increase reliability and efficiency in the industry by preventing unexpected faulty operation of bearings.
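
A compact sketch of this pipeline in plain numpy, with a hand-rolled multi-level Haar transform standing in for the wavelet decomposition and Fisher-style scatter matrices for the reduction to two dimensions; the vibration signals, fault model, and every parameter below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_subband_stds(signal, levels=3):
    """Standard deviation of the detail coefficients of a `levels`-deep Haar
    decomposition (plus the final approximation) as representative features."""
    feats, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail sub-band
        feats.append(d.std())
        approx = a
    feats.append(approx.std())
    return np.array(feats)

# Hypothetical vibration signals: faulty bearings add high-frequency content.
def make_signal(faulty):
    t = np.arange(256)
    base = np.sin(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(256)
    return base + (0.8 * np.sin(2 * np.pi * t / 4) if faulty else 0.0)

X = np.array([haar_subband_stds(make_signal(f)) for f in [False] * 30 + [True] * 30])
y = np.array([0] * 30 + [1] * 30)

# Scatter-matrix reduction: maximise between-class over within-class scatter
# and keep the two leading directions (with two classes only the first
# direction carries class information; the second is kept for the 2-D plot).
mean = X.mean(axis=0)
Sw = sum(np.cov(X[y == c].T) for c in (0, 1))
Sb = sum(len(X[y == c]) * np.outer(X[y == c].mean(0) - mean,
                                   X[y == c].mean(0) - mean) for c in (0, 1))
vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(vals.real)[::-1]
X2 = X @ vecs[:, order[:2]].real        # reduced two-dimensional feature space
print(X2.shape)
```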

  4. Online Condition Monitoring of Bearings to Support Total Productive Maintenance in the Packaging Materials Industry

    PubMed Central

    Gligorijevic, Jovan; Gajic, Dragoljub; Brkovic, Aleksandar; Savic-Gajic, Ivana; Georgieva, Olga; Di Gennaro, Stefano

    2016-01-01

    The packaging materials industry has already recognized the importance of Total Productive Maintenance as a system of proactive techniques for improving equipment reliability. Bearing faults, which often develop gradually, represent one of the foremost causes of failures in the industry. Therefore, detecting these faults at an early stage is important to ensure reliable and efficient operation. We present a new automated technique for early fault detection and diagnosis in rolling-element bearings based on vibration signal analysis. Following the wavelet decomposition of vibration signals into a few sub-bands of interest, the standard deviation of the obtained wavelet coefficients is extracted as a representative feature. Then, the feature space dimension is optimally reduced to two using scatter matrices. In the reduced two-dimensional feature space, fault detection and diagnosis are carried out by quadratic classifiers. Accuracy of the technique has been tested on four classes of recorded vibration signals, i.e., normal operation and operation with a fault of the inner race, the outer race, or a ball. The overall accuracy of 98.9% has been achieved. The new technique can be used to support maintenance decision-making processes and, thus, to increase reliability and efficiency in the industry by preventing unexpected faulty operation of bearings. PMID:26938541

  5. Merged or monolithic? Using machine-learning to reconstruct the dynamical history of simulated star clusters

    NASA Astrophysics Data System (ADS)

    Pasquato, Mario; Chung, Chul

    2016-05-01

    Context. Machine-learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock-observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting datapoints are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.
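
The repeated random subsampling validation used here can be sketched with a toy stand-in: synthetic two-class feature vectors and a nearest-centroid rule in place of the paper's C5.0/k-NN/SVM classifiers (all data and parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reduced feature vectors: class 0 = monolithically evolved
# clusters, class 1 = merger products, offset along each of 5 dimensions.
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.2, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_error(Xtr, ytr, Xte, yte):
    """Misclassification rate of a nearest-centroid rule (stand-in classifier)."""
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Xte[:, None, :] - cents) ** 2).sum(-1), axis=1)
    return float(np.mean(pred != yte))

# Repeated random subsampling validation: many random train/test splits,
# averaging the held-out misclassification rate.
errors = []
for _ in range(50):
    idx = rng.permutation(len(y))
    tr, te = idx[:150], idx[150:]
    errors.append(nearest_centroid_error(X[tr], y[tr], X[te], y[te]))
print(round(float(np.mean(errors)), 3))
```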

  6. Higher-dimensional Bianchi type-VIh cosmologies

    NASA Astrophysics Data System (ADS)

    Lorenz-Petzold, D.

    1985-09-01

    The higher-dimensional perfect fluid equations of a generalization of the (1 + 3)-dimensional Bianchi type-VIh space-time are discussed. Bianchi type-V and Bianchi type-III space-times are also included as special cases. It is shown that the Chodos-Detweiler (1980) mechanism of cosmological dimensional-reduction is possible in these cases.

  7. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect in the field of machine learning. It entails the search for an optimal subset of a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as the filter method. As the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNN model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.

  8. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    PubMed

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.

  9. THE GENERALIZATION OF SIERPINSKI CARPET AND MENGER SPONGE IN n-DIMENSIONAL SPACE

    NASA Astrophysics Data System (ADS)

    Yang, Yun; Feng, Yuting; Yu, Yanhua

    In this paper, we generalize the Sierpinski carpet and the Menger sponge to n-dimensional space, using the generations and characterizations of affinely equivalent Sierpinski carpets and Menger sponges. In particular, the Menger sponge in 4-dimensional space can be drawn out clearly under an affine transformation. Furthermore, the method extends to a much broader class of fractals.
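
One standard digit-based way to generalize the carpet and sponge to n dimensions (shown here as a sketch, not necessarily the paper's exact construction) keeps a subcube only if, at every depth of its base-3 expansion, fewer than two of its n coordinate digits equal 1:

```python
def in_sponge(coords, level):
    """Membership test for an n-dimensional Menger sponge generalization.

    A subcube is removed whenever, at some depth of the base-3 expansion,
    two or more of its n coordinate digits are the middle digit 1. This
    rule yields the Sierpinski carpet for n = 2 and the Menger sponge for
    n = 3. `coords` are integers in [0, 3**level).
    """
    for _ in range(level):
        digits = [c % 3 for c in coords]
        if sum(d == 1 for d in digits) >= 2:
            return False
        coords = [c // 3 for c in coords]
    return True

# Level-1 Sierpinski carpet (n = 2): 8 of the 9 subsquares survive.
carpet = sum(in_sponge((x, y), 1) for x in range(3) for y in range(3))
print(carpet)   # 8

# Level-1 Menger sponge (n = 3): 20 of the 27 subcubes survive.
sponge = sum(in_sponge((x, y, z), 1)
             for x in range(3) for y in range(3) for z in range(3))
print(sponge)   # 20
```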

  10. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was 134% in 2-dimensional space and 143% in 3-dimensional space.
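
A hedged sketch of such a positioning scheme: reference tags on a hypothetical 1 m grid, all-or-nothing detection (no signal strength) within 125% of the spacing, and the reader position estimated as the centroid of the detected tags:

```python
import math

def estimate_position(reader, tags, detection_range):
    """k-NN style fix: average the coordinates of every reference tag the
    reader detects; detection is all-or-nothing, with no signal strength."""
    seen = [t for t in tags if math.dist(reader, t) <= detection_range]
    n = len(seen)
    return tuple(sum(c) / n for c in zip(*seen)) if seen else None

# Reference tags on a hypothetical 1 m grid; detection range set to 125% of
# the tag spacing (the 1-D optimum from the study, used here in 2-D purely
# for illustration).
spacing = 1.0
tags = [(x * spacing, y * spacing) for x in range(10) for y in range(10)]
est = estimate_position((4.3, 5.1), tags, 1.25 * spacing)
print(est)   # (4.4, 5.2)
```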

  11. CLICK: The new USGS center for LIDAR information coordination and knowledge

    USGS Publications Warehouse

    Stoker, Jason M.; Greenlee, Susan K.; Gesch, Dean B.; Menig, Jordan C.

    2006-01-01

    Elevation data is rapidly becoming an important tool for the visualization and analysis of geographic information. The creation and display of three-dimensional models representing bare earth, vegetation, and structures have become major requirements for geographic research in the past few years. Light Detection and Ranging (lidar) has been increasingly accepted as an effective and accurate technology for acquiring high-resolution elevation data for bare earth, vegetation, and structures. Lidar is an active remote sensing system that records the distance, or range, of a laser fired from an airborne or spaceborne platform such as an airplane, helicopter or satellite to objects or features on the Earth’s surface. By converting lidar data into bare ground topography and vegetation or structural morphologic information, extremely accurate, high-resolution elevation models can be derived to visualize and quantitatively represent scenes in three dimensions. In addition to high-resolution digital elevation models (Evans et al., 2001), other lidar-derived products include quantitative estimates of vegetative features such as canopy height, canopy closure, and biomass (Lefsky et al., 2002), and models of urban areas such as building footprints and three-dimensional city models (Maas, 2001).

  12. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated with two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  13. One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.

    PubMed

    Biswas, Sujoy Kumar; Milanfar, Peyman

    2016-03-01

    One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components that preserve global structure of feature space, we actually seek a linear approximation to the Laplacian eigenmap that permits us a locality preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of Fourier transform combined with integral image to achieve superior runtime efficiency that allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. Besides, low computation cost of the proposed (codebook-free) object detector facilitates rather straightforward query detection in large data sets including movie videos.
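
The decision rule itself, matrix cosine similarity (the normalized Frobenius inner product of two descriptor matrices), is easy to state in code; the FFT and integral-image acceleration over sliding windows described in the abstract is not reproduced in this sketch, and the query matrix is invented:

```python
import numpy as np

def matrix_cosine_similarity(A, B):
    """Cosine of the angle between two matrices viewed as vectors:
    <A, B>_F / (||A||_F * ||B||_F), the decision rule used for detection."""
    num = float(np.sum(A * B))
    den = float(np.linalg.norm(A) * np.linalg.norm(B))
    return num / den

rng = np.random.default_rng(4)
Q = rng.standard_normal((8, 8))               # hypothetical query descriptor
print(matrix_cosine_similarity(Q, Q))         # ~1.0 for identical patches
print(matrix_cosine_similarity(Q, 2.0 * Q))   # scale-invariant: also ~1.0
```

Because the measure is invariant to the overall scale of either matrix, it compares the *pattern* of the descriptors rather than their magnitude, which is what makes it suitable as a detection score.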

  14. Controlling rogue waves in inhomogeneous Bose-Einstein condensates.

    PubMed

    Loomba, Shally; Kaur, Harleen; Gupta, Rama; Kumar, C N; Raju, Thokala Soloman

    2014-05-01

    We present the exact rogue wave solutions of the quasi-one-dimensional inhomogeneous Gross-Pitaevskii equation by using similarity transformation. Then, by employing the exact analytical solutions we have studied the controllable behavior of rogue waves in the Bose-Einstein condensates context for the experimentally relevant systems. Additionally, we have also investigated the nonlinear tunneling of rogue waves through a conventional hyperbolic barrier and periodic barrier. We have found that, for the conventional nonlinearity barrier case, rogue waves are localized in space and time and get amplified near the barrier, while for the dispersion barrier case rogue waves are localized in space and propagating in time and their amplitude is reduced at the barrier location. In the case of the periodic barrier, the interesting dynamical features of rogue waves are obtained and analyzed analytically.

  15. Space plasma contactor research, 1988

    NASA Technical Reports Server (NTRS)

    Williams, John D.; Wilbur, Paul J.

    1989-01-01

    Results of experiments conducted on hollow cathode-based plasma contactors are reported. Specific tests in which attempts were made to vary plasma conditions in the simulated ionospheric plasma are described. Experimental results showing the effects of contactor flowrate and ion collecting surface size on contactor performance and contactor plasma plume geometry are presented. In addition to this work, one-dimensional solutions to spherical and cylindrical space-charge-limited double-sheath problems are developed. A technique is proposed that can be used to apply these solutions to the problem of current flow through elongated double-sheaths that separate two cold plasmas. Two conference papers which describe the essential features of the plasma contacting process and present data that should facilitate calibration of comprehensive numerical models of the plasma contacting process are also included.

  16. Generalized Weierstrass-Mandelbrot Function Model for Actual Stocks Markets Indexes with Nonlinear Characteristics

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Yu, C.; Sun, J. Q.

    2015-03-01

    It is difficult to effectively simulate the dynamical behavior of actual financial market indexes, especially when they have nonlinear characteristics, so it is significant to propose a mathematical model with these characteristics. In this paper, we first investigate a generalized Weierstrass-Mandelbrot function (WMF) model with two nonlinear characteristics: fractal dimension D, where 2 > D > 1.5, and Hurst exponent H, where 1 > H > 0.5. We then study the dynamical behavior of H for the WMF as D and the spectrum of the time series γ change in three-dimensional space. Because the WMF and actual stock market indexes share two features, fractal behavior (characterized by the fractal dimension) and the long-memory effect (characterized by the Hurst exponent), we study the relationship between the WMF and actual stock market indexes. We choose a random value of γ and a fixed value of D for the WMF to simulate the S&P 500 index over different time ranges. The simulation results in three-dimensional space show that γ is important in the WMF model and that different values of γ may have the same effect on the nonlinearity of the WMF. We then calculate the skewness and kurtosis of the actual daily S&P 500 index over different time ranges, which can be used to choose the value of γ. Based on these results, we feed appropriate values of γ, D and the initial value into the WMF to simulate daily S&P 500 indexes. Using the fit-line method in two-dimensional space on the simulated values, we find that the generalized WMF model is effective for simulating different actual stock market indexes over different time ranges. It may be useful for understanding the dynamical behavior of many different financial markets.
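
One common real form of the (non-generalized) Weierstrass-Mandelbrot function, truncated for computation, can be sketched as follows; the parameter values are illustrative only, and the paper's generalized WMF adds further parameters beyond this basic form:

```python
import math

def wmf(t, D=1.7, gamma=1.5, n_max=40):
    """Truncated Weierstrass-Mandelbrot function in a common real form,
    W(t) = sum_n gamma**((D-2)*n) * (1 - cos(gamma**n * t)),
    with fractal dimension 1 < D < 2 and frequency ratio gamma > 1.
    (The infinite series is truncated at n_max terms for computation.)
    """
    return sum(gamma ** ((D - 2) * n) * (1 - math.cos(gamma ** n * t))
               for n in range(1, n_max + 1))

# Sample the function on a short grid; every term is non-negative and
# W(0) = 0, since 1 - cos(0) vanishes in each summand.
series = [wmf(t / 100) for t in range(200)]
print(series[0], round(series[100], 4))   # first value is 0.0
```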

  17. Hofstadter butterfly evolution in the space of two-dimensional Bravais lattices

    NASA Astrophysics Data System (ADS)

    Yılmaz, F.; Oktel, M. Ö.

    2017-06-01

    The self-similar energy spectrum of a particle in a periodic potential under a magnetic field, known as the Hofstadter butterfly, is determined by the lattice geometry as well as the external field. Recent realizations of artificial gauge fields and adjustable optical lattices in cold-atom experiments necessitate the consideration of these self-similar spectra for the most general two-dimensional lattice. In a previous work [F. Yılmaz et al., Phys. Rev. A 91, 063628 (2015), 10.1103/PhysRevA.91.063628], we investigated the evolution of the spectrum for an experimentally realized lattice which was tuned by changing the unit-cell structure but keeping the square Bravais lattice fixed. We now consider all possible Bravais lattices in two dimensions and investigate the structure of the Hofstadter butterfly as the lattice is deformed between lattices with different point-symmetry groups. We model the optical lattice with a sinusoidal real-space potential and obtain the tight-binding model for any lattice geometry by calculating the Wannier functions. We introduce the magnetic field via Peierls substitution and numerically calculate the energy spectrum. The transition between the two most symmetric lattices, i.e., the triangular and the square lattices, displays the importance of bipartite symmetry featuring deformation as well as closing of some of the major energy gaps. The transitions from the square to rectangular lattice and from the triangular to centered rectangular lattices are analyzed in terms of coupling of one-dimensional chains. We calculate the Chern numbers of the major gaps and Chern number transfer between bands during the transitions. We use gap Chern numbers to identify distinct topological regions in the space of Bravais lattices.
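    For the square-lattice case, the spectrum at rational flux α = p/q per plaquette reduces to diagonalizing a q-by-q Harper matrix. The following minimal numpy sketch (a textbook reconstruction of the Peierls-substitution step, not the authors' Wannier-function pipeline) traces out the butterfly's subbands:

```python
import numpy as np

def harper_spectrum(p, q, kx=0.0, ky=0.0):
    """Energy eigenvalues of the square-lattice Hofstadter (Harper)
    Hamiltonian at rational flux alpha = p/q per plaquette, obtained by
    Peierls substitution in a nearest-neighbor tight-binding model.
    Returns the q magnetic subband energies at Bloch momentum (kx, ky).
    """
    alpha = p / q
    H = np.zeros((q, q), dtype=complex)
    for m in range(q):
        H[m, m] = 2.0 * np.cos(2.0 * np.pi * alpha * m + ky)
    for m in range(q - 1):
        H[m, m + 1] = H[m + 1, m] = 1.0
    # Bloch phase across the q-site magnetic unit cell
    H[q - 1, 0] += np.exp(1j * q * kx)
    H[0, q - 1] += np.exp(-1j * q * kx)
    return np.linalg.eigvalsh(H)

# sweep rational fluxes to sample the butterfly's energy bands
bands = {(p, q): harper_spectrum(p, q)
         for q in range(1, 8) for p in range(q + 1) if np.gcd(p, q) == 1}
```

    Deforming the lattice, as in the paper, changes the hopping amplitudes and phases in this matrix; the square-lattice bandwidth stays within [-4, 4].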

  18. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    PubMed

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the cluster nearest to the query object and uses the triangle inequality to reduce the number of distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces.
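    The core pruning idea can be sketched in a few lines of numpy. This is not the authors' implementation: it restricts to exact 1-NN and a fixed cluster count, but the triangle-inequality bound it uses, d(q, x) >= |d(q, c) - d(x, c)| for a shared cluster center c, is the same:

```python
import numpy as np

def build(train, n_clusters=10, iters=20):
    """Buildup stage: plain Lloyd's k-means over the training set."""
    rng = np.random.default_rng(0)
    centers = train[rng.choice(len(train), n_clusters, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(train[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(n_clusters):
            pts = train[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    # distance of each point to its own center, cached for pruning
    d_to_center = np.linalg.norm(train - centers[labels], axis=1)
    return centers, labels, d_to_center

def query_1nn(q, train, centers, labels, d_to_center):
    """Searching stage: exact nearest neighbor; a point is skipped when
    the triangle-inequality lower bound already exceeds the best found."""
    d_qc = np.linalg.norm(centers - q, axis=1)
    best_dist, best_idx = np.inf, -1
    for c in np.argsort(d_qc):          # nearest cluster first
        for i in np.where(labels == c)[0]:
            if abs(d_qc[c] - d_to_center[i]) >= best_dist:
                continue                # pruned without computing d(q, x)
            d = np.linalg.norm(train[i] - q)
            if d < best_dist:
                best_dist, best_idx = d, i
    return best_idx, best_dist
```

    Because points are only skipped when the lower bound rules them out, the result is identical to brute-force search.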

  19. The use of virtual reality to reimagine two-dimensional representations of three-dimensional spaces

    NASA Astrophysics Data System (ADS)

    Fath, Elaine

    2015-03-01

    A familiar realm in the world of two-dimensional art is the craft of taking a flat canvas and creating, through color, size, and perspective, the illusion of a three-dimensional space. Using well-explored tricks of logic and sight, impossible landscapes such as those by surrealists de Chirico or Salvador Dalí seem to be windows into new and incredible spaces which appear to be simultaneously feasible and utterly nonsensical. As real-time 3D imaging becomes increasingly prevalent as an artistic medium, this process takes on an additional layer of depth: no longer is two-dimensional space restricted to strategies of light, color, line and geometry to create the impression of a three-dimensional space. A digital interactive environment is a space laid out in three dimensions, allowing the user to explore impossible environments in a way that feels very real. In this project, surrealist two-dimensional art was researched and reimagined: what would stepping into a de Chirico or a Magritte look and feel like, if the depth and distance created by light and geometry were not simply single-perspective illusions, but fully formed and explorable spaces? 3D environment-building software is allowing us to step into these impossible spaces in ways that 2D representations leave us yearning for. This art project explores what we gain--and what gets left behind--when these impossible spaces become doors, rather than windows. Using sketching, Maya 3D rendering software, and the Unity Engine, surrealist art was reimagined as a fully navigable real-time digital environment. The surrealist movement and its key artists were researched for their use of color, geometry, texture, and space and how these elements contributed to their work as a whole, which often conveys feelings of unexpectedness or uneasiness. The end goal was to preserve these feelings while allowing the viewer to actively engage with the space.

  20. Who Needs 3D When the Universe Is Flat?

    ERIC Educational Resources Information Center

    Eriksson, Urban; Linder, Cedric; Airey, John; Redfors, Andreas

    2014-01-01

    An overlooked feature in astronomy education is the need for students to learn to extrapolate three-dimensionality and the challenges that this may involve. Discerning critical features in the night sky that are embedded in dimensionality is a long-term learning process. Several articles have addressed the usefulness of three-dimensional (3D)…

  1. ULF foreshock under radial IMF: THEMIS observations and global kinetic simulation Vlasiator results compared

    NASA Astrophysics Data System (ADS)

    Palmroth, Minna; Vainio, Rami; Archer, Martin; Hietala, Heli; Afanasiev, Alexandr; Kempf, Yann; Hoilijoki, Sanni; von Alfthan, Sebastian

    2015-04-01

    For decades, a certain type of ultra-low-frequency wave with a period of about 30 seconds has been observed in the Earth's quasi-parallel foreshock. These waves, with a wavelength of about an Earth radius, are compressive and propagate at an average angle of 20 degrees with respect to the interplanetary magnetic field (IMF). The latter property has puzzled scientists, as the growth rate of the instability causing the waves is maximized along the magnetic field. So far, these waves have been characterized by single- or multi-spacecraft methods and by 2-dimensional hybrid-PIC simulations, which have not fully reproduced the wave properties. Vlasiator is a newly developed global hybrid-Vlasov simulation, which solves the six-dimensional phase-space distribution of protons utilising the Vlasov equation, while electrons are a charge-neutralising fluid. The outcome of the simulation is a global reproduction of ion-scale physics in a holistic manner, where the generation of physical features can be followed in time and their consequences can be quantitatively characterised. Vlasiator produces the ion distribution functions and the related kinetic physics in unprecedented detail, at the global magnetospheric scale, with a resolution of a couple of hundred kilometres in ordinary space and 20 km/s in velocity space. We run Vlasiator under a radial IMF in five dimensions: the three-dimensional velocity space at each point of a two-dimensional ordinary space in the ecliptic plane. We observe the generation of the 30-second ULF waves and characterize their evolution and physical properties in time. We compare the results both to THEMIS observations and to quasi-linear theory. We find that Vlasiator reproduces the foreshock ULF waves in all reported observational aspects, i.e., they have the observed wavelength and period, they are compressive, and they propagate obliquely to the IMF. In particular, we discuss issues related to the long-standing question of oblique propagation.

  2. Lagrangian statistics in weakly forced two-dimensional turbulence.

    PubMed

    Rivera, Michael K; Ecke, Robert E

    2016-01-01

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominantly to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.
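    For reference, the real-space structure functions mentioned above are straightforward to compute from a sampled velocity record. A one-dimensional numpy sketch (illustrative only, not the authors' analysis code):

```python
import numpy as np

def structure_function(u, p, max_lag):
    """p-th order real-space structure function
    S_p(r) = <|u(x + r) - u(x)|^p> of a 1-D velocity record u sampled
    on a uniform grid, for integer lags r = 1 .. max_lag (grid units)."""
    return np.array([np.mean(np.abs(u[r:] - u[:-r]) ** p)
                     for r in range(1, max_lag + 1)])

# sanity check: on a uniform shear u(x) = x every increment over lag r
# equals r exactly, so S_2(r) = r**2
u = np.arange(100.0)
S2 = structure_function(u, 2, 5)
```

    Moment ratios such as S_4 / S_2**2, used in the paper to probe changes in the velocity-difference distributions, follow directly from calls with different p.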

  3. Application of Hyperspectral Techniques to Monitoring and Management of Invasive Plant Species Infestation

    DTIC Science & Technology

    2008-01-01

    the sensor is a data cloud in multi-dimensional space with each band generating an axis of dimension. When the data cloud is viewed in two or three...endmember of interest is not a true endmember in the data space. Figure 8: Linear mixture models. A) two-dimensional ...multi-dimensional space. A classifier is a computer algorithm that takes

  4. Application of Hyperspectal Techniques to Monitoring & Management of Invasive Plant Species Infestation

    DTIC Science & Technology

    2008-01-09

    The image data as acquired from the sensor is a data cloud in multi-dimensional space with each band generating an axis of dimension. When the data... The color of a material is defined by the direction of its unit vector in n-dimensional spectral space. The length of the vector relates only to how...to n-dimensional space. SAM determines the similarity
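    SAM here is the Spectral Angle Mapper, which scores similarity by the angle between two spectra treated as vectors in n-dimensional spectral space. A minimal sketch of that measure:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM) similarity: the angle (in radians)
    between two spectra viewed as vectors in n-dimensional spectral
    space. Because only the vector direction matters, the measure is
    insensitive to overall illumination/brightness scaling."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))
```

    A pixel is assigned to the reference endmember whose spectral angle to it is smallest; scaling a spectrum by a constant leaves its angle unchanged, which is exactly the brightness-invariance the snippet above describes.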

  5. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  6. Resource seeking strategies of zoosporic true fungi in heterogeneous soil habitats at the microscale level

    PubMed Central

    Gleason, Frank H.; Crawford, John W.; Neuhauser, Sigrid; Henderson, Linda E.; Lilje, Osu

    2012-01-01

    Zoosporic true fungi have frequently been identified in samples from soil and freshwater ecosystems using baiting and molecular techniques. In fact some species can be components of the dominant groups of microorganisms in particular soil habitats. Yet these microorganisms have not yet been directly observed growing in soil ecosystems. Significant physical characteristics and features of the three-dimensional structures of soils which impact microorganisms at the microscale level are discussed. A thorough knowledge of soil structures is important for studying the distribution of assemblages of these fungi and understanding their ecological roles along spatial and temporal gradients. A number of specific adaptations and resource seeking strategies possibly give these fungi advantages over other groups of microorganisms in soil ecosystems. These include chemotactic zoospores, mechanisms for adhesion to substrates, rhizoids which can penetrate substrates in small spaces, structures which are resistant to environmental extremes, rapid growth rates and simple nutritional requirements. These adaptations are discussed in the context of the characteristics of soils ecosystems. Recent advances in instrumentation have led to the development of new and more precise methods for studying microorganisms in three-dimensional space. New molecular techniques have made identification of microbes possible in environmental samples. PMID:22308003

  7. Four-Dimensional Ultrafast Electron Microscopy: Insights into an Emerging Technique.

    PubMed

    Adhikari, Aniruddha; Eliason, Jeffrey K; Sun, Jingya; Bose, Riya; Flannigan, David J; Mohammed, Omar F

    2017-01-11

    Four-dimensional ultrafast electron microscopy (4D-UEM) is a novel analytical technique that aims to fulfill the long-held dream of researchers to investigate materials at extremely short spatial and temporal resolutions by integrating the excellent spatial resolution of electron microscopes with the temporal resolution of ultrafast femtosecond laser-based spectroscopy. The ingenious use of pulsed photoelectrons to probe surfaces and volumes of materials enables time-resolved snapshots of the dynamics to be captured in a way hitherto impossible by other conventional techniques. The flexibility of 4D-UEM lies in the fact that it can be used in both the scanning (S-UEM) and transmission (UEM) modes depending upon the type of electron microscope involved. While UEM can be employed to monitor elementary structural changes and phase transitions in samples using real-space mapping, diffraction, electron energy-loss spectroscopy, and tomography, S-UEM is well suited to map ultrafast dynamical events on materials surfaces in space and time. This review provides an overview of the unique features that distinguish these techniques and also illustrates the applications of both S-UEM and UEM to a multitude of problems relevant to materials science and chemistry.

  8. Answers in search of a question: 'proofs' of the tri-dimensionality of space

    NASA Astrophysics Data System (ADS)

    Callender, Craig

    From Kant's first published work to recent articles in the physics literature, philosophers and physicists have long sought an answer to the question: Why does space have three dimensions? In this paper, I will flesh out Kant's claim with a brief detour through Gauss' law. I then describe Büchel's version of the common argument that stable orbits are possible only if space is three dimensional. After examining objections by Russell and van Fraassen, I develop three original criticisms of my own. These criticisms are relevant to both historical and contemporary proofs of the dimensionality of space (in particular, a recent one by Burgbacher, Lämmerzahl, and Macias). In general, I argue that modern "proofs" of the dimensionality of space have gone off track.
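    The stable-orbit argument referred to above can be reconstructed in a few lines (a standard textbook sketch under Newtonian assumptions, not quoted from the paper): in n spatial dimensions Gauss' law gives a central force F(r) proportional to r^{-(n-1)}, and demanding stable circular orbits then constrains n.

```latex
% Gauss' law in n spatial dimensions: F(r) = -k / r^{n-1}, so for n \neq 2
% the effective radial potential at angular momentum L is
V_{\mathrm{eff}}(r) \;=\; \frac{L^{2}}{2\,m\,r^{2}} \;-\; \frac{k}{(n-2)\,r^{\,n-2}} .
% A circular orbit at r_{0} is stable only if V_{\mathrm{eff}}''(r_{0}) > 0,
% and for a power-law force F \propto r^{-(n-1)} this condition reduces to
% n - 1 < 3, i.e. stable bound orbits exist only for n < 4.
```

    The paper's criticisms target precisely arguments of this form, which infer the dimensionality of space from the stability requirement.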

  9. Classification of motor imagery tasks for BCI with multiresolution analysis and multiobjective feature selection.

    PubMed

    Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés

    2016-07-15

    Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains and has been used broadly in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, approaches to feature selection or feature dimensionality reduction should be considered to improve the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different classifier structures. They are evaluated by comparison with baseline methods that use a sparse representation of features or no feature selection. Statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated on the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.

  10. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
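    The efficacy-per-unit-hardware-cost ranking described above can be sketched as follows; the feature names and all numbers are hypothetical placeholders, not values from the study:

```python
# Hypothetical candidate seizure-detection features: each carries a
# detection-efficacy score (0-1) and an estimated hardware power cost
# (microwatts). Ranking by efficacy per unit cost picks the corner of
# the two-dimensional (efficacy, cost) design space with the best
# trade-off, in the spirit of the co-design approach described above.
features = {
    "line_length":    {"efficacy": 0.82, "cost_uW": 1.5},
    "signal_energy":  {"efficacy": 0.78, "cost_uW": 1.1},
    "spectral_power": {"efficacy": 0.90, "cost_uW": 6.0},
}

ranked = sorted(features.items(),
                key=lambda kv: kv[1]["efficacy"] / kv[1]["cost_uW"],
                reverse=True)
for name, f in ranked:
    print(f"{name}: {f['efficacy'] / f['cost_uW']:.3f} per uW")
```

    With these placeholder numbers, the most accurate feature (highest efficacy) is not the one chosen: its power cost outweighs its accuracy advantage, which is the point of scoring in two dimensions rather than by detection efficacy alone.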

  11. Diagnostic value of sleep stage dissociation as visualized on a 2-dimensional sleep state space in human narcolepsy.

    PubMed

    Olsen, Anders Vinther; Stephansen, Jens; Leary, Eileen; Peppard, Paul E; Sheungshul, Hong; Jennum, Poul Jørgen; Sorensen, Helge; Mignot, Emmanuel

    2017-04-15

    Type 1 narcolepsy (NT1) is characterized by symptoms believed to represent Rapid Eye Movement (REM) sleep stage dissociations: occurrences where features of wake and REM sleep are intermingled, resulting in a mixed state. We hypothesized that sleep stage dissociations can be objectively detected through the analysis of nocturnal polysomnography (PSG) data, and that those affecting REM sleep can be used as a diagnostic feature for narcolepsy. A Linear Discriminant Analysis (LDA) model using 38 features extracted from EOG, EMG and EEG was used in control subjects to select features differentiating wake, stages N1, N2, N3 and REM sleep. Sleep stage differentiation was next represented in a 2D projection. Features characteristic of sleep stage differences were estimated from the residual sleep stage probability in the 2D space. Using this model we evaluated PSG data from NT1 and non-narcoleptic subjects. An LDA classifier was used to determine the best separation plane. This method replicates the specificity/sensitivity from the training set to the validation set better than many other methods. Eight prominent features could differentiate narcolepsy from controls in the validation dataset. Using a composite measure and a specificity cut-off of 95% in the training dataset, sensitivity was 43%. Specificity/sensitivity was 94%/38% in the validation set. In hypersomnia subjects, specificity/sensitivity was 84%/15%; in treated narcoleptics, it was 94%/10%. Sleep stage dissociation can be used for the diagnosis of narcolepsy. However, the use of some medications and the presence of undiagnosed hypersomnolence patients affect the results. Copyright © 2017 Elsevier B.V. All rights reserved.
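    An LDA projection into a low-dimensional "sleep state space" of the kind described can be sketched with plain numpy. This is an illustrative Fisher-LDA reimplementation, not the authors' 38-feature pipeline:

```python
import numpy as np

def lda_project(X, y, n_dims=2):
    """Fisher LDA: project samples X (n x d) with integer labels y onto
    the n_dims most class-discriminative directions, i.e. the leading
    eigenvectors of inv(Sw) @ Sb (within- vs between-class scatter)."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1]
    W = vecs[:, order[:n_dims]].real
    return X @ W
```

    In the paper, points from wake and the four sleep stages occupy distinct regions of such a 2D space, and dissociated (mixed) epochs fall between them.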

  12. Compactification on phase space

    NASA Astrophysics Data System (ADS)

    Lovelady, Benjamin; Wheeler, James

    2016-03-01

    A major challenge for string theory is to understand the dimensional reduction required for comparison with the standard model. We propose reducing the dimension of the compactification by interpreting some of the extra dimensions as the energy-momentum portion of a phase-space. Such models naturally arise as generalized quotients of the conformal group called biconformal spaces. By combining the standard Kaluza-Klein approach with such a conformal gauge theory, we may start from the conformal group of an n-dimensional Euclidean space to form a 2n-dimensional quotient manifold with symplectic structure. A pair of involutions leads naturally to two n-dimensional Lorentzian manifolds. For n = 5, this leaves only two extra dimensions, with a countable family of possible compactifications and an SO(5) Yang-Mills field on the fibers. Starting with n=6 leads to 4-dimensional compactification of the phase space. In the latter case, if the two dimensions each from spacetime and momentum space are compactified onto spheres, then there is an SU(2)xSU(2) (left-right symmetric electroweak) field between phase and configuration space and an SO(6) field on the fibers. Such a theory, with minor additional symmetry breaking, could contain all parts of the standard model.

  13. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background Support vector machines (SVM) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM significantly outperformed linear SVM; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. Conclusions/Significance The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. 
Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184

  14. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    PubMed

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    Support vector machines (SVM) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM significantly outperformed linear SVM; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.
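    The two kernels being compared are standard and can be written in a few lines of numpy (a sketch of the kernel functions only, not of the full SVM training or voxel-selection pipeline):

```python
import numpy as np

def linear_kernel(X, Y):
    """Linear SVM kernel: K(x, y) = <x, y>."""
    return X @ Y.T

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2),
    computed for all pairs without explicit Python loops."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * (X @ Y.T))
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp fp negatives
```

    The RBF kernel implicitly maps voxels into an infinite-dimensional feature space, which is why its relative advantage depends on how many voxels (input dimensions) survive selection, as the study reports.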

  15. Consensus embedding: theory, algorithms and application to segmentation and classification of biomedical data

    PubMed Central

    2012-01-01

    Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. 
Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
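    The generate-and-combine idea behind consensus embedding can be sketched in a deliberately simplified form. Here PCA on random feature subsets stands in for the paper's DR schemes, and averaging pairwise-distance matrices followed by classical MDS stands in for its combination rule; none of this is the authors' code:

```python
import numpy as np

def pca_embed(X, n_dims=2):
    """One 'weak' embedding: plain PCA to n_dims."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_dims].T

def consensus_embed(X, n_dims=2, n_views=10, frac=0.6, seed=0):
    """Toy consensus embedding: build several embeddings from random
    feature subsets, average their pairwise-distance matrices, then
    re-embed the consensus distances with classical MDS."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    D = np.zeros((n, n))
    for _ in range(n_views):
        cols = rng.choice(d, size=max(2, int(frac * d)), replace=False)
        E = pca_embed(X[:, cols], n_dims)
        D += np.linalg.norm(E[:, None, :] - E[None, :, :], axis=2)
    D /= n_views
    # classical MDS: double-center the squared distances, then use the
    # top eigenvectors of the resulting Gram matrix as coordinates
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_dims]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

    As in ensemble classification, the averaging step exploits the variance among the individual embeddings so that the consensus is more stable than any single one.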

  16. Dimensional oscillation. A fast variation of energy embedding gives good results with the AMBER potential energy function.

    PubMed

    Snow, M E; Crippen, G M

    1991-08-01

    The structure of the AMBER potential energy surface of the cyclic tetrapeptide cyclotetrasarcosyl is analyzed as a function of the dimensionality of coordinate space. It is found that the number of local energy minima decreases as the dimensionality of the space increases, up to a limit at which equipotential subspaces appear. The applicability of energy embedding methods to finding global energy minima in this type of energy-conformation space is explored. Dimensional oscillation, a computationally fast variant of energy embedding, is introduced; it is found to sample conformation space widely and to do a good job of finding global and near-global energy minima.

  17. Overhead View of Area Surrounding Pathfinder

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Overhead view of the area surrounding the Pathfinder lander illustrating the Sojourner traverse. Red rectangles are rover positions at the end of sols 1-30. Locations of soil mechanics experiments, wheel abrasion experiments, and APXS measurements are shown. The A numbers refer to APXS measurements as discussed in the paper by Rieder et al. (p. 1770, Science Magazine, see image note). Coordinates are given in the LL frame.

    The photorealistic, interactive, three-dimensional virtual reality (VR) terrain models were created from IMP images using a software package developed for Pathfinder by C. Stoker et al. as a participating science project. By matching features in the left and right camera, an automated machine vision algorithm produced dense range maps of the nearfield, which were projected into a three-dimensional model as a connected polygonal mesh. Distance and angle measurements can be made on features viewed in the model using a mouse-driven three-dimensional cursor and a point-and-click interface. The VR model also incorporates graphical representations of the lander and rover and the sequence and spatial locations at which rover data were taken. As the rover moved, graphical models of the rover were added for each position that could be uniquely determined using stereo images of the rover taken by the IMP. Images taken by the rover were projected into the model as two-dimensional 'billboards' to show the proper perspective of these images.

    NOTE: original caption as published in Science Magazine

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).

  18. Data Mining and Machine Learning Models for Predicting Drug Likeness and Their Disease or Organ Category.

    PubMed

    Yosipof, Abraham; Guedes, Rita C; García-Sosa, Alfonso T

    2018-01-01

    Data mining approaches can uncover underlying patterns in chemical and pharmacological property space decisive for drug discovery and development. Two of the most common approaches are visualization and machine learning methods. Visualization methods use dimensionality reduction techniques in order to reduce multi-dimensional data into 2D or 3D representations with a minimal loss of information. Machine learning attempts to find correlations between specific activities or classifications for a set of compounds and their features by means of recurring mathematical models. Both approaches take advantage of the deep relationships that can exist between features of compounds: machine learning methods classify compounds based on such features, while visualization methods uncover underlying patterns in the feature space. Drug-likeness has been studied from several viewpoints, but here we provide the first implementation in chemoinformatics of the t-Distributed Stochastic Neighbor Embedding (t-SNE) method for the visualization and representation of chemical space, and the use of different machine learning methods, separately and together, to form a new ensemble learning method called AL Boost. The models obtained from AL Boost synergistically combine decision tree, random forests (RF), support vector machine (SVM), artificial neural network (ANN), k nearest neighbors (kNN), and logistic regression models. In this work, we show that together they form a predictive model that not only improves predictive power but also decreases bias. This resulted in a corrected classification rate of over 0.81, as well as higher sensitivity and specificity rates for the models. In addition, good separation and predictive models were also achieved for disease categories such as antineoplastic compounds and nervous system diseases, among others. Such models can be used to guide decisions on the feature landscape of compounds and their likeness to either drugs or other characteristics, such as specific or multiple disease-category(ies) or organ(s) of action of a molecule.
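
    The ensemble step, combining the predictions of several trained base classifiers, is the part that reduces variance and bias. A minimal, hypothetical sketch of hard-voting aggregation follows (the actual AL Boost combination rule is not reproduced here):

```python
import numpy as np

def majority_vote(predictions):
    """Combine class predictions from several models (one row per model)
    into a single ensemble prediction by majority vote."""
    P = np.asarray(predictions)
    voted = np.empty(P.shape[1], dtype=P.dtype)
    for j in range(P.shape[1]):
        classes, counts = np.unique(P[:, j], return_counts=True)
        voted[j] = classes[np.argmax(counts)]   # most frequent class wins
    return voted
```

    If each model errs on different samples, the vote can be correct even where individual members are wrong, which is why combining diverse learners tends to raise the classification rate.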

  19. Quantum Transport Properties in Two-Dimensional and Low Dimensional Systems

    NASA Astrophysics Data System (ADS)

    Fang, Hao

    1991-02-01

    The quantum transport properties in quasi-two-dimensional and zero-dimensional systems have been studied at magnetic fields of 0-8 T and low temperatures down to 1.3 K. In the (100) Si inversion layer, we investigated the effect of valley splitting on the value of the enhanced effective g factor by the tilted magnetic field measurement. The valley splitting is determined from the beat effect on samples with measurable valley splitting behavior due to misorientation effects. Experimental results illustrate that the effective g factor is enhanced by many-body interactions and that the valley splitting has no obvious effect on the g-value. A simulation calculation with a Gaussian distribution of density of states has been carried out and the simulated results are in excellent agreement with the experimental data. A new and very simple technique has been developed for fabricating two-dimensional periodic submicron structures with feature sizes down to about 300 Å. The etching mask is made by coating the material surface with a monolayer of close-packed uniform latex particles. We have demonstrated the formation of a quasi-zero-dimensional quantum dot array and performed capacitance measurements on GaAs/AlGaAs heterostructure samples with periodicities ranging from 3000 to 4000 Å. A series of nearly equally spaced peaks in a curve of the derivative of capacitance with respect to gate voltage, which corresponds to the energy levels formed by the lateral electric confining potential, is observed. The energy spacings and effective dot widths estimated from a simple parabolic potential model are consistent with the experimental data. Novel magnetoresistance oscillations in a two-dimensional electron gas modulated by a two-dimensional triangular superlattice potential are observed in GaAs/AlGaAs heterostructures. The new oscillations appear at very low magnetic fields and the peak positions are directly determined by the magnetic field and the periodicity of the modulation structure. The new oscillations result from the modulation-broadened Landau bandwidth and the induced variation of the density of states with magnetic field. Physical explanations and theoretical approaches for the commensurability problem in a two-dimensional triangular superlattice potential are presented. The differences in oscillation frequencies and phase factors for two kinds of samples correlate with structures differing in degree of depletion and the resulting geometry.

  20. Visual word ambiguity.

    PubMed

    van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark

    2010-07-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
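
    The difference between hard and soft assignment can be made concrete with a small sketch (an assumed Gaussian-kernel weighting, one of several possible soft-assignment variants): each continuous feature either votes only for its single nearest visual word, or spreads its vote over all words according to distance.

```python
import numpy as np

def soft_assign_histogram(features, codebook, sigma=1.0):
    """Kernel-codebook histogram: each feature votes for every visual
    word with a Gaussian weight instead of only its nearest word."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)      # each feature distributes weight 1
    return w.sum(axis=0) / len(features)   # normalized image histogram

def hard_assign_histogram(features, codebook):
    """Traditional codebook: each feature counted only at its nearest word."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return np.bincount(idx, minlength=len(codebook)) / len(features)
```

    A feature lying between two words contributes half a vote to each under soft assignment, whereas hard assignment breaks the tie arbitrarily; this is the "visual word ambiguity" the paper models explicitly.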

  1. Analyzing linear spatial features in ecology.

    PubMed

    Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W

    2018-06-01

    The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of mathematical methods of physics to problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space. © 2018 by the Ecological Society of America.
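
    The dot-product test for downhill treefall can be sketched directly (a simplified illustration with hypothetical azimuth data, not the authors' plot measurements):

```python
import numpy as np

def downhill_alignment(fall_azimuths_deg, downslope_azimuth_deg):
    """Dot products between unit treefall vectors and the downslope unit
    vector: +1 means a log points straight downhill, -1 straight uphill."""
    theta = np.radians(fall_azimuths_deg)
    falls = np.column_stack([np.cos(theta), np.sin(theta)])  # unit fall vectors
    phi = np.radians(downslope_azimuth_deg)
    down = np.array([np.cos(phi), np.sin(phi)])              # unit downslope vector
    return falls @ down
```

    A plot-level score is the mean alignment: values near +1 indicate a strong downhill tendency, values near 0 indicate no directional preference.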

  2. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    NASA Astrophysics Data System (ADS)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  3. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
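
    The point-count savings of sparse grids over full tensor-product grids can be checked with a short enumeration (a standard textbook construction with interior points only; the paper's dimension-adaptive variant is more involved):

```python
import math
from itertools import product

def sparse_grid_size(level, dim):
    """Points in a regular sparse grid: hierarchical subspace W_l holds
    prod(2^(l_i - 1)) points and is kept only when |l|_1 <= level + dim - 1."""
    total = 0
    for l in product(range(1, level + 1), repeat=dim):
        if sum(l) <= level + dim - 1:
            total += math.prod(2 ** (li - 1) for li in l)
    return total

def full_grid_size(level, dim):
    """Full tensor-product grid with (2^level - 1) interior points per axis."""
    return (2 ** level - 1) ** dim
```

    In 2D at level 3 the sparse grid needs 17 points versus 49 for the full grid, and the gap widens rapidly with dimension, which is what makes the linear-in-n discretization feasible.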

  4. On the Ck-embedding of Lorentzian manifolds in Ricci-flat spaces

    NASA Astrophysics Data System (ADS)

    Avalos, R.; Dahia, F.; Romero, C.

    2018-05-01

    In this paper, we investigate the problem of non-analytic embeddings of Lorentzian manifolds in Ricci-flat semi-Riemannian spaces. In order to do this, we first review some relevant results in the area and then motivate both the mathematical and physical interests in this problem. We show that any n-dimensional compact Lorentzian manifold (M^n, g), with g in the Sobolev space H^{s+3}, s > n/2, admits an isometric embedding in a (2n + 2)-dimensional Ricci-flat semi-Riemannian manifold. The sharpest result available for these types of embeddings, in the general setting, comes as a corollary of Greene's remarkable embedding theorems [R. Greene, Mem. Am. Math. Soc. 97, 1 (1970)], which guarantee the embedding of a compact n-dimensional semi-Riemannian manifold into an n(n + 5)-dimensional semi-Euclidean space, thereby guaranteeing the embedding into a Ricci-flat space with the same dimension. The theorem presented here improves this corollary in n^2 + 3n - 2 codimensions by replacing the Riemann-flat condition with the Ricci-flat one from the beginning. Finally, we will present a corollary of this theorem, which shows that a compact strip in an n-dimensional globally hyperbolic space-time can be embedded in a (2n + 2)-dimensional Ricci-flat semi-Riemannian manifold.

  5. Well-balanced compressible cut-cell simulation of atmospheric flow.

    PubMed

    Klein, R; Bates, K R; Nikiforakis, N

    2009-11-28

    Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.

  6. Monte-Carlo simulations of the clean and disordered contact process in three space dimensions

    NASA Astrophysics Data System (ADS)

    Vojta, Thomas

    2013-03-01

    The absorbing-state transition in the three-dimensional contact process with and without quenched randomness is investigated by means of Monte-Carlo simulations. In the clean case, a reweighting technique is combined with a careful extrapolation of the data to infinite time to determine with high accuracy the critical behavior in the three-dimensional directed percolation universality class. In the presence of quenched spatial disorder, our data demonstrate that the absorbing-state transition is governed by an unconventional infinite-randomness critical point featuring activated dynamical scaling. The critical behavior of this transition does not depend on the disorder strength, i.e., it is universal. Close to the disordered critical point, the dynamics is characterized by the nonuniversal power laws typical of a Griffiths phase. We compare our findings to the results of other numerical methods, and we relate them to a general classification of phase transitions in disordered systems based on the rare region dimensionality. This work has been supported in part by the NSF under grants no. DMR-0906566 and DMR-1205803.

  7. Evolution of lower hybrid turbulence in the ionosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganguli, G.; Crabtree, C.; Mithaiwala, M.

    2015-11-15

    Three-dimensional evolution of the lower hybrid turbulence driven by a spatially localized ion ring beam perpendicular to the ambient magnetic field in space plasmas is analyzed. It is shown that the quasi-linear saturation model breaks down when the nonlinear rate of scattering by thermal electrons is larger than linear damping rates, which can occur even for low wave amplitudes. The evolution is found to be essentially a three-dimensional phenomenon, which cannot be accurately explained by two-dimensional simulations. An important feature missed in previous studies of this phenomenon is the nonlinear conversion of electrostatic lower hybrid waves into electromagnetic whistler and magnetosonic waves and the consequent energy loss due to radiation from the source region. This can result in unique low-amplitude saturation with extended saturation time. It is shown that when the nonlinear effects are considered the net energy that can be permanently extracted from the ring beam is larger. The results are applied to anticipate the outcome of a planned experiment that will seed lower hybrid turbulence in the ionosphere and monitor its evolution.

  8. Studies on Manfred Eigen's model for the self-organization of information processing.

    PubMed

    Ebeling, W; Feistel, R

    2018-05-01

    In 1971, Manfred Eigen extended the principles of Darwinian evolution to chemical processes, from catalytic networks to the emergence of information processing at the molecular level, leading to the emergence of life. In this paper, we investigate some very general characteristics of this scenario, such as the valuation process of phenotypic traits in a high-dimensional fitness landscape, the effect of spatial compartmentation on the valuation, and the self-organized transition from structural to symbolic genetic information of replicating chain molecules. In the first part, we perform an analysis of typical dynamical properties of continuous dynamical models of evolutionary processes. In particular, we study the mapping of genotype to continuous phenotype spaces following the ideas of Wright and Conrad. We investigate typical features of a Schrödinger-like dynamics, the consequences of the high dimensionality, the leading role of saddle points, and Conrad's extra-dimensional bypass. In the last part, we discuss in brief the valuation of compartment models and the self-organized emergence of molecular symbols at the beginning of life.

  9. Stargate GTM: Bridging Descriptor and Activity Spaces.

    PubMed

    Gaspar, Héléna A; Baskin, Igor I; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre

    2015-11-23

    Predicting the activity profile of a molecule or discovering structures possessing a specific activity profile are two important goals in chemoinformatics, which could be achieved by bridging activity and molecular descriptor spaces. In this paper, we introduce the "Stargate" version of the Generative Topographic Mapping approach (S-GTM) in which two different multidimensional spaces (e.g., structural descriptor space and activity space) are linked through a common 2D latent space. In the S-GTM algorithm, the manifolds are trained simultaneously in two initial spaces using the probabilities in the 2D latent space calculated as a weighted geometric mean of probability distributions in both spaces. S-GTM has the following interesting features: (1) activities are involved during the training procedure; therefore, the method is supervised, unlike conventional GTM; (2) using molecular descriptors of a given compound as input, the model predicts a whole activity profile, and (3) using an activity profile as input, areas populated by relevant chemical structures can be detected. To assess the performance of S-GTM prediction models, a descriptor space (ISIDA descriptors) of a set of 1325 GPCR ligands was related to a B-dimensional (B = 1 or 8) activity space corresponding to pKi values for eight different targets. S-GTM outperforms conventional GTM for individual activities and performs similarly to the Lasso multitask learning algorithm, although it is still slightly less accurate than the Random Forest method.

  10. Maximum projection designs for computer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan; Gul, Evren; Ba, Shan

    Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than a design criterion that ignores projection properties.

  11. Maximum projection designs for computer experiments

    DOE PAGES

    Joseph, V. Roshan; Gul, Evren; Ba, Shan

    2015-03-18

    Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than a design criterion that ignores projection properties.
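
    The flavor of a projection-aware criterion can be sketched as follows (a simplified form of the maximum projection idea; the exact criterion and optimizer used in the paper are not reproduced here): penalize every pair of design points by the inverse product of their squared coordinate-wise gaps, so that two points sharing a coordinate in any single dimension are penalized heavily.

```python
import numpy as np
from itertools import combinations

def projection_criterion(D):
    """Average over point pairs of 1 / prod_k (x_ik - x_jk)^2; smaller is
    better. A pair that coincides in any one-dimensional projection makes
    the criterion blow up, so minimizers spread points in every projection."""
    n, p = D.shape
    total = sum(1.0 / np.prod((D[i] - D[j]) ** 2)
                for i, j in combinations(range(n), 2))
    return (total / (n * (n - 1) / 2)) ** (1.0 / p)
```

    Minimizing such a criterion over candidate designs favors configurations that remain space-filling in every lower-dimensional projection, not just in the full space.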

  12. The density-matrix renormalization group: a short introduction.

    PubMed

    Schollwöck, Ulrich

    2011-07-13

    The density-matrix renormalization group (DMRG) method has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. The DMRG is a method that shares features of a renormalization group procedure (which here generates a flow in the space of reduced density operators) and of a variational method that operates on a highly interesting class of quantum states, so-called matrix product states (MPSs). The DMRG method is presented here entirely in the MPS language. While the DMRG generally fails in larger two-dimensional systems, the MPS picture suggests a straightforward generalization to higher dimensions in the framework of tensor network states. The resulting algorithms, however, suffer from difficulties absent in one dimension, apart from a much more unfavourable efficiency, such that their ultimate success remains far from clear at the moment.
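
    The MPS language can be illustrated concretely: any state vector can be factorized exactly into a chain of site tensors by successive SVDs (a standard construction, sketched here without truncation, so the bond dimension grows to its exact value):

```python
import numpy as np

def to_mps(psi, L, d=2):
    """Factorize a length-d**L state vector into L site tensors
    A[site] of shape (chi_left, d, chi_right) via successive SVDs."""
    tensors, chi = [], 1
    rest = psi.reshape(chi * d, -1)
    for _ in range(L - 1):
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        tensors.append(U.reshape(chi, d, -1))      # isometry for this site
        chi = len(S)                               # new bond dimension
        rest = (np.diag(S) @ Vt).reshape(chi * d, -1)
    tensors.append(rest.reshape(chi, d, 1))
    return tensors

def from_mps(tensors):
    """Contract the chain back into a full state vector."""
    out = tensors[0]
    for T in tensors[1:]:
        out = np.einsum('...a,abc->...bc', out, T)
    return out.reshape(-1)
```

    DMRG's efficiency comes from truncating the singular values S at each bond; the untruncated version above is exact but exponentially costly, which is the trade-off the variational MPS picture makes explicit.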

  13. Long-range two-dimensional superstructure in the superconducting electron-doped cuprate Pr0.88LaCe0.12CuO4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, B. J.; Rosenkranz, S.; Kang, H. J.

    2015-07-01

    Utilizing single-crystal synchrotron x-ray scattering, we observe distorted CuO2 planes in the electron-doped superconductor Pr1-xLaCexCuO4+δ, x = 0.12. Resolution-limited rods of scattering are indicative of a long-range two-dimensional 2√2 × 2√2 superstructure in the a-b plane, adhering to planar space-group symmetry p4gm, which is subject to stacking disorder perpendicular to the planes. This superstructure is present only in annealed, superconducting samples, but not in the as-grown, nonsuperconducting samples. These long-range distortions of the CuO2 planes, which are generally considered to be detrimental to superconductivity, have avoided detection to date due to the challenges of observing and interpreting subtle diffuse-scattering features.

  14. Features specific to retinal pigment epithelium cells derived from three-dimensional human embryonic stem cell cultures - a new donor for cell therapy.

    PubMed

    Wu, Wei; Zeng, Yuxiao; Li, Zhengya; Li, Qiyou; Xu, Haiwei; Yin, Zheng Qin

    2016-04-19

    Retinal pigment epithelium (RPE) transplantation is a particularly promising treatment of retinal degenerative diseases affecting the RPE-photoreceptor complex. Embryonic stem cells (ESCs) provide an abundant donor source for RPE transplantation. Herein, we studied the time-course characteristics of RPE cells derived from three-dimensional human ESC cultures (3D-RPE). We showed that 3D-RPE cells possessed the morphology, ultrastructure, gene expression profile, and functions of authentic RPE. As differentiation proceeded, 3D-RPE cells matured gradually, with decreasing proliferation but increasing function. Besides, 3D-RPE cells could form a polarized monolayer with functional tight junctions and gap junctions. When grafted into the subretinal space of Royal College of Surgeons rats, 3D-RPE cells were safe and efficient in rescuing retinal degeneration. This study showed that 3D-RPE cells are a new donor source for cell therapy of retinal degenerative diseases.

  15. Three-dimensional vesicles under shear flow: numerical study of dynamics and phase diagram.

    PubMed

    Biben, Thierry; Farutin, Alexander; Misbah, Chaouqi

    2011-03-01

    The study of vesicles under flow, a model system for red blood cells (RBCs), is an essential step in understanding various intricate dynamics exhibited by RBCs in vivo and in vitro. Quantitative three-dimensional analyses of vesicles under flow are presented. The regions of parameters to produce tumbling (TB), tank-treating, vacillating-breathing (VB), and even kayaking (or spinning) modes are determined. New qualitative features are found: (i) a significant widening of the VB mode region in parameter space upon increasing shear rate γ and (ii) a robustness of normalized period of TB and VB with γ. Analytical support is also provided. We make a comparison with existing experimental results. In particular, we find that the phase diagram of the various dynamics depends on three dimensionless control parameters, while a recent experimental work reported that only two are sufficient.

  16. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1992-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding the numerical diffusion that results from the mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to the geometry and variable interpolations used by previous Lagrangian methods. Unlike the Lagrangian method previously proposed, which is valid only for supersonic flows, the present method is general and capable of treating subsonic flows as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.

  17. Low-Dimensional Feature Representation for Instrument Identification

    NASA Astrophysics Data System (ADS)

    Ihara, Mizuki; Maeda, Shin-Ichi; Ikeda, Kazushi; Ishii, Shin

    For monophonic music instrument identification, various feature extraction and selection methods have been proposed. One of the issues in instrument identification is that the same spectrum is not always observed even for the same instrument, due to differences in recording conditions. Therefore, it is important to find non-redundant, instrument-specific features that maintain the information essential for high-quality instrument identification, so as to apply them to various instrumental music analyses. As such a dimensionality reduction method, the authors propose the utilization of linear projection methods: local Fisher discriminant analysis (LFDA) and LFDA combined with principal component analysis (PCA). After experimentally clarifying that raw power spectra are actually good features for instrument classification, the authors reduced the feature dimensionality by LFDA or by PCA followed by LFDA (PCA-LFDA). The reduced features achieved reasonably high identification performance, comparable to or higher than that achieved by the raw power spectra and by other existing studies. These results demonstrated that LFDA and PCA-LFDA can successfully extract low-dimensional instrument features that maintain the characteristic information of the instruments.
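
    A two-class Fisher discriminant, the construction that LFDA localizes, fits in a few lines (a generic sketch, not the authors' LFDA implementation):

```python
import numpy as np

def fisher_direction(X0, X1):
    """Direction maximizing between-class separation relative to
    within-class scatter for two classes (rows = samples)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-9 * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```

    Projecting onto such directions discards axes dominated by recording-condition variability while keeping those that separate instruments, which is the motivation for LFDA-based reduction in the paper.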

  18. Three-dimensional desirability spaces for quality-by-design-based HPLC development.

    PubMed

    Mokhtar, Hatem I; Abdel-Salam, Randa A; Hadad, Ghada M

    2015-04-01

    In this study, three-dimensional desirability spaces were introduced as a graphical representation method for design space. This was illustrated in the context of applying quality-by-design concepts to the development of a stability-indicating gradient reversed-phase high-performance liquid chromatography method for the determination of vinpocetine and α-tocopheryl acetate in a capsule dosage form. A mechanistic retention model to optimize gradient time, initial organic solvent concentration, and ternary solvent ratio was constructed for each compound from six experimental runs. Then, the desirability function of each optimized criterion, and subsequently the global desirability function, were calculated throughout the knowledge space. The three-dimensional desirability spaces were plotted as zones exceeding a threshold value of the desirability index in the space defined by the three optimized method parameters. Probabilistic mapping of the desirability index aided the selection of a design space within the potential desirability subspaces. Compared with the corresponding resolution spaces, three-dimensional desirability spaces offered better visualization of potential design spaces for the method as a function of three method parameters, with the ability to assign priorities to critical qualities. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
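
    The desirability machinery itself is compact. A generic Derringer-style sketch follows (illustrative thresholds, not the chromatographic model from the study): each criterion receives an individual desirability in [0, 1], and the global desirability is their geometric mean, so one failed criterion zeroes out the whole operating point.

```python
import numpy as np

def desirability(y, low, high):
    """'Larger is better' desirability: 0 below `low`, 1 above `high`,
    linear ramp in between."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def global_desirability(*ds):
    """Geometric mean of individual desirabilities: a single failed
    criterion (d = 0) zeroes the global score."""
    stacked = np.vstack(ds)
    return np.prod(stacked, axis=0) ** (1.0 / len(ds))
```

    The design-space zone is then simply the region of parameter settings where the global desirability exceeds a chosen threshold.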

  19. Complexity of free energy landscapes of peptides revealed by nonlinear principal component analysis.

    PubMed

    Nguyen, Phuong H

    2006-12-01

    Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with that of standard linear principal component analysis (PCA). In particular, the two methods were compared with respect to (1) their ability to reduce dimensionality and (2) how efficiently they represent peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA. For example, to achieve a similar error in representing the original beta-hairpin data in a low-dimensional space, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as functions of the first two principal components obtained from PCA, relatively well-structured free energy landscapes were obtained. In contrast, the free energy landscapes from NLPCA are much more complicated, exhibiting many states that are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study showed that many states in the PCA maps mix several peptide conformations, whereas the states in the NLPCA maps are purer. This finding suggests that NLPCA should be used to capture the essential features of such systems.
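
The NLPCA-versus-PCA comparison above rests on reconstruction error as a function of the number of components. A minimal sketch of how such a PCA error curve is computed (on synthetic data, not the peptide trajectories):

```python
import numpy as np

def pca_recon_error(X, k):
    """Relative error of the best rank-k linear (PCA) reconstruction."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xk = (Xc @ Vt[:k].T) @ Vt[:k]          # project onto top-k PCs, map back
    return np.linalg.norm(Xc - Xk) / np.linalg.norm(Xc)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))             # stand-in for conformational data
errs = [pca_recon_error(X, k) for k in (1, 5, 20)]
```

NLPCA replaces the linear projection with an autoassociative nonlinear mapping, so for curved data manifolds the same error is reached with far fewer components.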

  20. Elasticity of fractal materials using the continuum model with non-integer dimensional space

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-01-01

    Using a generalization of vector calculus for space with non-integer dimension, we consider elastic properties of fractal materials. Fractal materials are described by continuum models with non-integer dimensional space. A generalization of elasticity equations for non-integer dimensional space, and its solutions for the equilibrium case of fractal materials are suggested. Elasticity problems for fractal hollow ball and cylindrical fractal elastic pipe with inside and outside pressures, for rotating cylindrical fractal pipe, for gradient elasticity and thermoelasticity of fractal materials are solved.

  1. Phases of five-dimensional theories, monopole walls, and melting crystals

    NASA Astrophysics Data System (ADS)

    Cherkis, Sergey A.

    2014-06-01

    Moduli spaces of doubly periodic monopoles, also called monopole walls or monowalls, are hyperkähler; thus, when four-dimensional, they are self-dual gravitational instantons. We find all monowalls with the lowest number of moduli. Their moduli spaces can be identified, on the one hand, with Coulomb branches of five-dimensional supersymmetric quantum field theories on R^3 × T^2 and, on the other hand, with moduli spaces of local Calabi-Yau metrics on the canonical bundle of a del Pezzo surface. We explore the asymptotic metric of these moduli spaces and compare our results with Seiberg's low-energy description of the five-dimensional quantum theories. We also give a natural description of the phase structure of general monowall moduli spaces in terms of triangulations of Newton polygons, secondary polyhedra, and associahedral projections of secondary fans.

  2. Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.

    PubMed

    Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling

    2018-04-01

    In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer from severe performance degradation. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework with two main contributions: 1) we convert the unsupervised DH model into a supervised one by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximally preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over state-of-the-art unsupervised hashing algorithms.
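
As a non-deep baseline for the idea of hashing into a Hamming space, the sketch below uses signs of random projections (an LSH-style scheme); this is not the paper's learned deep hashing, and all names and sizes are illustrative.

```python
import numpy as np

def hash_codes(X, n_bits, seed=0):
    """Binary codes from signs of random projections (LSH-style baseline)."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(X.shape[1], n_bits))   # random hyperplanes
    return (X @ P > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(2)
x = rng.normal(size=64)
near = x + 0.01 * rng.normal(size=64)    # slightly perturbed copy of x
far = rng.normal(size=64)                # unrelated vector
codes = hash_codes(np.stack([x, near, far]), n_bits=32)
```

Nearby inputs fall on the same side of most random hyperplanes, so their codes differ in few bits, which is the property learned hash functions optimize directly.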

  3. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    PubMed Central

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and can therefore overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method on the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher than those of the other four compared methods in most cases. PMID:28316614

  4. Feature generation using genetic programming with application to fault classification.

    PubMed

    Guo, Hong; Jack, Lindsay B; Nandi, Asoke K

    2005-02-01

    One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features, or directly from raw data, in order to reduce the cost of computation during the classification process while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to discover the different bearing conditions automatically, using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP-extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the computation time compared with a genetic algorithm (GA), making a more practical realization of the solution possible.
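
A toy version of the GP idea, heavily simplified: draw random nonlinear combinations of raw inputs (depth-one expression trees) and keep the one with the best discrimination score. Real GP evolves deeper trees with crossover and mutation; everything below is a hypothetical sketch.

```python
import numpy as np

# candidate operators for combining two raw features
OPS = [np.add, np.multiply, lambda a, b: np.abs(a - b)]

def random_feature(X, rng):
    """Depth-one random expression: op(feature_i, feature_j)."""
    i, j = rng.integers(0, X.shape[1], size=2)
    return OPS[rng.integers(0, len(OPS))](X[:, i], X[:, j])

def fisher_score(f, y):
    """Between-class over within-class variance of a single feature."""
    f0, f1 = f[y == 0], f[y == 1]
    return (f0.mean() - f1.mean()) ** 2 / (f0.var() + f1.var() + 1e-12)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # class depends on an interaction
best = max((random_feature(X, rng) for _ in range(200)),
           key=lambda f: fisher_score(f, y))
```

No single raw feature separates these classes, but a generated product-type feature can, which is the advantage of searching an open-ended expression space rather than a predefined one.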

  5. Comment on "Modeling Extreme "Carrington-Type" Space Weather Events Using Three-Dimensional Global MHD Simulations" by C. M. Ngwira, A. Pulkkinen, M. M. Kuznetsova, and A. Glocer

    NASA Astrophysics Data System (ADS)

    Tsurutani, Bruce T.; Lakhina, Gurbax S.; Echer, Ezequiel; Hajra, Rajkumar; Nayak, Chinmaya; Mannucci, Anthony J.; Meng, Xing

    2018-02-01

    An alternative scenario to the Ngwira et al. (2014, https://doi.org/10.1002/2013JA019661) high sheath densities is proposed for modeling the Carrington magnetic storm. Typical slow solar wind densities (~5 cm-3) and lower interplanetary magnetic cloud magnetic field intensities (~90 nT) can be used to explain the observed initial and main phase storm features. A second point is that the fast storm recovery may be explained by ring current losses due to electromagnetic ion cyclotron wave scattering.

  6. Cosmic strings: Gravitation without local curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helliwell, T.M.; Konkowski, D.A.

    1987-05-01

    Cosmic strings are very long, thin structures which might stretch over vast reaches of the universe. If they exist, they would have been formed during phase transitions in the very early universe. The space-time surrounding a straight cosmic string is flat but nontrivial: A two-dimensional spatial section is a cone rather than a plane. This feature leads to unique gravitational effects. The flatness of the cone means that many of the gravitational effects can be understood with no mathematics beyond trigonometry. This includes the observational predictions of the double imaging of quasars and the truncation of the images of galaxies.

  7. Modeling extracellular fields for a three-dimensional network of cells using NEURON.

    PubMed

    Appukuttan, Shailesh; Brain, Keith L; Manchanda, Rohit

    2017-10-01

    Computational modeling of biological cells usually ignores their extracellular fields, assuming them to be inconsequential. Though such an assumption might be justified in certain cases, it is debatable for networks of tightly packed cells, such as in the central nervous system and the syncytial tissues of cardiac and smooth muscle. In the present work, we demonstrate a technique to couple the extracellular fields of individual cells within the NEURON simulation environment. The existing features of the simulator are extended by explicitly defining current balance equations, resulting in the coupling of the extracellular fields of adjacent cells. With this technique, we achieved continuity of extracellular space for a network model, thereby allowing the exploration of extracellular interactions computationally. Using a three-dimensional network model, passive and active electrical properties were evaluated under varying levels of extracellular volumes. Simultaneous intracellular and extracellular recordings for synaptic and action potentials were analyzed, and the potential of ephaptic transmission towards functional coupling of cells was explored. We have implemented a true bi-domain representation of a network of cells, with the extracellular domain being continuous throughout the entire model. This has hitherto not been achieved using NEURON, or other compartmental modeling platforms. We have demonstrated the coupling of the extracellular field of every cell in a three-dimensional model to obtain a continuous uniform extracellular space. This technique provides a framework for the investigation of interactions in tightly packed networks of cells via their extracellular fields.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curchod, Basile F. E.; Agostini, Federica, E-mail: agostini@mpi-halle.mpg.de; Gross, E. K. U.

    Nonadiabatic quantum interferences emerge whenever nuclear wavefunctions in different electronic states meet and interact in a nonadiabatic region. In this work, we analyze how nonadiabatic quantum interferences translate in the context of the exact factorization of the molecular wavefunction. In particular, we focus our attention on the shape of the time-dependent potential energy surface: the exact surface on which the nuclear dynamics takes place. We use a one-dimensional exactly solvable model to reproduce different conditions for quantum interferences, whose characteristic features already appear in one dimension. The time-dependent potential energy surface develops complex features when strong interferences are present, in clear contrast to the observed behavior in simple nonadiabatic crossing cases. Nevertheless, independent classical trajectories propagated on the exact time-dependent potential energy surface reasonably conserve a distribution in configuration space that mimics one of the exact nuclear probability densities.

  9. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and the Isomap algorithm are used to classify and visualize this parameter vector, separating damage characteristics from those of healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visual effect of separating and grouping parameter vectors in three-dimensional space. PMID:29143772
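
The statistical features extracted from IMFs are typically time-domain statistics such as RMS, kurtosis, and crest factor. A minimal sketch on a synthetic signal with periodic impacts (the EEMD decomposition itself is omitted, and the signal parameters are made up):

```python
import numpy as np

def time_features(x):
    """Time-domain statistics often used as damage-sensitive features."""
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean((x - x.mean()) ** 4) / x.var() ** 2
    crest = np.max(np.abs(x)) / rms
    return np.array([rms, kurtosis, crest])

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
healthy = 0.1 * rng.normal(size=t.size)                  # broadband noise
faulty = healthy + (np.sin(2 * np.pi * 20 * t) > 0.99)   # periodic impacts
fh, ff = time_features(healthy), time_features(faulty)
```

Impulsive faults raise kurtosis sharply relative to Gaussian noise, which is why it is a standard ingredient of bearing-damage parameter vectors.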

  10. Experimental two-dimensional quantum walk on a photonic chip

    PubMed Central

    Lin, Xiao-Feng; Feng, Zhen; Chen, Jing-Yuan; Gao, Jun; Sun, Ke; Wang, Chao-Yue; Lai, Peng-Cheng; Xu, Xiao-Yun; Wang, Yao; Qiao, Lu-Feng; Yang, Ai-Lin

    2018-01-01

    Quantum walks, in virtue of the coherent superposition and quantum interference, have exponential superiority over their classical counterpart in applications of quantum searching and quantum simulation. The quantum-enhanced power is highly related to the state space of quantum walks, which can be expanded by enlarging the photon number and/or the dimensions of the evolution network, but the former is considerably challenging due to probabilistic generation of single photons and multiplicative loss. We demonstrate a two-dimensional continuous-time quantum walk by using the external geometry of photonic waveguide arrays, rather than the inner degree of freedoms of photons. Using femtosecond laser direct writing, we construct a large-scale three-dimensional structure that forms a two-dimensional lattice with up to 49 × 49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum walks using heralded single photons and single photon–level imaging. We analyze the quantum transport properties via observing the ballistic evolution pattern and the variance profile, which agree well with simulation results. We further reveal the transient nature that is the unique feature for quantum walks of beyond one dimension. An architecture that allows a quantum walk to freely evolve in all directions and at a large scale, combining with defect and disorder control, may bring up powerful and versatile quantum walk machines for classically intractable problems. PMID:29756040

  11. Experimental two-dimensional quantum walk on a photonic chip.

    PubMed

    Tang, Hao; Lin, Xiao-Feng; Feng, Zhen; Chen, Jing-Yuan; Gao, Jun; Sun, Ke; Wang, Chao-Yue; Lai, Peng-Cheng; Xu, Xiao-Yun; Wang, Yao; Qiao, Lu-Feng; Yang, Ai-Lin; Jin, Xian-Min

    2018-05-01

    Quantum walks, in virtue of the coherent superposition and quantum interference, have exponential superiority over their classical counterpart in applications of quantum searching and quantum simulation. The quantum-enhanced power is highly related to the state space of quantum walks, which can be expanded by enlarging the photon number and/or the dimensions of the evolution network, but the former is considerably challenging due to probabilistic generation of single photons and multiplicative loss. We demonstrate a two-dimensional continuous-time quantum walk by using the external geometry of photonic waveguide arrays, rather than the inner degree of freedoms of photons. Using femtosecond laser direct writing, we construct a large-scale three-dimensional structure that forms a two-dimensional lattice with up to 49 × 49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum walks using heralded single photons and single photon-level imaging. We analyze the quantum transport properties via observing the ballistic evolution pattern and the variance profile, which agree well with simulation results. We further reveal the transient nature that is the unique feature for quantum walks of beyond one dimension. An architecture that allows a quantum walk to freely evolve in all directions and at a large scale, combining with defect and disorder control, may bring up powerful and versatile quantum walk machines for classically intractable problems.

  12. Formation of Spiral-Arm Spurs and Bound Clouds in Vertically Stratified Galactic Gas Disks

    NASA Astrophysics Data System (ADS)

    Kim, Woong-Tae; Ostriker, Eve C.

    2006-07-01

    We investigate the growth of spiral-arm substructure in vertically stratified, self-gravitating, galactic gas disks, using local numerical MHD simulations. Our new models extend our previous two-dimensional studies, which showed that a magnetized spiral shock in a thin disk can undergo magneto-Jeans instability (MJI), resulting in regularly spaced interarm spur structures and massive gravitationally bound fragments. Similar spur (or "feather") features have recently been seen in high-resolution observations of several galaxies. Here we consider two sets of numerical models: two-dimensional simulations that use a "thick-disk" gravitational kernel, and three-dimensional simulations with explicit vertical stratification. Both models adopt an isothermal equation of state with c_s = 7 km s-1. When disks are sufficiently magnetized and self-gravitating, the result in both sorts of models is the growth of spiral-arm substructure similar to that in our previous razor-thin models. Reduced self-gravity due to nonzero disk thickness increases the spur spacing to ~10 times the Jeans length at the arm peak. Bound clouds that form from spur fragmentation have masses of ~(1-3)×10^7 M_solar each, similar to the largest observed GMCs. The mass-to-flux ratios and specific angular momenta of the bound condensations are lower than large-scale galactic values, as is true for observed GMCs. We find that unmagnetized or weakly magnetized two-dimensional models are unstable to the "wiggle instability" previously identified by Wada & Koda. However, our fully three-dimensional models do not show this effect. Nonsteady motions and strong vertical shear prevent coherent vortical structures from forming, evidently suppressing the wiggle instability. We also find no clear traces of Parker instability in the nonlinear spiral-arm substructures that emerge, although conceivably Parker modes may help seed the MJI at early stages, since azimuthal wavelengths are similar.

  13. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to obtain user-preference knowledge that cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of humans. With the human model, we design an evolution-control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
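
A classifier that is linear in a kernel-induced feature space but nonlinear in the input space can be sketched with a kernel perceptron. This toy XOR example is illustrative only (it is not the authors' human model): XOR-like preferences are not linearly separable in the original space, yet become separable in the RBF feature space.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_perceptron(X, y, epochs=20, gamma=1.0):
    """Kernel perceptron: a linear separator in feature space,
    represented dually by per-sample multipliers alpha."""
    alpha = np.zeros(len(X))
    K = np.array([[rbf(a, b, gamma) for b in X] for a in X])
    for _ in range(epochs):
        for i in range(len(X)):
            if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
                alpha[i] += 1.0                     # mistake-driven update
    return alpha, K

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([-1, 1, 1, -1])                        # XOR-like preference labels
alpha, K = kernel_perceptron(X, y, gamma=2.0)
pred = np.sign(K @ (alpha * y))
```

The decision function depends on the data only through kernel evaluations, so the linear model in feature space never needs explicit high-dimensional coordinates.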

  14. Improved method for predicting protein fold patterns with ensemble classifiers.

    PubMed

    Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C

    2012-01-27

    Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.

  15. Weighted Distance Functions Improve Analysis of High-Dimensional Data: Application to Molecular Dynamics Simulations.

    PubMed

    Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas

    2015-11-10

    Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.
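
The core operation, a per-feature weighted distance, can be sketched as follows; the weights here are arbitrary placeholders, whereas the paper derives them from effective rates of the simulation features.

```python
import numpy as np

def weighted_dist(a, b, w):
    """Euclidean distance with non-negative per-feature weights."""
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 1.0])
# down-weighting dimensions 1 and 2 treats them as fast/noisy features
slow_weighted = weighted_dist(a, b, np.array([1.0, 0.1, 0.1]))
uniform = weighted_dist(a, b, np.ones(3))
```

Down-weighted (irrelevant or fast) dimensions contribute little, so the distance, and any clustering built on it, is dominated by the slow degrees of freedom.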

  16. Dimensionality reduction of collective motion by principal manifolds

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.

  17. Real-space Wigner-Seitz Cells Imaging of Potassium on Graphite via Elastic Atomic Manipulation

    PubMed Central

    Yin, Feng; Koskinen, Pekka; Kulju, Sampo; Akola, Jaakko; Palmer, Richard E.

    2015-01-01

    Atomic manipulation in the scanning tunnelling microscopy, conventionally a tool to build nanostructures one atom at a time, is here employed to enable the atomic-scale imaging of a model low-dimensional system. Specifically, we use low-temperature STM to investigate an ultra thin film (4 atomic layers) of potassium created by epitaxial growth on a graphite substrate. The STM images display an unexpected honeycomb feature, which corresponds to a real-space visualization of the Wigner-Seitz cells of the close-packed surface K atoms. Density functional simulations indicate that this behaviour arises from the elastic, tip-induced vertical manipulation of potassium atoms during imaging, i.e. elastic atomic manipulation, and reflects the ultrasoft properties of the surface under strain. The method may be generally applicable to other soft e.g. molecular or biomolecular systems. PMID:25651973

  18. A Kernel-Based Low-Rank (KLR) Model for Low-Dimensional Manifold Recovery in Highly Accelerated Dynamic MRI.

    PubMed

    Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie

    2017-11-01

    While many low-rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all exploit low-rankness or sparsity in the input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to the kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low-rank formulation. The algorithm consists of manifold learning using a kernel, low-rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experiment results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.
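
Low-rank modeling in feature space is closely related to kernel PCA. A compact kernel-PCA sketch with an RBF kernel follows; this is not the authors' reconstruction algorithm (which adds data consistency and preimaging), and the data are synthetic.

```python
import numpy as np

def kernel_pca(X, k, gamma=0.5):
    """Top-k kernel principal component scores with an RBF kernel."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # center the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 6))
Z = kernel_pca(X, 2)
```

Truncating the eigendecomposition of the centered kernel matrix enforces low rank in the feature space, which is exactly the step a kernel low-rank model generalizes from the linear setting.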

  19. Spectral Dimensionality and Scale of Urban Radiance

    NASA Technical Reports Server (NTRS)

    Small, Christopher

    2001-01-01

    Characterization of urban radiance and reflectance is important for understanding the effects of solar energy flux on the urban environment as well as for satellite mapping of urban settlement patterns. Spectral mixture analyses of Landsat and Ikonos imagery suggest that the urban radiance field can very often be described with combinations of three or four spectral endmembers. Dimensionality estimates of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) radiance measurements of urban areas reveal the existence of 30 to 60 spectral dimensions. The extent to which broadband imagery collected by operational satellites can represent the higher dimensional mixing space is a function of both the spatial and spectral resolution of the sensor. AVIRIS imagery offers the spatial and spectral resolution necessary to investigate the scale dependence of the spectral dimensionality. Dimensionality estimates derived from Minimum Noise Fraction (MNF) eigenvalue distributions show a distinct scale dependence for AVIRIS radiance measurements of Milpitas, California. Apparent dimensionality diminishes from almost 40 to less than 10 spectral dimensions between scales of 8000 m and 300 m. The 10 to 30 m scale of most features in urban mosaics results in substantial spectral mixing at the 20 m scale of high altitude AVIRIS pixels. Much of the variance at pixel scales is therefore likely to result from actual differences in surface reflectance at pixel scales. Spatial smoothing and spectral subsampling of AVIRIS spectra both result in substantial loss of information and reduction of apparent dimensionality, but the primary spectral endmembers in all cases are analogous to those found in global analyses of Landsat and Ikonos imagery of other urban areas.

  20. Resolving runaway electron distributions in space, time, and energy

    DOE PAGES

    Paz-Soldan, Carlos; Cooper, C. M.; Aleynikov, P.; ...

    2018-05-01

    Areas of agreement and disagreement with present-day models of RE evolution are revealed by measuring MeV-level bremsstrahlung radiation from runaway electrons (REs) with a pinhole camera. Spatially-resolved measurements localize the RE beam, reveal energy-dependent RE transport, and can be used to perform full two-dimensional (energy and pitch-angle) inversions of the RE phase space distribution. Energy-resolved measurements find qualitative agreement with modeling on the role of collisional and synchrotron damping in modifying the RE distribution shape. Measurements are consistent with predictions of phase-space attractors that accumulate REs, with non-monotonic features observed in the distribution. Temporally-resolved measurements find qualitative agreement with modeling on the impact of collisional and synchrotron damping in varying the RE growth and decay rate. Anomalous RE loss is observed and found to be largest at low energy. As a result, possible roles for kinetic instability or spatial transport to resolve these anomalies are discussed.

  1. Resolving runaway electron distributions in space, time, and energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paz-Soldan, Carlos; Cooper, C. M.; Aleynikov, P.

    Areas of agreement and disagreement with present-day models of RE evolution are revealed by measuring MeV-level bremsstrahlung radiation from runaway electrons (REs) with a pinhole camera. Spatially-resolved measurements localize the RE beam, reveal energy-dependent RE transport, and can be used to perform full two-dimensional (energy and pitch-angle) inversions of the RE phase space distribution. Energy-resolved measurements find qualitative agreement with modeling on the role of collisional and synchrotron damping in modifying the RE distribution shape. Measurements are consistent with predictions of phase-space attractors that accumulate REs, with non-monotonic features observed in the distribution. Temporally-resolved measurements find qualitative agreement with modeling on the impact of collisional and synchrotron damping in varying the RE growth and decay rate. Anomalous RE loss is observed and found to be largest at low energy. As a result, possible roles for kinetic instability or spatial transport to resolve these anomalies are discussed.

  2. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
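    The SOM quantization stage described above can be sketched in a few lines of numpy. This is a hedged illustration, not the paper's implementation: the grid size, learning-rate schedule, and toy cluster data standing in for image samples are all assumptions.

```python
import numpy as np

def train_som(samples, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny self-organizing map: each grid node holds a codebook
    vector; the winner and its grid neighbours move toward each sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    codebook = rng.normal(size=(h * w, samples.shape[1]))
    # Grid coordinates of every node, used for the neighbourhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = samples[rng.integers(len(samples))]
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
        # Gaussian neighbourhood on the 2-D grid around the winner.
        d2 = ((coords - coords[winner]) ** 2).sum(axis=1)
        g = np.exp(-d2 / (2 * sigma ** 2))
        codebook += lr * g[:, None] * (x - codebook)
    return codebook, coords

def som_map(codebook, coords, x):
    """Project a sample to the 2-D grid position of its best-matching node."""
    return coords[np.argmin(((codebook - x) ** 2).sum(axis=1))]

# Toy data: two well-separated clusters of "image samples".
rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, scale=0.1, size=(100, 8))
b = rng.normal(loc=5.0, scale=0.1, size=(100, 8))
codebook, coords = train_som(np.vstack([a, b]))
pa = som_map(codebook, coords, a[0])
pb = som_map(codebook, coords, b[0])
```

    Inputs that are nearby in the original space land on nearby grid nodes, which is the topology-preserving dimensionality reduction the abstract relies on.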

  3. Supervised Classification Techniques for Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Jimenez, Luis O.

    1997-01-01

    The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of this technology is the AVIRIS system, which collects image data in 220 bands. The increased dimensionality of such hyperspectral data provides a challenge to the current techniques for analyzing such data. Human experience in three dimensional space tends to mislead one's intuition of geometrical and statistical properties in high dimensional space, properties which must guide our choices in the data analysis process. In this paper high dimensional space properties are discussed with their implications for high dimensional data analysis in order to illuminate the next steps that need to be taken for the next generation of hyperspectral data classifiers.
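    One of the counterintuitive high-dimensional properties alluded to above, the concentration of pairwise distances, is easy to demonstrate numerically. The dimensions and sample counts in this toy sketch are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(dim, n=500):
    """Relative contrast between the farthest and nearest neighbour of a
    reference point, for n uniform samples in a dim-dimensional unit cube."""
    pts = rng.uniform(size=(n, dim))
    d = np.linalg.norm(pts - pts[0], axis=1)[1:]
    return (d.max() - d.min()) / d.min()

# In 3-D the nearest and farthest points differ enormously; in 200-D
# almost all points sit at nearly the same distance from the reference.
low, high = contrast(3), contrast(200)
```

    This collapse of distance contrast is one reason nearest-neighbour-style reasoning that works in three dimensions degrades for hyperspectral data.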

  4. Balancing Newtonian gravity and spin to create localized structures

    NASA Astrophysics Data System (ADS)

    Bush, Michael; Lindner, John

    2015-03-01

    Using geometry and Newtonian physics, we design localized structures that do not require electromagnetic or other forces to resist implosion or explosion. In two-dimensional Euclidean space, we find an equilibrium configuration of a rotating ring of massive dust whose inward gravity is the centripetal force that spins it. We find similar solutions in three-dimensional Euclidean and hyperbolic spaces, but only in the limit of vanishing mass. Finally, in three-dimensional Euclidean space, we generalize the two-dimensional result by finding an equilibrium configuration of a spherical shell of massive dust that supports itself against gravitational collapse by spinning isoclinically in four dimensions so its three-dimensional acceleration is everywhere inward. These Newtonian ``atoms'' illuminate classical physics and geometry.

  5. An Energy Model of Place Cell Network in Three Dimensional Space.

    PubMed

    Wang, Yihong; Xu, Xuying; Wang, Rubin

    2018-01-01

    Place cells are important elements in the spatial representation system of the brain. A considerable amount of experimental data and several classical models exist in this area. However, an important question has not been addressed: how is three dimensional space represented by place cells? This question is preliminarily surveyed by the energy coding method in this research. The energy coding method argues that neural information can be expressed by neural energy, and that it is convenient to model and compute for neural systems due to the global and linearly additive properties of neural energy. Nevertheless, models of functional neural networks based on the energy coding method have not been established. In this work, we construct a place cell network model to represent three dimensional space on an energy level. Then we define the place field and place field center and test the locating performance in three dimensional space. The results imply that the model successfully simulates the basic properties of place cells. Each individual place cell obtains unique spatial selectivity. The place fields in three dimensional space vary in size and energy consumption. Furthermore, the locating error is limited to a certain level, and the simulated place field agrees with the experimental results. In conclusion, this is an effective model to represent three dimensional space by the energy method. The research verifies the energy efficiency principle of the brain during neural coding of three dimensional spatial information. It is a first step toward a complete three dimensional spatial representation system of the brain, and helps us further understand how the energy efficiency principle directs the locating, navigating, and path planning functions of the brain.

  6. node2vec: Scalable Feature Learning for Networks

    PubMed Central

    Grover, Aditya; Leskovec, Jure

    2016-01-01

    Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node’s network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks. PMID:27853626
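    The biased second-order random walk at the heart of node2vec can be sketched directly from its description. This is an illustrative re-implementation, not the authors' code: the toy graph and the return/in-out parameters p and q are arbitrary, and in the full method the walks are then fed to a skip-gram model to learn the embeddings.

```python
import random

def biased_walk(graph, start, length, p=1.0, q=0.5, seed=0):
    """node2vec-style 2nd-order walk: the unnormalised weight of stepping
    from v to x, having arrived from t, is 1/p if x == t, 1 if x is a
    neighbour of t, and 1/q otherwise."""
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < length:
        v = walk[-1]
        nbrs = sorted(graph[v])
        if not nbrs:
            break
        if len(walk) == 1:          # first step is a plain uniform step
            walk.append(rng.choice(nbrs))
            continue
        t = walk[-2]
        weights = []
        for x in nbrs:
            if x == t:
                weights.append(1.0 / p)   # return to the previous node
            elif x in graph[t]:
                weights.append(1.0)       # stay close (BFS-like)
            else:
                weights.append(1.0 / q)   # move outward (DFS-like)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk

# Toy graph: two triangles joined by a bridge at node 2-3.
graph = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
walk = biased_walk(graph, 0, length=20)
```

    Setting q < 1 favours outward (depth-first-like) exploration, while q > 1 keeps walks local, which is the flexible neighbourhood notion the abstract describes.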

  7. Multidimensionally encoded magnetic resonance imaging.

    PubMed

    Lin, Fa-Hsuan

    2013-07-01

    Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.

  8. Advanced Microwave Precipitation Radiometer (AMPR) for remote observation of precipitation

    NASA Technical Reports Server (NTRS)

    Galliano, J. A.; Platt, R. H.

    1990-01-01

    The design, development, and tests of the Advanced Microwave Precipitation Radiometer (AMPR) operating in the 10 to 85 GHz range specifically for precipitation retrieval and mesoscale storm system studies from a high altitude aircraft platform (i.e., ER-2) are described. The primary goals of AMPR are the exploitation of the scattering signal of precipitation at frequencies near 10, 19, 37, and 85 GHz together to unambiguously retrieve precipitation and storm structure and intensity information in support of proposed and planned space sensors in geostationary and low earth orbit, as well as storm-related field experiments. The development of AMPR will have an important impact on the interpretation of microwave radiances for rain retrievals over both land and ocean for the following reasons: (1) A scanning instrument, such as AMPR, will allow the unambiguous detection and analysis of features in two dimensional space, allowing an improved interpretation of signals in terms of cloud features, and microphysical and radiative processes; (2) AMPR will offer more accurate comparisons with ground-based radar data by feature matching since the navigation of the ER-2 platform can be expected to drift 3 to 4 km per hour of flight time; and (3) AMPR will allow underflights of the SSM/I satellite instrument with enough spatial coverage at the same frequencies to make meaningful comparisons of the data for precipitation studies.

  9. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments, without major modifications to current cameras, is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for the horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing tailored to the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present details of the construction, calibration, and images produced by this system, and discuss how the projected pattern design and the image processing algorithms are linked.

  10. Fractal geometry in an expanding, one-dimensional, Newtonian universe.

    PubMed

    Miller, Bruce N; Rouet, Jean-Louis; Le Guirriec, Emmanuel

    2007-09-01

    Observations of galaxies over large distances reveal the possibility of a fractal distribution of their positions. The source of fractal behavior is the lack of a length scale in the two body gravitational interaction. However, even with new, larger, sample sizes from recent surveys, it is difficult to extract information concerning fractal properties with confidence. Similarly, three-dimensional N-body simulations with a billion particles only provide a thousand particles per dimension, far too small for accurate conclusions. With one-dimensional models these limitations can be overcome by carrying out simulations with on the order of a quarter of a million particles without compromising the computation of the gravitational force. Here the multifractal properties of two of these models that incorporate different features of the dynamical equations governing the evolution of a matter dominated universe are compared. For each model at least two scaling regions are identified. By employing criteria from dynamical systems theory it is shown that only one of them can be geometrically significant. The results share important similarities with galaxy observations, such as hierarchical clustering and apparent bifractal geometry. They also provide insights concerning possible constraints on length and time scales for fractal structure. They clearly demonstrate that fractal geometry evolves in the mu (position, velocity) space. The observed patterns are simply a shadow (projection) of higher-dimensional structure.

  11. Cosmological perturbations in the (1 + 3 + 6)-dimensional space-times

    NASA Astrophysics Data System (ADS)

    Tomita, K.

    2014-12-01

    Cosmological perturbations in the (1+3+6)-dimensional space-times including photon gas without viscous processes are studied on the basis of Abbott et al.'s formalism [R. B. Abbott, B. Bednarz, and S. D. Ellis, Phys. Rev. D 33, 2147 (1986)]. Space-times consist of outer space (the 3-dimensional expanding section) and inner space (the 6-dimensional section). The inner space expands initially and later contracts. Abbott et al. derived only power-type solutions, which appear at the final stage of the space-times, in the small wave-number limit. In this paper, we derive not only small wave-number solutions, but also large wave-number solutions. It is found that the latter solutions depend on the two wave-numbers k_r and k_R (which are defined in the outer and inner spaces, respectively), and that the k_r-dependent and k_R-dependent parts dominate the total perturbations when (k_r/r(t))/(k_R/R(t)) ≫ 1 or ≪ 1, respectively, where r(t) and R(t) are the scale-factors in the outer and inner spaces. By comparing the behaviors of these perturbations, moreover, changes in the spectrum of perturbations in the outer space with time are discussed.

  12. Big-data-based edge biomarkers: study on dynamical drug sensitivity and resistance in individuals.

    PubMed

    Zeng, Tao; Zhang, Wanwei; Yu, Xiangtian; Liu, Xiaoping; Li, Meiyi; Chen, Luonan

    2016-07-01

    Big-data-based edge biomarkers are a new concept for characterizing disease features based on biomedical big data in a dynamical and network manner, which also provides alternative strategies to indicate disease status in single samples. This article gives a comprehensive review of big-data-based edge biomarkers for complex diseases in an individual patient, which are defined as biomarkers based on network information and high-dimensional data. Specifically, we first introduce the sources and structures of publicly accessible biomedical big data for edge biomarker and disease study. We show that biomedical big data are typically 'small-sample size in high-dimension space', i.e. small samples but with high dimensions on features (e.g. omics data) for each individual, in contrast to traditional big data in many other fields characterized as 'large-sample size in low-dimension space', i.e. big samples but with low dimensions on features. Then, we demonstrate the concept, model and algorithm for edge biomarkers and, further, big-data-based edge biomarkers. Unlike conventional biomarkers, edge biomarkers, e.g. module biomarkers in module network rewiring-analysis, are able to predict the disease state by learning differential associations between molecules, rather than differential expressions of molecules, during disease progression or treatment in individual patients. In particular, in contrast to using the information of common molecules or edges (i.e., molecule-pairs) across a population as in traditional network and edge biomarkers, big-data-based edge biomarkers are specific to each individual and thus can accurately evaluate the disease state by accounting for individual heterogeneity. Therefore, the measurement of big data in a high-dimensional space is required not only in the learning process but also in the diagnosing or predicting process for the tested individual.
Finally, we provide a case study analyzing temporal expression data from a malaria vaccine trial with big-data-based edge biomarkers from module network rewiring-analysis. The illustrative results show that the identified module biomarkers can accurately distinguish vaccines with or without protection and outperform previously reported gene signatures in terms of effectiveness and efficiency. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  13. The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data.

    PubMed

    Cheng, Qiang; Zhou, Hongbo; Cheng, Jie

    2011-06-01

    Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain local optimum instead of global optimum. Toward the selection of the globally optimal subset of features efficiently, we introduce a new selector--which we call the Fisher-Markov selector--to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results.
In pattern recognition and from a model selection viewpoint, our procedure says that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
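    The Fisher-Markov selector combines a Fisher-type criterion with Markov random field optimization; the paper's exact objective is more involved, but the underlying per-feature Fisher score is simple to sketch. The toy data below (one informative feature among noise) is an illustrative assumption.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher criterion: between-class variance of the class
    means over the pooled within-class variance (larger = more separable)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

# Toy problem: 5 features, only feature 2 carries class information.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5))
X[:, 2] += 3.0 * y
scores = fisher_scores(X, y)
best = int(np.argmax(scores))
```

    Ranking by such scores is noncombinatorial and linear in the number of features, the scalability property the abstract emphasizes.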

  14. Using 2D correlation analysis to enhance spectral information available from highly spatially resolved AFM-IR spectra

    NASA Astrophysics Data System (ADS)

    Marcott, Curtis; Lo, Michael; Hu, Qichi; Kjoller, Kevin; Boskey, Adele; Noda, Isao

    2014-07-01

    The recent combination of atomic force microscopy and infrared spectroscopy (AFM-IR) has led to the ability to obtain IR spectra with nanoscale spatial resolution, nearly two orders of magnitude better than conventional Fourier transform infrared (FT-IR) microspectroscopy. This advanced methodology can yield significantly sharper spectral features than are typically seen in conventional IR spectra of inhomogeneous materials, where a wider range of molecular environments are co-averaged by the larger sample cross section being probed. In this work, two-dimensional (2D) correlation analysis is used to examine position-sensitive spectral variations in datasets of closely spaced AFM-IR spectra. This analysis can reveal new key insights, providing a better understanding of spectral information that was previously hidden under broader overlapped spectral features. Two examples of the utility of this new approach are presented. In the first, 2D correlation analysis is applied to a set of AFM-IR spectra collected at 200-nm increments along a line through a nucleation site generated by remelting a small spot on a thin film of poly(3-hydroxybutyrate-co-3-hydroxyhexanoate). There are two different crystalline carbonyl band components near 1720 cm-1 that sequentially disappear before a band at 1740 cm-1, due to more disordered material, appears. In the second example, 2D correlation analysis is applied to a series of AFM-IR spectra spaced every 1 μm along a thin cross section of a bone sample, measured outward from an osteon center of bone growth. There are many changes in the amide I and phosphate band contours, suggesting that changes in the bone structure occur as the bone matures.
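    The synchronous part of Noda-style 2D correlation analysis is just the covariance of the mean-centered spectra across the perturbation (here, position) series. A minimal sketch with two anticorrelated toy bands follows; the band positions and widths merely echo the carbonyl example and are not taken from the data.

```python
import numpy as np

def synchronous_2d(spectra):
    """Synchronous 2D correlation spectrum: covariance of the mean-centered
    dynamic spectra over the perturbation series (rows = positions)."""
    dyn = spectra - spectra.mean(axis=0)
    return dyn.T @ dyn / (len(spectra) - 1)

# Toy series: a band at 1720 cm-1 shrinking while one at 1740 cm-1 grows.
wavenumber = np.linspace(1700, 1760, 120)
band = lambda c: np.exp(-0.5 * ((wavenumber - c) / 4.0) ** 2)
series = np.array([(1 - t) * band(1720) + t * band(1740)
                   for t in np.linspace(0, 1, 15)])
phi = synchronous_2d(series)
i = np.abs(wavenumber - 1720).argmin()
j = np.abs(wavenumber - 1740).argmin()
# Anticorrelated bands give a negative synchronous cross peak at (i, j).
```

    The asynchronous spectrum, which resolves the sequential order of the band changes described in the abstract, is built analogously using the Hilbert-Noda transform of the dynamic spectra.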

  15. Metallic Borides, La2Re3B7 and La3Re2B5, Featuring Extensive Boron–Boron Bonding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bugaris, Daniel E.; Malliakas, Christos D.; Chung, Duck Young

    We synthesized La2Re3B7 and La3Re2B5 in single-crystalline form from a molten La/Ni eutectic at 1000 °C, in the first example of the flux crystal growth of ternary rare-earth rhenium borides. Both compounds crystallize in their own orthorhombic structure types, with La2Re3B7 (space group Pcca) having lattice parameters a = 7.657(2) Å, b = 6.755(1) Å, and c = 11.617(2) Å, and La3Re2B5 (space group Pmma) having lattice parameters a = 10.809(2) Å, b = 5.287(1) Å, and c = 5.747(1) Å. Furthermore, the compounds possess three-dimensional framework structures that are built up from rhenium boride polyhedra and boron-boron bonding. La3Re2B5 features fairly common B2 dumbbells, whereas La2Re3B7 has unique one-dimensional subunits composed of alternating triangular B3 and trans-B4 zigzag chain fragments. Also observed in La3Re2B5 is an unusual coordination of B by an octahedron of La atoms. Electronic band structure calculations predict that La2Re3B7 is a semimetal, which is observed in the electrical resistivity data as measured on single crystals, with behavior obeying the Bloch-Grüneisen model and a room-temperature resistivity ρ300K of ~375 μΩ cm. The electronic band structure calculations also suggest that La3Re2B5 is a regular metal.

  16. Metallic Borides, La2Re3B7 and La3Re2B5, Featuring Extensive Boron–Boron Bonding

    DOE PAGES

    Bugaris, Daniel E.; Malliakas, Christos D.; Chung, Duck Young; ...

    2016-01-26

    We synthesized La2Re3B7 and La3Re2B5 in single-crystalline form from a molten La/Ni eutectic at 1000 °C, in the first example of the flux crystal growth of ternary rare-earth rhenium borides. Both compounds crystallize in their own orthorhombic structure types, with La2Re3B7 (space group Pcca) having lattice parameters a = 7.657(2) Å, b = 6.755(1) Å, and c = 11.617(2) Å, and La3Re2B5 (space group Pmma) having lattice parameters a = 10.809(2) Å, b = 5.287(1) Å, and c = 5.747(1) Å. Furthermore, the compounds possess three-dimensional framework structures that are built up from rhenium boride polyhedra and boron-boron bonding. La3Re2B5 features fairly common B2 dumbbells, whereas La2Re3B7 has unique one-dimensional subunits composed of alternating triangular B3 and trans-B4 zigzag chain fragments. Also observed in La3Re2B5 is an unusual coordination of B by an octahedron of La atoms. Electronic band structure calculations predict that La2Re3B7 is a semimetal, which is observed in the electrical resistivity data as measured on single crystals, with behavior obeying the Bloch-Grüneisen model and a room-temperature resistivity ρ300K of ~375 μΩ cm. The electronic band structure calculations also suggest that La3Re2B5 is a regular metal.

  17. Substrate Topography Induces a Crossover from 2D to 3D Behavior in Fibroblast Migration

    PubMed Central

    Ghibaudo, Marion; Trichet, Léa; Le Digabel, Jimmy; Richert, Alain; Hersen, Pascal; Ladoux, Benoît

    2009-01-01

    In a three-dimensional environment, cells migrate through complex topographical features. Using microstructured substrates, we investigate the role of substrate topography in cell adhesion and migration. To do so, fibroblasts are plated on chemically identical substrates composed of microfabricated pillars. When the dimensions of the pillars (i.e., the diameter, length, and spacing) are varied, migrating cells encounter alternating flat and rough surfaces that depend on the spacing between the pillars. Consequently, we show that substrate topography affects cell shape and migration by modifying cell-to-substrate interactions. Cells on micropillar substrates exhibit more elongated and branched shapes with fewer actin stress fibers compared with cells on flat surfaces. By analyzing the migration paths in various environments, we observe different mechanisms of cell migration, including a persistent type of migration, that depend on the organization of the topographical features. These responses can be attributed to a spatial reorganization of the actin cytoskeleton due to physical constraints and a preferential formation of focal adhesions on the micropillars, with an increased lifetime compared to that observed on flat surfaces. By changing myosin II activity, we show that actomyosin contractility is essential in the cellular response to micron-scale topographic signals. Finally, the analysis of cell movements at the frontier between flat and micropillar substrates shows that cell transmigration through the micropillar substrates depends on the spacing between the pillars. PMID:19580774

  18. Space Radar Image of Long Valley, California in 3-D

    NASA Image and Video Library

    1999-05-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. http://photojournal.jpl.nasa.gov/catalog/PIA01769

  19. Defect-Repairable Latent Feature Extraction of Driving Behavior via a Deep Sparse Autoencoder

    PubMed Central

    Taniguchi, Tadahiro; Takenaka, Kazuhito; Bando, Takashi

    2018-01-01

    Data representing driving behavior, as measured by various sensors installed in a vehicle, are collected as multi-dimensional sensor time-series data. These data often include redundant information, e.g., both the speed of wheels and the engine speed represent the velocity of the vehicle. Redundant information can be expected to complicate the data analysis, e.g., more factors need to be analyzed; even varying the levels of redundancy can influence the results of the analysis. We assume that the measured multi-dimensional sensor time-series data of driving behavior are generated from low-dimensional data shared by the many types of one-dimensional data of which multi-dimensional time-series data are composed. Meanwhile, sensor time-series data may be defective because of sensor failure. Therefore, another important function is to reduce the negative effect of defective data when extracting low-dimensional time-series data. This study proposes a defect-repairable feature extraction method based on a deep sparse autoencoder (DSAE) to extract low-dimensional time-series data. In the experiments, we show that DSAE provides high-performance latent feature extraction for driving behavior, even for defective sensor time-series data. In addition, we show that the negative effect of defects on the driving behavior segmentation task could be reduced using the latent features extracted by DSAE. PMID:29462931
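    A deep sparse autoencoder is too large for a short sketch, but its core mechanism, reconstruction loss plus an L1 penalty on hidden activations, can be shown with a single-hidden-layer numpy version. The architecture, toy "sensor" data, and hyperparameters here are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensor data: 6 redundant channels driven by 2 latent signals,
# mimicking e.g. wheel speed and engine speed both tracking velocity.
T = 400
latent = rng.normal(size=(T, 2))
mix = rng.normal(size=(2, 6))
X = np.tanh(latent @ mix) + 0.01 * rng.normal(size=(T, 6))

# Single-hidden-layer sparse autoencoder: the L1 term pushes hidden
# activations toward zero so few units code each sample.
H = 4
W1 = 0.1 * rng.normal(size=(6, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.normal(size=(H, 6)); b2 = np.zeros(6)
lam, lr = 1e-3, 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses = []
for epoch in range(300):
    h, Xhat = forward(X)
    err = Xhat - X
    losses.append((err ** 2).mean() + lam * np.abs(h).mean())
    # Backpropagate the reconstruction error and the L1 sparsity term.
    dXhat = 2 * err / err.size
    dW2 = h.T @ dXhat; db2 = dXhat.sum(axis=0)
    dh = dXhat @ W2.T + lam * np.sign(h) / h.size
    dpre = dh * (1 - h ** 2)
    dW1 = X.T @ dpre; db1 = dpre.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

    In the paper's setting, masking defective channels in the reconstruction loss is what lets the extracted latent features remain usable when a sensor fails.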

  20. Three-Dimensional View of Ionized Gas Conditions in Galaxies

    NASA Astrophysics Data System (ADS)

    Juneau, Stephanie; NOAO Data Lab

    2018-06-01

    We present a 3D version of common emission line diagnostic diagrams used to identify the source of ionization in galaxies, and highlight interesting features in this new 3D space, which are associated with global galaxy properties. Namely, we combine the BPT and Mass-Excitation (MEx) diagrams, and apply the combined diagram to a set of >300,000 galaxies from the SDSS survey. Among other features, we show that the usual “branch” of star-forming galaxies becomes a curved surface in the new 3D space. Understanding the underlying reasons can shed light on the nearby galaxy population but also aid our interpretation of high-redshift surveys, which indicate a strong evolution of emission line ratios. Despite efforts to explain the origin of this strong evolution, a consensus has not yet been reached. Yet, the implications are crucial to our understanding of galaxy growth across cosmic time, and in particular to assess how star forming regions differed at earlier times (gas properties? stellar properties? a combination?). We perform this analysis within the framework of the NOAO Data Lab (datalab.noao.edu) jointly with public visualization tools. The final workflow will be released publicly.

  1. Low-Dimensional Statistics of Anatomical Variability via Compact Representation of Image Deformations.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2016-10-01

    Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).
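    For contrast with the bandlimited probabilistic model, the baseline tangent-space-style analysis the paper compares against reduces, in its simplest form, to ordinary PCA on vectorized deformation descriptors. This minimal SVD sketch uses synthetic low-rank "deformations"; the sizes and rank are arbitrary.

```python
import numpy as np

def pca(D, k):
    """Ordinary PCA via SVD: rows of D are vectorized deformation fields.
    Returns the mean, the top-k principal directions, and all variances."""
    mean = D.mean(axis=0)
    U, s, Vt = np.linalg.svd(D - mean, full_matrices=False)
    return mean, Vt[:k], (s ** 2) / (len(D) - 1)

rng = np.random.default_rng(0)
# 30 subjects, 1000-dimensional descriptors lying near a 3-D subspace.
basis = rng.normal(size=(3, 1000))
coeff = rng.normal(size=(30, 3))
D = coeff @ basis + 0.01 * rng.normal(size=(30, 1000))
mean, components, var = pca(D, k=3)
explained = var[:3].sum() / var.sum()
```

    The "curse of dimensionality" the abstract cites shows up here as the 30 x 1000 shape: far fewer subjects than descriptor dimensions, which is what motivates working in a compact bandlimited space instead.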

  2. Modelling Parsing Constraints with High-Dimensional Context Space.

    ERIC Educational Resources Information Center

    Burgess, Curt; Lund, Kevin

    1997-01-01

    Presents a model of high-dimensional context space, the Hyperspace Analogue to Language (HAL), with a series of simulations modelling human empirical results. Proposes that HAL's context space can be used to provide a basic categorization of semantic and grammatical concepts; model certain aspects of morphological ambiguity in verbs; and provide…

  3. Lagrangian statistics in weakly forced two-dimensional turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivera, Michael K.; Ecke, Robert E.

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Furthermore, implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.

  4. Lagrangian statistics in weakly forced two-dimensional turbulence

    DOE PAGES

    Rivera, Michael K.; Ecke, Robert E.

    2016-01-14

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Furthermore, implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.

  5. Study of the X-Ray Diagnosis of Unstable Pelvic Fracture Displacements in Three-Dimensional Space and its Application in Closed Reduction.

    PubMed

    Shi, Chengdi; Cai, Leyi; Hu, Wei; Sun, Junying

    2017-09-19

    Objective: To study the method of X-ray diagnosis of unstable pelvic fractures displaced in three-dimensional (3D) space and its clinical application in closed reduction. Five models of hemipelvic displacement were made in an adult pelvic specimen. Anteroposterior radiographs of the pelvis were analyzed in PACS. The method of X-ray diagnosis was applied in closed reductions. From February 2012 to June 2016, 23 patients (15 men, 8 women; mean age, 43.4 years) with unstable pelvic fractures were included. All patients were treated by closed reduction and percutaneous cannulated screw fixation of the pelvic ring. According to Tile's classification, the patients were classified as type B1 in 7 cases, B2 in 3, B3 in 3, C1 in 5, C2 in 3, and C3 in 2. The operation time and intraoperative blood loss were recorded. Postoperative images were evaluated by Matta radiographic standards. Five models of displacement were made successfully. The X-ray features of the models were analyzed. For clinical patients, the average operation time was 44.8 min (range, 20-90 min) and the average intraoperative blood loss was 35.7 (range, 20-100) mL. According to the Matta standards, 7 cases were excellent, 12 cases were good, and 4 were fair. The displacements in 3D space of unstable pelvic fractures can be diagnosed rapidly by X-ray analysis to guide closed reduction, with a satisfactory clinical outcome.

  6. Three-Dimensional Rotating Wall Vessel-Derived Cell Culture Models for Studying Virus-Host Interactions

    PubMed Central

    Gardner, Jameson K.; Herbst-Kralovetz, Melissa M.

    2016-01-01

    The key to better understanding complex virus-host interactions is the utilization of robust three-dimensional (3D) human cell cultures that effectively recapitulate native tissue architecture and model the microenvironment. A lack of physiologically relevant animal models for many viruses has limited the elucidation of factors that influence viral pathogenesis and of complex host immune mechanisms. Conventional monolayer cell cultures may support viral infection, but are unable to form the tissue structures and complex microenvironments that mimic host physiology, therefore limiting their translational utility. The rotating wall vessel (RWV) bioreactor was designed by the National Aeronautics and Space Administration (NASA) to model microgravity and was later found to more accurately reproduce features of human tissue in vivo. Cells grown in RWV bioreactors develop in a low fluid-shear environment, which enables cells to form complex 3D tissue-like aggregates. A wide variety of human tissues (from neuronal to vaginal tissue) have been grown in RWV bioreactors and have been shown to support productive viral infection and physiologically meaningful host responses. The in vivo-like characteristics and cellular features of the human 3D RWV-derived aggregates make them ideal model systems to effectively recapitulate pathophysiology and host responses necessary to conduct rigorous basic science, preclinical and translational studies. PMID:27834891

  7. Fully Three-Dimensional Virtual-Reality System

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1994-01-01

    Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.

  8. Spectral feature design in high dimensional multispectral data

    NASA Technical Reports Server (NTRS)

    Chen, Chih-Chien Thomas; Landgrebe, David A.

    1988-01-01

    The High resolution Imaging Spectrometer (HIRIS) is designed to acquire images simultaneously in 192 spectral bands in the 0.4 to 2.5 micrometers wavelength region. It will make possible the collection of essentially continuous reflectance spectra at a spectral resolution sufficient to extract significantly enhanced amounts of information from return signals as compared to existing systems. The advantages of such high dimensional data come at a cost of increased system and data complexity. For example, since the finer the spectral resolution, the higher the data rate, it becomes impractical to design the sensor to be operated continuously. It is essential to find new ways to preprocess the data which reduce the data rate while at the same time maintaining the information content of the high dimensional signal produced. Four spectral feature design techniques are developed from the Weighted Karhunen-Loeve Transforms: (1) non-overlapping band feature selection algorithm; (2) overlapping band feature selection algorithm; (3) Walsh function approach; and (4) infinite clipped optimal function approach. The infinite clipped optimal function approach is chosen since the features are easiest to find and their classification performance is the best. After the preprocessed data has been received at the ground station, canonical analysis is further used to find the best set of features under the criterion that maximal class separability is achieved. Both 100 dimensional vegetation data and 200 dimensional soil data were used to test the spectral feature design system. It was shown that the infinite clipped versions of the first 16 optimal features had excellent classification performance. The overall probability of correct classification is over 90 percent while providing for a reduced downlink data rate by a factor of 10.
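The band-reduction idea in the record above can be sketched in a few lines: an (unweighted) Karhunen-Loeve transform on synthetic spectra, with the "infinite clipping" step approximated by keeping only the sign pattern of each eigenvector. The data, the sizes (200 bands reduced to 16 features), and the clipping rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Synthetic correlated "spectra": smooth random curves plus noise,
# standing in for 200-band HIRIS-like data.
rng = np.random.default_rng(0)
n_samples, n_bands, n_features = 500, 200, 16
base = np.cumsum(rng.normal(size=(n_samples, n_bands)), axis=1)
X = base + 0.1 * rng.normal(size=(n_samples, n_bands))

# KLT: eigenvectors of the band covariance matrix, sorted by eigenvalue.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n_samples - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
order = np.argsort(eigvals)[::-1]
klt_basis = eigvecs[:, order[:n_features]]   # top-16 eigenvectors

# "Infinite clipped" features: replace each eigenvector by its sign,
# turning each feature into a cheap +/-1 weighted band sum that a
# sensor could compute before downlink.
clipped_basis = np.sign(klt_basis)
features = Xc @ clipped_basis                # (500, 16) reduced data
print(features.shape)
```

Note the data-rate angle: each clipped feature needs only additions and subtractions of band values, which is why clipping is attractive for onboard preprocessing.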

  9. Dimensionality Reduction Through Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)

    1999-01-01

    In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
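The input-decimation idea described above can be sketched as follows: for each class, rank features by the magnitude of their correlation with that class's indicator, keep the top k, and train one simple classifier per subset, then average the votes. The synthetic data, k = 5, and the nearest-centroid base learner are assumptions for illustration; the paper's choice of base classifier may differ.

```python
import numpy as np

# Two-class synthetic data where only features 0 and 1 carry signal.
rng = np.random.default_rng(1)
n, d, k = 300, 20, 5
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * y
X[:, 1] -= 1.5 * y

def top_k_features(X, indicator, k):
    # Rank features by |correlation| with a per-class indicator vector.
    Xc = X - X.mean(axis=0)
    ind = indicator - indicator.mean()
    corr = (Xc * ind[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(ind) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

subsets = [top_k_features(X, (y == c).astype(float), k) for c in (0, 1)]

def centroid_predict(Xtr, ytr, Xte, cols):
    # Minimal base learner: assign each point to the nearer class centroid.
    c0 = Xtr[ytr == 0][:, cols].mean(axis=0)
    c1 = Xtr[ytr == 1][:, cols].mean(axis=0)
    d0 = np.linalg.norm(Xte[:, cols] - c0, axis=1)
    d1 = np.linalg.norm(Xte[:, cols] - c1, axis=1)
    return (d1 < d0).astype(int)

# Ensemble: average the votes of the per-subset classifiers.
votes = np.mean([centroid_predict(X, y, X, s) for s in subsets], axis=0)
pred = (votes >= 0.5).astype(int)
accuracy = (pred == y).mean()
```

Unlike PCA, the selected features here are original inputs, so they remain meaningful to a human user, which is the point the abstract makes against principal components.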

  10. Certain approximation problems for functions on the infinite-dimensional torus: Lipschitz spaces

    NASA Astrophysics Data System (ADS)

    Platonov, S. S.

    2018-02-01

    We consider some questions about the approximation of functions on the infinite-dimensional torus by trigonometric polynomials. Our main results are analogues of the direct and inverse theorems in the classical theory of approximation of periodic functions and a description of the Lipschitz spaces on the infinite-dimensional torus in terms of the best approximation.
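For context, the classical one-variable statements whose infinite-dimensional analogues the abstract announces have, in one standard form, the following shape (here E_n(f) is the best approximation of f by trigonometric polynomials of degree at most n, and ω is the modulus of continuity; the exact formulation on the infinite-dimensional torus in the paper may differ):

```latex
% Direct (Jackson-type) theorem:
E_n(f) \le C\,\omega\!\left(f, \tfrac{1}{n}\right)
% Inverse (Bernstein-type) theorem:
\omega\!\left(f, \tfrac{1}{n}\right) \le \frac{C}{n}\sum_{k=0}^{n} E_k(f)
% Lipschitz characterization, 0 < \alpha < 1:
f \in \mathrm{Lip}\,\alpha \iff E_n(f) = O\!\left(n^{-\alpha}\right)
```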

  11. Simulating Scenes In Outer Space

    NASA Technical Reports Server (NTRS)

    Callahan, John D.

    1989-01-01

    Multimission Interactive Picture Planner, MIP, is a computer program for scientifically accurate and fast three-dimensional animation of scenes in deep space. Versatile, reasonably comprehensive, and portable; runs on microcomputers. New techniques were developed to rapidly perform the calculations and transformations necessary to animate scenes in scientifically accurate three-dimensional space. Written in FORTRAN 77. Primarily designed to handle Voyager, Galileo, and Space Telescope; adapted to handle other missions.

  12. On the importance of an accurate representation of the initial state of the system in classical dynamics simulations

    NASA Astrophysics Data System (ADS)

    García-Vela, A.

    2000-05-01

    A definition of a quantum-type phase-space distribution is proposed in order to represent the initial state of the system in a classical dynamics simulation. The central idea is to define an initial quantum phase-space state of the system as the direct product of the coordinate and momentum representations of the quantum initial state. The phase-space distribution is then obtained as the square modulus of this phase-space state. The resulting phase-space distribution closely resembles the quantum nature of the system initial state. The initial conditions are sampled with the distribution, using a grid technique in phase space. With this type of sampling the distribution of initial conditions reproduces more faithfully the shape of the original phase-space distribution. The method is applied to generate initial conditions describing the three-dimensional state of the Ar-HCl cluster prepared by ultraviolet excitation. The photodissociation dynamics is simulated by classical trajectories, and the results are compared with those of a wave packet calculation. The classical and quantum descriptions are found in good agreement for those dynamical events less subject to quantum effects. The classical result fails to reproduce the quantum mechanical one for the more strongly quantum features of the dynamics. The properties and applicability of the phase-space distribution and the sampling technique proposed are discussed.
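The sampling idea above can be illustrated for a 1D harmonic-oscillator ground state (units with hbar = m = omega = 1): the phase-space distribution is taken as the square modulus of the direct product of the coordinate and momentum representations, i.e. |psi(x)|^2 |phi(p)|^2, and classical initial conditions are drawn from it on a grid. The grid extent and sizes below are arbitrary choices, and the 1D oscillator stands in for the paper's 3D Ar-HCl state.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 201)
p = np.linspace(-5, 5, 201)

# Ground-state densities in the coordinate and momentum representations.
px_x = np.exp(-x**2) / np.sqrt(np.pi)   # |psi(x)|^2
px_p = np.exp(-p**2) / np.sqrt(np.pi)   # |phi(p)|^2

# Joint grid distribution (separable by construction), normalized.
P = np.outer(px_x, px_p)
P /= P.sum()

# Sample (x, p) initial conditions for classical trajectories.
idx = rng.choice(P.size, size=20000, p=P.ravel())
ix, ip = np.unravel_index(idx, P.shape)
x0, p0 = x[ix], p[ip]

# The sampled spreads should reproduce the quantum widths <x^2> = <p^2> = 1/2.
print(x0.var(), p0.var())
```

Grid sampling, as opposed to independent random draws, is what lets the ensemble of initial conditions track the shape of the original phase-space distribution faithfully, which is the point the abstract emphasizes.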

  13. Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.

    PubMed

    Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa

    2017-03-01

    Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to achieve high efficiency. The goal of this paper is to select the 3D image features of margin sharpness and texture that are relevant to the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision than all 48 extracted features on similar-nodule retrieval. A feature-space dimensionality reduction of 83% achieved higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
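A minimal sketch of the retrieval evaluation described above: a synthetic set of "nodules" with 48 features of which only a small informative subset matters, and precision@10 measured by whether the 10 nearest neighbours share the query's class. The data, the choice of 8 informative features, and the Euclidean distance metric are assumptions; the paper's selected subset (2 margin + 6 texture attributes) came from real image data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 48
labels = rng.integers(0, 2, size=n)     # 0 = benign, 1 = malignant
X = rng.normal(size=(n, d))
X[:, :8] += 1.5 * labels[:, None]       # only the first 8 features informative

def precision_at_10(X, labels):
    # For each query nodule, the fraction of its 10 nearest neighbours
    # (excluding itself) that share its class, averaged over queries.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    nn = np.argsort(D, axis=1)[:, :10]
    return (labels[nn] == labels[:, None]).mean()

p_all = precision_at_10(X, labels)      # all 48 features
p_sel = precision_at_10(X[:, :8], labels)  # selected subset only

# Dropping the 40 uninformative features raises retrieval precision,
# mirroring the paper's finding that an 83% reduction helped.
print(p_sel, p_all)
```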

  14. The exaptive excellence of spandrels as a term and prototype

    PubMed Central

    Gould, Stephen Jay

    1997-01-01

    In 1979, Lewontin and I borrowed the architectural term “spandrel” (using the pendentives of San Marco in Venice as an example) to designate the class of forms and spaces that arise as necessary byproducts of another decision in design, and not as adaptations for direct utility in themselves. This proposal has generated a large literature featuring two critiques: (i) the terminological claim that the spandrels of San Marco are not true spandrels at all and (ii) the conceptual claim that they are adaptations and not byproducts. The features of the San Marco pendentives that we explicitly defined as spandrel-properties—their necessary number (four) and shape (roughly triangular)—are inevitable architectural byproducts, whatever the structural attributes of the pendentives themselves. The term spandrel may be extended from its particular architectural use for two-dimensional byproducts to the generality of “spaces left over,” a definition that properly includes the San Marco pendentives. Evolutionary biology needs such an explicit term for features arising as byproducts, rather than adaptations, whatever their subsequent exaptive utility. The concept of biological spandrels—including the examples here given of masculinized genitalia in female hyenas, exaptive use of an umbilicus as a brooding chamber by snails, the shoulder hump of the giant Irish deer, and several key features of human mentality—anchors the critique of overreliance upon adaptive scenarios in evolutionary explanation. Causes of historical origin must always be separated from current utilities; their conflation has seriously hampered the evolutionary analysis of form in the history of life. PMID:11038582

  15. EHR-based phenotyping: Bulk learning and evaluation.

    PubMed

    Chiu, Po-Hsiang; Hripcsak, George

    2017-06-01

    In data-driven phenotyping, a core computational task is to identify medical concepts and their variations from sources of electronic health records (EHR) to stratify phenotypic cohorts. A conventional analytic framework for phenotyping largely uses a manual knowledge engineering approach or a supervised learning approach where clinical cases are represented by variables encompassing diagnoses, medicinal treatments and laboratory tests, among others. In such a framework, tasks associated with feature engineering and data annotation remain a tedious and expensive exercise, resulting in poor scalability. In addition, certain clinical conditions, such as those that are rare and acute in nature, may never accumulate sufficient data over time, which poses a challenge to establishing accurate and informative statistical models. In this paper, we use infectious diseases as the domain of study to demonstrate a hierarchical learning method based on ensemble learning that attempts to address these issues through feature abstraction. We use a sparse annotation set to train and evaluate many phenotypes at once, which we call bulk learning. In this batch-phenotyping framework, disease cohort definitions can be learned from within the abstract feature space established by using multiple diseases as a substrate and diagnostic codes as surrogates. In particular, using surrogate labels for model training renders possible its subsequent evaluation using only a sparse annotated sample. Moreover, statistical models can be trained and evaluated, using the same sparse annotation, from within the abstract feature space of low dimensionality that encapsulates the shared clinical traits of these target diseases, collectively referred to as the bulk learning set. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Electronic Transport and Possible Superconductivity at Van Hove Singularities in Carbon Nanotubes.

    PubMed

    Yang, Y; Fedorov, G; Shafranjuk, S E; Klapwijk, T M; Cooper, B K; Lewis, R M; Lobb, C J; Barbara, P

    2015-12-09

    Van Hove singularities (VHSs) are a hallmark of reduced dimensionality, leading to a divergent density of states in one and two dimensions and predictions of new electronic properties when the Fermi energy is close to these divergences. In carbon nanotubes, VHSs mark the onset of new subbands. They are elusive in standard electronic transport characterization measurements because they do not typically appear as notable features and therefore their effect on the nanotube conductance is largely unexplored. Here we report conductance measurements of carbon nanotubes where VHSs are clearly revealed by interference patterns of the electronic wave functions, showing both a sharp increase of quantum capacitance, and a sharp reduction of energy level spacing, consistent with an upsurge of density of states. At VHSs, we also measure an anomalous increase of conductance below a temperature of about 30 K. We argue that this transport feature is consistent with the formation of Cooper pairs in the nanotube.

  17. Musings about beauty.

    PubMed

    Kintsch, Walter

    2012-01-01

    In this essay, I explore how cognitive science could illuminate the concept of beauty. Two results from the extensive literature on aesthetics guide my discussion. As the term "beauty" is overextended in general usage, I choose as my starting point the notion of "perfect form." Aesthetic theorists are in reasonable agreement about the criteria for perfect form. What do these criteria imply for mental representations that are experienced as beautiful? Complexity theory can be used to specify constraints on mental representations abstractly formulated as vectors in a high-dimensional space. A central feature of the proposed model is that perfect form depends both on features of the objects or events perceived and on the nature of the encoding strategies or model of the observer. A simple example illustrates the proposed calculations. A number of interesting implications that arise as a consequence of reformulating beauty in this way are noted. Copyright © 2012 Cognitive Science Society, Inc.

  18. Static and dynamic friction in sliding colloidal monolayers

    PubMed Central

    Vanossi, Andrea; Manini, Nicola; Tosatti, Erio

    2012-01-01

    In a pioneer experiment, Bohlein et al. realized the controlled sliding of two-dimensional colloidal crystals over laser-generated periodic or quasi-periodic potentials. Here we present realistic simulations and arguments that besides reproducing the main experimentally observed features give a first theoretical demonstration of the potential impact of colloid sliding in nanotribology. The free motion of solitons and antisolitons in the sliding of hard incommensurate crystals is contrasted with the soliton–antisoliton pair nucleation at the large static friction threshold Fs when the two lattices are commensurate and pinned. The frictional work directly extracted from particles’ velocities can be analyzed as a function of classic tribological parameters, including speed, spacing, and amplitude of the periodic potential (representing, respectively, the mismatch of the sliding interface and the corrugation, or “load”). These and other features suggestive of further experiments and insights promote colloid sliding to a unique friction study instrument. PMID:23019582

  19. Reliability of numerical wind tunnels for VAWT simulation

    NASA Astrophysics Data System (ADS)

    Raciti Castelli, M.; Masi, M.; Battisti, L.; Benini, E.; Brighenti, A.; Dossena, V.; Persico, G.

    2016-09-01

    Computational Fluid Dynamics (CFD) based on the Unsteady Reynolds Averaged Navier-Stokes (URANS) equations has long been widely used to study vertical axis wind turbines (VAWTs). Following a comprehensive experimental survey of the wakes downwind of a troposkien-shaped rotor, a campaign of two-dimensional simulations is presented here, with the aim of assessing the reliability of this approach in reproducing the main features of the flow and identifying areas needing additional research. Starting from a well-consolidated turbulence model (k-ω SST) and an unstructured grid typology, the main simulation settings are manipulated here in a convenient form to tackle rotating grids reproducing a VAWT operating in an open-jet wind tunnel. The dependence of the numerical predictions on the selected grid spacing is investigated, thus establishing the least refined grid size that is still capable of capturing relevant flow features such as integral quantities (rotor torque) and local ones (wake velocities).

  20. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
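The feature vector described above rests on the fact that the sorted side-length ratios of a triangle are invariant to translation, scaling, and in-plane rotation. The sketch below demonstrates that invariance for an arbitrary 5-point "object"; the point set and transform parameters are illustrative, and TRIDEC itself works on a coarse-coded 127 x 127 pixel field with a decision-tree classifier on top.

```python
import numpy as np
from itertools import combinations

def triangle_features(points):
    # Every triple of points defines a triangle; describe it by its side
    # lengths sorted and divided by the longest side (scale-free shape).
    feats = []
    for a, b, c in combinations(range(len(points)), 3):
        sides = np.sort([np.linalg.norm(points[a] - points[b]),
                         np.linalg.norm(points[b] - points[c]),
                         np.linalg.norm(points[c] - points[a])])
        feats.append(sides / sides[-1])
    return np.array(feats)

pts = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [6.0, 3.0], [1.0, 2.0]])

# Rotate by 0.7 rad, scale by 3, translate: the features should not change.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts2 = 3.0 * pts @ R.T + np.array([10.0, -4.0])

f1 = triangle_features(pts)
f2 = triangle_features(pts2)
print(np.allclose(f1, f2))
```

With 5 points there are C(5,3) = 10 triangles, so the feature vector has 10 rows; in the full 127 x 127 field the combinatorics explode, which is why coarse coding of the input is essential.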
