The origin of absorptive features in the two-dimensional electronic spectra of rhodopsin.
Farag, Marwa H; Jansen, Thomas L C; Knoester, Jasper
2018-05-09
In rhodopsin, the absorption of a photon causes the isomerization of the 11-cis isomer of the retinal chromophore to its all-trans isomer. This isomerization is known to occur through a conical intersection (CI) and the internal conversion through the CI is known to be vibrationally coherent. Recently measured two-dimensional electronic spectra (2DES) showed dramatic absorptive spectral features at early waiting times associated with the transition through the CI. The common two-state two-mode model Hamiltonian was unable to elucidate the origin of these features. To rationalize the source of these features, we employ a three-state three-mode model Hamiltonian where the hydrogen out-of-plane (HOOP) mode and a higher-lying electronic state are included. The 2DES of the retinal chromophore in rhodopsin are calculated and compared with the experiment. Our analysis shows that the source of the observed features in the measured 2DES is the excited state absorption to a higher-lying electronic state and not the HOOP mode.
Schädler, Marc René; Kollmeier, Birger
2015-04-01
To test if simultaneous spectral and temporal processing is required to extract robust features for automatic speech recognition (ASR), the robust spectro-temporal two-dimensional-Gabor filter bank (GBFB) front-end from Schädler, Meyer, and Kollmeier [J. Acoust. Soc. Am. 131, 4134-4151 (2012)] was decomposed into a spectral one-dimensional-Gabor filter bank and a temporal one-dimensional-Gabor filter bank. A feature set that is extracted with these separate spectral and temporal modulation filter banks was introduced, the separate Gabor filter bank (SGBFB) features, and evaluated on the CHiME (Computational Hearing in Multisource Environments) keywords-in-noise recognition task. From the perspective of robust ASR, the results showed that spectral and temporal processing can be performed independently and are not required to interact with each other. Using SGBFB features permitted the signal-to-noise ratio (SNR) to be lowered by 1.2 dB while still performing as well as the GBFB-based reference system, which corresponds to a relative improvement of the word error rate by 12.8%. Additionally, the real time factor of the spectro-temporal processing could be reduced by more than an order of magnitude. Compared to human listeners, the SNR needed to be 13 dB higher when using Mel-frequency cepstral coefficient features, 11 dB higher when using GBFB features, and 9 dB higher when using SGBFB features to achieve the same recognition performance.
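As a rough illustration of separating spectral and temporal modulation filtering, the following Python sketch applies independent 1-D Gabor filter banks along the spectral and temporal axes of a log mel-spectrogram and stacks the outputs. It is not the published GBFB/SGBFB implementation; the filter sizes, modulation frequencies, and function names are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import convolve1d

    def gabor_1d(omega, size=15, sigma=4.0):
        # Real 1-D Gabor filter: Gaussian envelope times a cosine carrier.
        n = np.arange(size) - size // 2
        g = np.exp(-0.5 * (n / sigma) ** 2) * np.cos(omega * n)
        return g - g.mean()                    # remove DC so the filter is zero-mean

    def sgbfb_features(log_mel, spec_omegas=(0.25, 0.5), temp_omegas=(0.12, 0.25)):
        # log_mel: (n_channels, n_frames) log mel-spectrogram.
        # Spectral filters run along the channel axis, temporal filters along the
        # frame axis; the outputs are stacked rather than combined into a joint
        # 2-D spectro-temporal filter.
        outputs = [convolve1d(log_mel, gabor_1d(w), axis=0, mode='reflect') for w in spec_omegas]
        outputs += [convolve1d(log_mel, gabor_1d(w), axis=1, mode='reflect') for w in temp_omegas]
        return np.stack(outputs)               # (n_filters, n_channels, n_frames)

    feats = sgbfb_features(np.random.randn(40, 200))   # toy 40-channel, 200-frame input
    print(feats.shape)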
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to keep storage and CPU costs within a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise. Discarding them using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
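The "importance sorting plus 1-bit quantization" idea can be sketched in a few lines of Python. The correlation-based importance score below is an assumption for illustration, not the criterion used in the paper, and the array sizes are placeholders.

    import numpy as np

    def select_and_binarize(X, y, k):
        # X: (n_samples, n_dims) FV/VLAD-style vectors; y: integer class labels.
        # Supervised importance score (illustrative): absolute correlation of each
        # dimension with the labels; keep the k highest-ranked dimensions.
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
        keep = np.argsort(corr)[::-1][:k]
        X_bin = (X[:, keep] > 0).astype(np.uint8)   # 1-bit quantization: keep only the sign
        return X_bin, keep

    X = np.random.randn(100, 4096)
    y = np.random.randint(0, 5, size=100)
    X_bin, keep = select_and_binarize(X, y, k=512)
    print(X_bin.shape, X_bin.dtype)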
The applications of a higher-dimensional Lie algebra and its decomposed subalgebras
Yu, Zhang; Zhang, Yufeng
2009-01-01
With the help of invertible linear transformations and the known Lie algebras, a higher-dimensional 6 × 6 matrix Lie algebra sμ(6) is constructed. It follows that a new type of loop algebra is presented. By using a (2 + 1)-dimensional partial-differential equation hierarchy we obtain the integrable coupling of the (2 + 1)-dimensional KN integrable hierarchy, then its corresponding Hamiltonian structure is worked out by employing the quadratic-form identity. Furthermore, a higher-dimensional Lie algebra, denoted by E, is given by decomposing the Lie algebra sμ(6), and then a discrete lattice integrable coupling system is produced. A remarkable feature of the Lie algebras sμ(6) and E is used to directly construct integrable couplings. PMID:20084092
Low-Dimensional Feature Representation for Instrument Identification
NASA Astrophysics Data System (ADS)
Ihara, Mizuki; Maeda, Shin-Ichi; Ikeda, Kazushi; Ishii, Shin
For monophonic music instrument identification, various feature extraction and selection methods have been proposed. One of the issues in instrument identification is that the same spectrum is not always observed even for the same instrument, due to differences in recording conditions. Therefore, it is important to find non-redundant instrument-specific features that maintain the information essential for high-quality instrument identification, in order to apply them to various instrumental music analyses. As such a dimensionality reduction method, the authors propose the use of linear projection methods: local Fisher discriminant analysis (LFDA) and LFDA combined with principal component analysis (PCA). After experimentally clarifying that raw power spectra are actually good for instrument classification, the authors reduced the feature dimensionality by LFDA or by PCA followed by LFDA (PCA-LFDA). The reduced features achieved reasonably high identification performance, comparable to or higher than that of the raw power spectra and that achieved by other existing studies. These results demonstrated that our LFDA and PCA-LFDA can successfully extract low-dimensional instrument features that maintain the characteristic information of the instruments.
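A hedged sketch of the "linear reduction before classification" pipeline, using scikit-learn's PCA and ordinary LDA as a stand-in for LFDA (LFDA is not part of scikit-learn); the data, dimensions, and classifier are placeholders, not the authors' setup.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Toy power-spectrum features (n_samples x n_bins) and instrument labels.
    X = np.abs(np.random.randn(300, 1024))
    y = np.random.randint(0, 5, size=300)

    # PCA removes redundant spectral dimensions before the supervised projection.
    model = make_pipeline(
        PCA(n_components=40),
        LinearDiscriminantAnalysis(n_components=4),   # stand-in for LFDA
        KNeighborsClassifier(n_neighbors=3),
    )
    model.fit(X, y)
    print(model.score(X, y))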
NASA Technical Reports Server (NTRS)
Hess, J. L.; Friedman, D. M.
1982-01-01
A three-dimensional higher-order panel method was specialized to the case of inlets with auxiliary inlets. The resulting program has a number of graphical input-output features to make it highly useful to the designer. The various aspects of the program are described and instructions for its use are presented.
World-volume effective theory for higher-dimensional black holes.
Emparan, Roberto; Harmark, Troels; Niarchos, Vasilis; Obers, Niels A
2009-05-15
We argue that the main feature behind novel properties of higher-dimensional black holes, compared to four-dimensional ones, is that their horizons can have two characteristic lengths of very different size. We develop a long-distance world-volume effective theory that captures the black hole dynamics at scales much larger than the short scale. In this limit the black hole is regarded as a blackfold: a black brane (possibly boosted locally) whose world volume spans a curved submanifold of the spacetime. This approach reveals black objects with novel horizon geometries and topologies more complex than the black ring, but more generally it provides a new organizing framework for the dynamics of higher-dimensional black holes.
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method is that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized. By investigating the characteristics of high dimensional data, the reason why the second order statistics must be taken into account is explained. Given this importance, a means of representing the second order statistics is needed, and a method to visualize statistics using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first and second order statistics.
Signatures of extra dimensions in gravitational waves from black hole quasinormal modes
NASA Astrophysics Data System (ADS)
Chakraborty, Sumanta; Chakravarti, Kabir; Bose, Sukanta; SenGupta, Soumitra
2018-05-01
In this work, we have derived the evolution equation for gravitational perturbation in four-dimensional spacetime in the presence of a spatial extra dimension. The evolution equation is derived by perturbing the effective gravitational field equations on the four-dimensional spacetime, which inherits nontrivial higher-dimensional effects. Note that this is different from the perturbation of the five-dimensional gravitational field equations that exists in the literature, and it possesses quantitatively new features. The gravitational perturbation has further been decomposed into a purely four-dimensional part and another piece that depends on the extra dimension. The four-dimensional gravitational perturbation now admits massive propagating degrees of freedom, owing to the existence of higher dimensions. We have also studied the influence of these massive propagating modes on the quasinormal mode frequencies, signaling the higher-dimensional nature of the spacetime, and have contrasted these massive modes with the massless modes in general relativity. Surprisingly, it turns out that the massive modes experience damping much smaller than that of the massless modes in general relativity and may even dominate over the general relativity contribution if one observes the ringdown phase of a black hole merger event at sufficiently late times. Furthermore, the whole analytical framework has been supplemented by a fully numerical Cauchy evolution. In this context, we have shown that, except for minute details, the overall features of the gravitational perturbations are captured both in the Cauchy evolution and in the analysis of quasinormal modes. The implications for observations of black holes with LIGO and proposed space missions such as LISA are also discussed.
Spectral feature design in high dimensional multispectral data
NASA Technical Reports Server (NTRS)
Chen, Chih-Chien Thomas; Landgrebe, David A.
1988-01-01
The High resolution Imaging Spectrometer (HIRIS) is designed to acquire images simultaneously in 192 spectral bands in the 0.4 to 2.5 micrometers wavelength region. It will make possible the collection of essentially continuous reflectance spectra at a spectral resolution sufficient to extract significantly enhanced amounts of information from return signals as compared to existing systems. The advantages of such high dimensional data come at a cost of increased system and data complexity. For example, since the finer the spectral resolution, the higher the data rate, it becomes impractical to design the sensor to be operated continuously. It is essential to find new ways to preprocess the data which reduce the data rate while at the same time maintaining the information content of the high dimensional signal produced. Four spectral feature design techniques are developed from the Weighted Karhunen-Loeve Transforms: (1) non-overlapping band feature selection algorithm; (2) overlapping band feature selection algorithm; (3) Walsh function approach; and (4) infinite clipped optimal function approach. The infinite clipped optimal function approach is chosen since the features are easiest to find and their classification performance is the best. After the preprocessed data has been received at the ground station, canonical analysis is further used to find the best set of features under the criterion that maximal class separability is achieved. Both 100 dimensional vegetation data and 200 dimensional soil data were used to test the spectral feature design system. It was shown that the infinite clipped versions of the first 16 optimal features had excellent classification performance. The overall probability of correct classification is over 90 percent while providing for a reduced downlink data rate by a factor of 10.
Deep neural networks for texture classification-A theoretical analysis.
Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant
2018-01-01
We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional as compared to handwritten digits or other object recognition datasets and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.
Brane surgery: energy conditions, traversable wormholes, and voids
NASA Astrophysics Data System (ADS)
Barceló, C.; Visser, M.
2000-09-01
Branes are ubiquitous elements of any low-energy limit of string theory. We point out that negative tension branes violate all the standard energy conditions of the higher-dimensional spacetime they are embedded in; this opens the door to very peculiar solutions of the higher-dimensional Einstein equations. Building upon the (3+1)-dimensional implementation of fundamental string theory, we illustrate the possibilities by considering a toy model consisting of a (2+1)-dimensional brane propagating through our observable (3+1)-dimensional universe. Developing a notion of "brane surgery", based on the Israel-Lanczos-Sen "thin shell" formalism of general relativity, we analyze the dynamics and find traversable wormholes, closed baby universes, voids (holes in the spacetime manifold), and an evasion (not a violation) of both the singularity theorems and the positive mass theorem. These features appear generic to any brane model that permits negative tension branes: This includes the Randall-Sundrum models and their variants.
A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series
NASA Astrophysics Data System (ADS)
Selvi, Eşref; Selver, M. Alper; Güzeliş, Cüneyt; Dicle, Oğuz
2014-03-01
Segmentation of anatomical structures from medical image series is an ongoing field of research. Although organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To be able to use slice-by-slice techniques effectively, adjacent slice information, which represents the likelihood that a region belongs to the structure of interest, plays a critical role. Recent studies focus on using the distance transform directly as a feature or on using it to increase the feature values in the vicinity of the search area. This study presents a novel approach by constructing a higher order neural network whose input layer receives features together with their products with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. The application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher order classification neural networks.
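A minimal sketch of how a distance-transform-derived map from an adjacent slice can be multiplied with per-pixel features to form higher-order network inputs of this kind; the proximity map, shapes, and names are illustrative assumptions, not the authors' exact construction.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def higher_order_inputs(feature_maps, adjacent_mask):
        # feature_maps: (n_features, H, W) per-pixel features of the current slice.
        # adjacent_mask: binary segmentation of the neighbouring slice.
        # A proximity map derived from the distance transform stands in for the
        # likelihood that a pixel belongs to the structure of interest; multiplying
        # it with each feature adds second-order (feature x distance) input terms.
        dist = distance_transform_edt(adjacent_mask == 0)
        proximity = 1.0 / (1.0 + dist)            # 1 inside the mask, decaying outside
        products = feature_maps * proximity[None, :, :]
        return np.concatenate([feature_maps, products], axis=0)

    feats = np.random.rand(5, 64, 64)
    mask = np.zeros((64, 64))
    mask[20:40, 20:40] = 1
    print(higher_order_inputs(feats, mask).shape)   # (10, 64, 64)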
Resonance Raman signature of intertube excitons in compositionally-defined carbon nanotube bundles.
Simpson, Jeffrey R; Roslyak, Oleksiy; Duque, Juan G; Hároz, Erik H; Crochet, Jared J; Telg, Hagen; Piryatinski, Andrei; Walker, Angela R Hight; Doorn, Stephen K
2018-02-12
Electronic interactions in low-dimensional nanomaterial heterostructures can lead to novel optical responses arising from exciton delocalization over the constituent materials. Similar phenomena have been suggested to arise between closely interacting semiconducting carbon nanotubes of identical structure. Such behavior in carbon nanotubes has potential to generate new exciton physics, impact exciton transport mechanisms in nanotube networks, and place nanotubes as one-dimensional models for such behaviors in systems of higher dimensionality. Here we use resonance Raman spectroscopy to probe intertube interactions in (6,5) chirality-enriched bundles. Raman excitation profiles for the radial breathing mode and G-mode display a previously unobserved sharp resonance feature. We show the feature is evidence for creation of intertube excitons and is identified as a Fano resonance arising from the interaction between intratube and intertube excitons. The universality of the model suggests that similar Raman excitation profile features may be observed for interlayer exciton resonances in 2D multilayered systems.
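For context, the asymmetric resonance profile referred to here is conventionally described by the textbook Fano lineshape (a standard expression, not a formula quoted from this paper):

    F(\epsilon) = \frac{(q + \epsilon)^2}{1 + \epsilon^2}, \qquad \epsilon = \frac{2(E - E_r)}{\Gamma},

where E_r is the resonance energy, \Gamma its width, and q the asymmetry parameter set by the relative coupling of the two interfering channels (here, the intratube and intertube exciton pathways).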
Pilling, Michael; Gellatly, Angus
2013-07-01
We investigated the influence of dimensional set on report of object feature information using an immediate memory probe task. Participants viewed displays containing up to 36 coloured geometric shapes which were presented for several hundred milliseconds before one item was abruptly occluded by a probe. A cue presented simultaneously with the probe instructed participants to report either about the colour or shape of the probe item. A dimensional set towards the colour or shape of the presented items was induced by manipulating task probability - the relative probability with which the two feature dimensions required report. This was done across two participant groups: One group was given trials where there was a higher report probability of colour, the other a higher report probability of shape. Two experiments showed that features were reported most accurately when they were of high task probability, though in both cases the effect was largely driven by the colour dimension. Importantly the task probability effect did not interact with display set size. This is interpreted as tentative evidence that this manipulation influences feature processing in a global manner and at a stage prior to visual short term memory.
ERIC Educational Resources Information Center
Arendasy, Martin E.; Sommer, Markus
2010-01-01
In complex three-dimensional mental rotation tasks males have been reported to score up to one standard deviation higher than females. However, this effect size estimate could be compromised by the presence of gender bias at the item level, which calls the validity of purely quantitative performance comparisons into question. We hypothesized that…
Three-dimensional massive gravity and the bigravity black hole
NASA Astrophysics Data System (ADS)
Bañados, Máximo; Theisen, Stefan
2009-11-01
We study three-dimensional massive gravity formulated as a theory with two dynamical metrics, like the f-g theories of Isham-Salam and Strathdee. The action is parity preserving and has no higher derivative terms. The spectrum contains a single massive graviton. This theory has several features discussed recently in TMG and NMG. We find warped black holes, a critical point, and generalized Brown-Henneaux boundary conditions.
A real negative selection algorithm with evolutionary preference for anomaly detection
NASA Astrophysics Data System (ADS)
Yang, Tao; Chen, Wen; Li, Tao
2017-04-01
Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm termination threshold and generate detectors randomly. With increasing dimensions, the data samples can reside in a low-dimensional subspace, so that the traditional detectors cannot effectively distinguish these samples. Furthermore, in high-dimensional feature space, c0 cannot exactly reflect the coverage rate of the detector set over the nonself space, and it can lead the algorithm to terminate unexpectedly when the number of detectors is insufficient. These shortcomings make the traditional RNSAs perform poorly in high-dimensional feature space. Based upon "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", "low-dimensional target subspace" and "known nonself feature" as the evolutionary preference to guide the generation of detectors, thus ensuring that the detectors can cover the nonself space more effectively. Besides, RNSAP uses redundancy to replace c0 as the termination threshold; in this way RNSAP can generate adequate detectors under a proper convergence rate. The theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP can achieve a higher detection rate with fewer detectors and less computing cost.
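To make the negative-selection idea concrete, here is a minimal V-detector-style sketch in Python (random detector generation with rejection against self samples). RNSAP's evolutionary preference and redundancy-based termination are not modelled, and all radii and counts are illustrative assumptions.

    import numpy as np

    def generate_detectors(self_samples, n_detectors, self_radius, seed=None):
        # Candidate centers are drawn uniformly in [0, 1]^d and kept only if they
        # lie outside the self region; each detector's radius is its distance to
        # the nearest self sample minus the self radius.
        rng = np.random.default_rng(seed)
        d = self_samples.shape[1]
        detectors = []
        while len(detectors) < n_detectors:
            c = rng.random(d)
            dist = np.min(np.linalg.norm(self_samples - c, axis=1))
            if dist > self_radius:                    # candidate lies in nonself space
                detectors.append((c, dist - self_radius))
        return detectors

    def is_anomalous(x, detectors):
        return any(np.linalg.norm(x - c) <= r for c, r in detectors)

    self_samples = np.random.rand(200, 2) * 0.3       # "self" clustered near the origin
    detectors = generate_detectors(self_samples, n_detectors=50, self_radius=0.05, seed=0)
    print(is_anomalous(np.array([0.9, 0.9]), detectors))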
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands that the features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few studies investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensionality and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints for personal recognition systems are merged, introducing a local image descriptor for 2-D palmprint-based recognition systems, named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (the results of applying BSIF descriptors of different sizes with length 12) are concatenated into one to produce a large feature vector. 3-D palmprints contain the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied for reconstructing illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from SQI images, Gabor wavelets are defined and used. Indeed, dimensionality reduction methods have shown their ability in biometric systems. Given this, a principal component analysis (PCA)+linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. Then, a comparison was made between the proposed algorithm and other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI+Gabor wavelets+PCA+LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
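A rough sketch of the B-BSIF construction, concatenating histograms of binarized filter-response codes computed at several filter sizes. Real BSIF uses ICA-learned filters; the random filters and the filter sizes below are stand-ins to keep the example self-contained (only the 12-bit code length is taken from the abstract).

    import numpy as np
    from scipy.signal import convolve2d

    def bsif_code(img, filters):
        # Binarize the response of each filter and pack the bits into one code image.
        code = np.zeros(img.shape, dtype=np.int32)
        for bit, f in enumerate(filters):
            resp = convolve2d(img, f, mode='same', boundary='symm')
            code |= (resp > 0).astype(np.int32) << bit
        return code

    def b_bsif_descriptor(img, filter_sizes=(3, 5, 7, 9, 11), n_bits=12, seed=0):
        # For each filter size, compute an n_bits-bit code image, histogram it,
        # and concatenate the histograms into one long feature vector.
        rng = np.random.default_rng(seed)
        hists = []
        for s in filter_sizes:
            filters = [rng.standard_normal((s, s)) for _ in range(n_bits)]
            code = bsif_code(img, filters)
            hist, _ = np.histogram(code, bins=2 ** n_bits, range=(0, 2 ** n_bits))
            hists.append(hist / hist.sum())
        return np.concatenate(hists)

    desc = b_bsif_descriptor(np.random.rand(64, 64))
    print(desc.shape)   # 5 sizes x 4096 bins = (20480,)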
New Features for Neuron Classification.
Hernández-Pérez, Leonardo A; Delgado-Castillo, Duniel; Martín-Pérez, Rainer; Orozco-Morales, Rubén; Lorenzo-Ginori, Juan V
2018-04-28
This paper addresses the problem of obtaining new neuron features capable of improving the results of neuron classification. Most studies on neuron classification using morphological features have been based on Euclidean geometry. Here, three one-dimensional (1D) time series are instead derived from the three-dimensional (3D) structure of the neuron, and a spatial time series is finally constructed from which the features are calculated. Digitally reconstructed neurons were separated into control and pathological sets, which are related to three categories of alterations caused by epilepsy, Alzheimer's disease (long and local projections), and ischemia. These neuron sets were then subjected to supervised classification and the results were compared considering three sets of features: morphological features, features obtained from the time series, and a combination of both. The best results were obtained using features from the time series, which outperformed the classification using only morphological features, showing higher correct classification rates with differences of 5.15%, 3.75%, and 5.33% for epilepsy and Alzheimer's disease (long and local projections), respectively. The morphological features were better for the ischemia set, with a difference of 3.05%. Features like variance, Spearman auto-correlation, partial auto-correlation, mutual information, and local minima and maxima, all related to the time series, exhibited the best performance. We also compared different evaluators, among which ReliefF was the best ranked.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Froning, H. David; Meholic, Gregory V.
2010-01-28
This paper briefly explores higher dimensional spacetimes that extend Meholic's visualizable, fluidic views of subluminal-luminal-superluminal flight; gravity, inertia, light quanta, and electromagnetism from 2-D to 3-D representations. Although 3-D representations have the potential to better model features of Meholic's most fundamental entities (the Transluminal Energy Quantum) and of the zero-point quantum vacuum that pervades all space, the more complex 3-D representations lose some of the clarity of Meholic's 2-D representations of the subluminal and superluminal realms. So, much new work would be needed to replace Meholic's 2-D views of reality with 3-D ones.
Derivation of an artificial gene to improve classification accuracy upon gene selection.
Seo, Minseok; Oh, Sejong
2012-02-01
Classification analysis has been developed continuously since 1936. This research field has advanced as a result of the development of classifiers such as KNN, ANN, and SVM, as well as through advances in data preprocessing. Feature (gene) selection is required for very high dimensional data such as microarray data before classification. The goal of feature selection is to choose a subset of informative features that reduces processing time and provides higher classification accuracy. In this study, we devised a method of artificial gene making (AGM) for microarray data to improve classification accuracy. Our artificial gene was derived from a whole microarray dataset and combined with the result of gene selection for classification analysis. We experimentally confirmed a clear improvement of classification accuracy after inserting the artificial gene. Our artificial gene worked well with popular feature (gene) selection algorithms and classifiers. The proposed approach can be applied to any type of high dimensional dataset.
Induced gravity on intersecting brane worlds. II. Cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corradini, Olindo; Koyama, Kazuya; Tasinato, Gianmassimo
2008-12-15
We explore cosmology of intersecting brane worlds with induced gravity on the branes. We find the cosmological equations that control the evolution of a moving codimension-one brane and a codimension-two brane that sits at the intersection. We study the Friedmann equation at the intersection, finding new contributions from the six-dimensional bulk. These higher dimensional contributions allow us to find new examples of self-accelerating configurations for the codimension-two brane at the intersection and we discuss their features.
NASA Astrophysics Data System (ADS)
Taylor, M. B.
2009-09-01
The new plotting functionality in version 2.0 of STILTS is described. STILTS is a mature and powerful package for all kinds of table manipulation, and this version adds facilities for generating plots from one or more tables to its existing wide range of non-graphical capabilities. 2- and 3-dimensional scatter plots and 1-dimensional histograms may be generated using highly configurable style parameters. Features include multiple dataset overplotting, variable transparency, 1-, 2- or 3-dimensional symmetric or asymmetric error bars, higher-dimensional visualization using color, and textual point labeling. Vector and bitmapped output formats are supported. The plotting options provide enough flexibility to perform meaningful visualization on datasets from a few points up to tens of millions. Arbitrarily large datasets can be plotted without heavy memory usage.
n-SIFT: n-dimensional scale invariant feature transform.
Cheung, Warren; Hamarneh, Ghassan
2009-09-01
We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.
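A minimal sketch of the gradient-orientation histogramming that n-SIFT generalizes to arbitrary dimensionality: gradients are converted to hyperspherical angles and binned with a multidimensional histogram weighted by gradient magnitude. The angle convention and bin counts here are simplifications, not the paper's exact descriptor.

    import numpy as np

    def hyperspherical_angles(grad):
        # grad: (n_dims, ...) gradient components at each voxel.
        # Returns (n_dims - 1) angles per voxel, a simple hyperspherical
        # parameterization of the gradient direction.
        n = grad.shape[0]
        angles = []
        for i in range(n - 1):
            tail = np.sqrt(np.sum(grad[i + 1:] ** 2, axis=0))
            angles.append(np.arctan2(tail, grad[i]))
        return np.stack(angles)

    def orientation_histogram(volume, bins=8):
        grad = np.stack(np.gradient(volume.astype(float)))   # (n_dims, ...)
        mag = np.sqrt(np.sum(grad ** 2, axis=0))
        ang = hyperspherical_angles(grad)
        samples = ang.reshape(ang.shape[0], -1).T             # one row per voxel
        hist, _ = np.histogramdd(samples, bins=bins, weights=mag.ravel())
        return hist / (hist.sum() + 1e-12)

    vol = np.random.rand(32, 32, 32)          # a 3-D scalar image
    print(orientation_histogram(vol).shape)   # (8, 8): two angles in 3-D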
Evidence of tampering in watermark identification
NASA Astrophysics Data System (ADS)
McLauchlan, Lifford; Mehrübeoglu, Mehrübe
2009-08-01
In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
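A hedged sketch of that feature pipeline: a one-level 2-D DWT (via PyWavelets), PCA over the detail coefficients, and higher-order statistics of the principal components plus the eigenvalues as candidate tamper-evident features. The subband arrangement and the particular statistics are illustrative assumptions, not the authors' exact design.

    import numpy as np
    import pywt
    from scipy.stats import skew, kurtosis
    from sklearn.decomposition import PCA

    def tamper_features(img, n_components=8):
        # One-level 2-D DWT; the detail subbands carry most watermark energy.
        cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
        coeffs = np.stack([cH, cV, cD]).reshape(3, -1).T     # rows = (H, V, D) triples
        pca = PCA(n_components=min(n_components, coeffs.shape[1]))
        pcs = pca.fit_transform(coeffs)
        # Higher-order statistics of the principal components plus the eigenvalues
        # form the feature vector used to separate tampered from untampered copies.
        return np.concatenate([skew(pcs, axis=0), kurtosis(pcs, axis=0),
                               pca.explained_variance_])

    print(tamper_features(np.random.rand(128, 128)).shape)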
LDA boost classification: boosting by topics
NASA Astrophysics Data System (ADS)
Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li
2012-12-01
AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems, so it affects classification performance seriously. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features. In this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm and has higher precision. Moreover, a two-stage iterative weighted method called Cute Integration is proposed for improving the accuracy by integrating weak classifiers into a strong classifier in a more rational way. Mutual Information is used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized for generating the strong classifier. Experimental results reveal that LDABoost performs categorization in a low-dimensional space and has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
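The core idea of using latent topics rather than words as features can be sketched with scikit-learn. The tiny corpus below is synthetic, and a single Gaussian NB classifier stands in for LDABoost's boosted committee of improved NB learners.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    docs = ["the rocket engine burned liquid fuel",
            "the car engine needs new spark plugs",
            "astronauts launched the rocket into orbit",
            "the mechanic repaired the car transmission",
            "orbit insertion requires a second engine burn",
            "tires and brakes were replaced on the car"]
    labels = [0, 1, 0, 1, 0, 1]          # 0 = space, 1 = autos

    # Latent topics replace raw term counts, shrinking the feature space from the
    # whole vocabulary to a handful of topic proportions before classification.
    model = make_pipeline(
        CountVectorizer(stop_words='english'),
        LatentDirichletAllocation(n_components=4, random_state=0),
        GaussianNB(),        # one weak NB learner; LDABoost boosts a committee of them
    )
    model.fit(docs, labels)
    print(model.predict(["the rocket reached a stable orbit"]))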
Podlogar, Matthew C; Rogers, Megan L; Stanley, Ian H; Hom, Melanie A; Chiurliza, Bruno; Joiner, Thomas E
2017-03-20
Anxiety and depression diagnoses are associated with suicidal thoughts and behaviours. However, a categorical understanding of these associations limits insight into identifying dimensional mechanisms of suicide risk. This study investigated anxious and depressive features through a lens of suicide risk, independent of diagnosis. Latent class analysis of 97 depression, anxiety, and suicidality-related items among 616 psychiatric outpatients indicated a 3-class solution, specifically: (1) a higher suicide-risk class uniquely differentiated from both other classes by high reported levels of depression and anxious arousal; (2) a lower suicide-risk class that reported levels of anxiety sensitivity and generalised worry comparable to Class 1, but lower levels of depression and anxious arousal; and (3) a low to non-suicidal class that reported relatively low levels across all depression and anxiety measures. Discriminants of the higher suicide-risk class included borderline personality disorder; report of worthlessness, crying, and sadness; higher levels of anxious arousal and negative affect; and lower levels of positive affect. Depression and anxiety diagnoses were not discriminant between higher and lower suicide risk classes. This transdiagnostic and dimensional approach to understanding the suicidal spectrum contrasts with treating it as a depressive symptom, and illustrates the advantages of a tripartite model for conceptualising suicide risk.
Automated computation of autonomous spectral submanifolds for nonlinear modal analysis
NASA Astrophysics Data System (ADS)
Ponsioen, Sten; Pedergnana, Tiemo; Haller, George
2018-04-01
We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.
Chiral higher spin theories and self-duality
NASA Astrophysics Data System (ADS)
Ponomarev, Dmitry
2017-12-01
We study recently proposed chiral higher spin theories — cubic theories of interacting massless higher spin fields in four-dimensional flat space. We show that they are naturally associated with gauge algebras, which manifest themselves in several related ways. Firstly, the chiral higher spin equations of motion can be reformulated as the self-dual Yang-Mills equations with the associated gauge algebras instead of the usual colour gauge algebra. We also demonstrate that the chiral higher spin field equations, similarly to the self-dual Yang-Mills equations, feature an infinite algebra of hidden symmetries, which ensures their integrability. Secondly, we show that off-shell amplitudes in chiral higher spin theories satisfy the generalised BCJ relations with the usual colour structure constants replaced by the structure constants of higher spin gauge algebras. We also propose generalised double copy procedures featuring higher spin theory amplitudes. Finally, using the light-cone deformation procedure we prove that the structure of the Lagrangian that leads to all these properties is universal and follows from Lorentz invariance.
Unsupervised spike sorting based on discriminative subspace learning.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-01-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering that learns a discriminative 1-dimensional subspace for clustering in each level of the hierarchy until achieving almost unimodal distribution in the subspace. The algorithms are tested on synthetic and in-vivo data, and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in lower dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.
Dimensional assessment of personality pathology in patients with eating disorders.
Goldner, E M; Srikameswaran, S; Schroeder, M L; Livesley, W J; Birmingham, C L
1999-02-22
This study examined patients with eating disorders on personality pathology using a dimensional method. Female subjects who met DSM-IV diagnostic criteria for eating disorder (n = 136) were evaluated and compared to an age-controlled general population sample (n = 68). We assessed 18 features of personality disorder with the Dimensional Assessment of Personality Pathology - Basic Questionnaire (DAPP-BQ). Factor analysis and cluster analysis were used to derive three clusters of patients. A five-factor solution was obtained with limited intercorrelation between factors. Cluster analysis produced three clusters with the following characteristics: Cluster 1 members (constituting 49.3% of the sample and labelled 'rigid') had higher mean scores on factors denoting compulsivity and interpersonal difficulties; Cluster 2 (18.4% of the sample) showed highest scores in factors denoting psychopathy, neuroticism and impulsive features, and appeared to constitute a borderline psychopathology group; Cluster 3 (32.4% of the sample) was characterized by few differences in personality pathology in comparison to the normal population sample. Cluster membership was associated with DSM-IV diagnosis -- a large proportion of patients with anorexia nervosa were members of Cluster 1. An empirical classification of eating-disordered patients derived from dimensional assessment of personality pathology identified three groups with clinical relevance.
Classification of Microarray Data Using Kernel Fuzzy Inference System
Kumar Rath, Santanu
2014-01-01
The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features, among which useful information tends to be lost. The selected features should have high relevance to the classes and high significance within the feature set, as they determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as a feature selection method. Kernel functions are used to map original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are considered to analyze the efficiency of the classification model. From the proposed approach, it is apparent that the K-FIS model obtains results similar to those of the SVM model, which indicates that the proposed approach relies on the kernel function. PMID:27433543
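A minimal sketch of t-test gene selection followed by a kernel classifier. An RBF-kernel SVM stands in for K-FIS here, since both rely on the same kernel trick; the synthetic data, gene counts, and thresholds are illustrative assumptions.

    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.svm import SVC

    # Synthetic two-class "microarray": 60 samples x 2000 genes, a few informative.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 2000))
    y = np.repeat([0, 1], 30)
    X[y == 1, :20] += 1.5                       # 20 differentially expressed genes

    # t-test feature (gene) selection: keep the genes with the smallest p-values.
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    genes = np.argsort(p)[:50]

    # The RBF kernel maps the selected genes into a higher-dimensional feature
    # space implicitly (the kernel trick).
    clf = SVC(kernel='rbf', gamma='scale').fit(X[:, genes], y)
    print(clf.score(X[:, genes], y))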
Non-classical photon correlation in a two-dimensional photonic lattice.
Gao, Jun; Qiao, Lu-Feng; Lin, Xiao-Feng; Jiao, Zhi-Qiang; Feng, Zhen; Zhou, Zheng; Gao, Zhen-Wei; Xu, Xiao-Yun; Chen, Yuan; Tang, Hao; Jin, Xian-Min
2016-06-13
Quantum interference and quantum correlation, as two main features of quantum optics, play an essential role in quantum information applications, such as multi-particle quantum walks and boson sampling. While many experimental demonstrations have been performed in one-dimensional waveguide arrays, higher dimensions remain unexplored due to the tight requirements of manipulating and detecting photons at large scale. Here, we experimentally observe non-classical correlation of two identical photons in a fully coupled two-dimensional structure, i.e. a photonic lattice manufactured by three-dimensional femtosecond laser writing. The photon interference consists of 36 Hong-Ou-Mandel interferences and 9 bunching events. The overlap between the measured and simulated distributions is up to 0.890 ± 0.001. Clear photon correlation is observed in the two-dimensional photonic lattice. Combined with controllably engineered disorder, our results open new perspectives towards large-scale implementation of quantum simulation on integrated photonic chips.
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend this rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
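A compact sketch of the manifold-learning prediction pipeline using scikit-learn's KernelPCA: the state is mapped to a low-dimensional feature manifold, predicted there, and mapped back. Scikit-learn's learned inverse map stands in for the fixed-point iterative pre-image estimation, and the synthetic "breathing" data, lookahead, and regressor are assumptions.

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import Ridge

    # states: (n_frames, n_dims) high-dimensional respiratory states, e.g. flattened
    # level-set surfaces; a synthetic breathing-like signal is used here instead.
    t = np.arange(400) * 0.1
    states = np.sin(t)[:, None] * np.random.RandomState(0).rand(1, 500)

    # 1) Learn a low-dimensional feature manifold with kernel PCA (pre-image enabled).
    kpca = KernelPCA(n_components=3, kernel='rbf', gamma=1e-3,
                     fit_inverse_transform=True, alpha=1e-3)
    Z = kpca.fit_transform(states)

    # 2) Predict the next low-dimensional feature from the current one (a one-frame
    #    lookahead here; the paper predicts 200/600 ms ahead).
    reg = Ridge(alpha=1e-2).fit(Z[:-1], Z[1:])
    Z_next = reg.predict(Z[-1:])

    # 3) Recover the high-dimensional state via the learned inverse map, which
    #    plays the role of the fixed-point pre-image estimation in the paper.
    state_next = kpca.inverse_transform(Z_next)
    print(state_next.shape)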
Assessment of Student Engagement: An Analysis of Trends
ERIC Educational Resources Information Center
Nauffal, Diane I.
2012-01-01
Quality is a multi-dimensional concept and embraces all functions and activities of higher education (academic programs, research, and community services) in all their features and components. Traditionally quality was a measure of resources and reputation. In recent years there has been a shift in emphasis to institutional best practices such as…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low-dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable to facilitate managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
NASA Astrophysics Data System (ADS)
Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun
2018-01-01
Many methods for hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with CNNs requires an excessively large model, tremendous computation, and a complex network, and CNNs are generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN and the dimensionalities of the spectral-spatial feature maps vary with the stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on data sets of the Indian Pines and Pavia University scenes. The classification performance of DV-CNN is compared with state-of-the-art methods, which include variations of CNN as well as traditional and other deep learning methods. A performance analysis of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.
NASA Astrophysics Data System (ADS)
Kobayashi, M.; Miura, H.; Toda, H.
2015-08-01
Anisotropy of mechanical responses depending on crystallographic orientation causes inhomogeneous deformation on the mesoscopic scale (grain size scale). Investigation of the local plastic strain development is important for discussing recrystallization mechanisms, because the sites with higher local plastic strain may act as potential nucleation sites for recrystallization. Recently, high-resolution X-ray tomography, which is a non-destructive inspection method, has been utilized for observation of materials structure. In synchrotron radiation X-ray tomography, more than 10,000 microstructural features, like precipitates, dispersions, compounds and hydrogen pores, can be observed in aluminium alloys. We have proposed employing these microstructural features as marker gauges to measure local strains, and have developed a method to calculate the three-dimensional strain distribution by tracking the microstructural features. In this study, we report the development of local plastic strain as a function of the grain microstructure in an aluminium alloy by means of this three-dimensional strain measurement technique. Strongly heterogeneous strain development was observed during tensile loading to 30%. In other words, some parts of the sample deform little whereas others deform a lot, while the strain across the whole specimen remains compatible. Comparing the microstructure with the strain concentrations obtained by this method has the potential to reveal nucleation sites of recrystallization.
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This gives RFs poor accuracy when working with high-dimensional data. In addition, RFs are biased in the feature selection process, where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features for learning RFs on high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperform existing random forest methods in both accuracy and AUC.
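A compact way to see the two-stage idea (screen out uninformative features, then sample the remainder with informativeness-based weights when growing each tree) is sketched below. The ANOVA p-value screen, the 1 - p weights, and the bare scikit-learn trees are illustrative stand-ins rather than the exact statistical measures and subset partitioning used in xRF.

```python
# Sketch of the two-stage idea behind xRF: screen out uninformative features by p-value,
# then sample features for each tree with weights favoring informative ones.
# The screening test, the weights, and the tree construction are illustrative stand-ins,
# not the exact statistical measures used in xRF.
import numpy as np
from scipy.stats import f_oneway
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=500, n_informative=20, random_state=0)

# Stage 1: per-feature ANOVA p-values; keep features with p < 0.05.
pvals = np.array([f_oneway(*(X[y == c, j] for c in np.unique(y))).pvalue
                  for j in range(X.shape[1])])
kept = np.where(pvals < 0.05)[0]

# Stage 2: feature-weighting sampling -- weight kept features by 1 - p-value.
w = 1.0 - pvals[kept]
w /= w.sum()

rng = np.random.default_rng(0)
forest = []
for _ in range(50):
    feats = rng.choice(kept, size=min(30, kept.size), replace=False, p=w)
    boot = rng.integers(0, X.shape[0], X.shape[0])          # bagging sample
    tree = DecisionTreeClassifier(random_state=0).fit(X[np.ix_(boot, feats)], y[boot])
    forest.append((feats, tree))

# Majority-vote prediction over the toy ensemble.
votes = np.mean([t.predict(X[:, f]) for f, t in forest], axis=0)
print("training accuracy of the toy ensemble:", np.mean((votes > 0.5) == y))
```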
Jacobs, Richard H A H; Haak, Koen V; Thumfart, Stefan; Renken, Remco; Henson, Brian; Cornelissen, Frans W
2016-01-01
Our world is filled with texture. For the human visual system, this is an important source of information for assessing environmental and material properties. Indeed-and presumably for this reason-the human visual system has regions dedicated to processing textures. Despite their abundance and apparent relevance, only recently the relationships between texture features and high-level judgments have captured the interest of mainstream science, despite long-standing indications for such relationships. In this study, we explore such relationships, as these might be used to predict perceived texture qualities. This is relevant, not only from a psychological/neuroscience perspective, but also for more applied fields such as design, architecture, and the visual arts. In two separate experiments, observers judged various qualities of visual textures such as beauty, roughness, naturalness, elegance, and complexity. Based on factor analysis, we find that in both experiments, ~75% of the variability in the judgments could be explained by a two-dimensional space, with axes that are closely aligned to the beauty and roughness judgments. That a two-dimensional judgment space suffices to capture most of the variability in the perceived texture qualities suggests that observers use a relatively limited set of internal scales on which to base various judgments, including aesthetic ones. Finally, for both of these judgments, we determined the relationship with a large number of texture features computed for each of the texture stimuli. We find that the presence of lower spatial frequencies, oblique orientations, higher intensity variation, higher saturation, and redness correlates with higher beauty ratings. Features that captured image intensity and uniformity correlated with roughness ratings. Therefore, a number of computational texture features are predictive of these judgments. This suggests that perceived texture qualities-including the aesthetic appreciation-are sufficiently universal to be predicted-with reasonable accuracy-based on the computed feature content of the textures.
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the Kmeans clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets including two benchmark data sets and three higher dimensional data sets from the cancer genome atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. Especially, it is experimentally proved that the proposed method is more efficient for processing higher dimensional data with good robustness, stability, and superior time performance.
Secondary iris recognition method based on local energy-orientation feature
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing
2015-01-01
This paper proposes a secondary iris recognition method based on local features. First, an energy-orientation feature (EOF) is extracted from the iris by a two-dimensional Gabor filter and used in a first recognition stage based on a similarity threshold, which splits the whole iris database into two categories: a correctly recognized class and a class still to be recognized. The former is accepted, while the latter is transformed by histogram into an energy-orientation histogram feature (EOHF), which is then used in a second recognition stage based on the chi-square distance. Experiments show that, owing to its higher correct recognition rate, the proposed method compares favorably with related iris recognition algorithms in both efficiency and effectiveness.
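The feature construction can be illustrated with a short sketch: orientation-wise Gabor energies are pooled into an energy-orientation histogram and two irises are compared with the chi-square distance. The filter frequency, the number of orientations, and the histogram bin count are assumptions for illustration; the paper's exact EOF/EOHF definitions may differ.

```python
# Sketch: energy-orientation histogram from 2-D Gabor responses plus chi-square matching.
# The number of orientations, the filter frequency, and the bin count are illustrative
# assumptions; the paper's exact EOF/EOHF construction may differ.
import numpy as np
from skimage.filters import gabor

def eohf(iris_img, frequency=0.2, n_orient=8, n_bins=16):
    """Energy-orientation histogram feature of a normalized iris image."""
    energies = []
    for k in range(n_orient):
        real, imag = gabor(iris_img, frequency=frequency, theta=k * np.pi / n_orient)
        energies.append(real ** 2 + imag ** 2)            # per-orientation energy map
    energies = np.stack(energies)                         # (n_orient, H, W)
    dominant = energies.argmax(axis=0)                    # per-pixel dominant orientation
    hist, _ = np.histogram(dominant, bins=n_bins, range=(0, n_orient), density=True)
    return hist

def chi_square(h1, h2, eps=1e-12):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(0)
a, b = rng.random((64, 256)), rng.random((64, 256))       # stand-ins for normalized irises
print("chi-square distance:", chi_square(eohf(a), eohf(b)))
```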
A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.
Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho
2014-10-01
Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These feature selection methods have one shortcoming thus far: they only consider feature-to-class relations of the 1:1 or n:1 type. However, because one miRNA may influence more than one type of cancer, such miRNAs tend to be ranked low by traditional feature selection methods and are removed most of the time. In view of the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to build the cancer classification. Then, we used the chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs to perform cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy compared with using only high-ranking miRNAs from traditional feature selection methods. Our results demonstrate that the m:n feature subset captures the positive effect of low-ranking miRNAs in cancer classification.
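The combination of high- and low-ranking features can be prototyped along the following lines. The two univariate rankers, the sizes of the "high" and "low" subsets, and the linear SVM with cross-validation are illustrative choices only; the study itself combines CFS, chi-square, information gain, gain ratio and Pearson correlation and evaluates several classifiers.

```python
# Sketch of combining high- and low-ranking miRNA features before classification.
# The rankers, the split into "high" vs "low", and the classifier are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=300, n_informative=15, random_state=1)
X = X - X.min()                      # chi2 requires non-negative values

# Two univariate rankings.
rank_chi2 = np.argsort(chi2(X, y)[0])[::-1]
rank_mi = np.argsort(mutual_info_classif(X, y, random_state=1))[::-1]

high = set(rank_chi2[:20]) | set(rank_mi[:20])            # classic high-ranking subset
low = set(rank_chi2[-150:]) & set(rank_mi[-150:])         # low-ranking but shared features
combined = sorted(high | set(list(low)[:20]))             # m:n-style combined subset

for name, cols in [("high-ranking only", sorted(high)), ("high + low ranking", combined)]:
    acc = cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```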
NASA Astrophysics Data System (ADS)
Ma, Yun-Ming; Wang, Tie-Jun
2017-10-01
Higher-dimensional quantum systems are of great interest owing to the outstanding features they exhibit in the implementation of novel fundamental tests of nature and in various quantum information tasks. High-dimensional quantum logic gates are key elements in scalable quantum computation and quantum communication. In this paper, we propose a scheme to implement a controlled-phase gate between a 2N-dimensional photon and N three-level artificial atoms. This high-dimensional controlled-phase gate can serve as a crucial component of high-capacity, long-distance quantum communication. We use high-dimensional Bell state analysis as an example to show the application of this device. Estimates of the system requirements indicate that our protocol is realizable with existing or near-future technologies. This scheme is ideally suited to solid-state integrated optical approaches to quantum information processing, and it can be applied to various systems, such as superconducting qubits coupled to a resonator or nitrogen-vacancy centers coupled to photonic-band-gap structures.
Online 3D Ear Recognition by Combining Global and Local Features.
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.
Grid point extraction and coding for structured light system
NASA Astrophysics Data System (ADS)
Song, Zhan; Chung, Ronald
2011-09-01
A structured light system simplifies three-dimensional reconstruction by projecting a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points, together with the pseudorandomness of the projected pattern, can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements and that localizes the grid points with subpixel accuracy. Extensive experiments illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with existing pseudorandomly encoded structured light systems.
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
Online feature selection with streaming features.
Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan
2013-05-01
We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
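The streaming setting can be made concrete with a deliberately simplified sketch: features arrive one at a time, and a feature is kept only if it is still correlated with the part of the target that the already-selected set does not explain. This residual-correlation test is a stand-in for the conditional-independence relevance and redundancy tests used by OSFS and Fast-OSFS; the data, the threshold, and the linear-regression residualization are assumptions for illustration.

```python
# Simplified sketch of streaming feature selection: features arrive one at a time and a
# feature is kept only if it is still correlated with the part of the target not already
# explained by the selected set. This is a stand-in for the OSFS/Fast-OSFS tests.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 400
X_full = rng.normal(size=(n, 60))
y = X_full[:, 3] + 0.5 * X_full[:, 17] + 0.1 * rng.normal(size=n)   # two relevant features

selected = []
for j in range(X_full.shape[1]):                 # feature j "streams in"
    x_j = X_full[:, j]
    if selected:                                 # remove what the selected set already explains
        model = LinearRegression().fit(X_full[:, selected], y)
        resid = y - model.predict(X_full[:, selected])
    else:
        resid = y
    r, p = pearsonr(x_j, resid)
    if p < 0.01:                                 # keep strongly relevant, non-redundant features
        selected.append(j)

print("selected features:", selected)            # expected to recover features 3 and 17
```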
Fabrication and characterization of three-dimensional carbon electrodes for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Teixidor, Genis Turon; Zaouk, Rabih B.; Park, Benjamin Y.; Madou, Marc J.
This paper presents fabrication and testing results of three-dimensional carbon anodes for lithium-ion batteries, which are fabricated through the pyrolysis of lithographically patterned epoxy resins. This technique, known as Carbon-MEMS, provides great flexibility and unprecedented dimensional control in shaping carbon microstructures. Variations in the pattern density and in the pyrolysis conditions result in anodes with different specific and gravimetric capacities, with a three to six times increase in specific capacity with respect to current thin-film battery technology. Newly designed cross-shaped Carbon-MEMS arrays have a much higher mechanical robustness (as given by their moment of inertia) than the traditionally used cylindrical posts, but the gravimetric analysis suggests that new designs with thinner features are required for better carbon utilization. Pyrolysis at higher temperatures and with slower ramp-up schedules reduces the irreversible capacity of the carbon electrodes. We also analyze the effect of adding Meso-Carbon Micro-Bead (MCMB) particles on the reversible and irreversible capacities of new three-dimensional, hybrid electrodes. This combination results in a slight increase in reversible capacity and a large increase in the irreversible capacity of the carbon electrodes, mostly due to the incomplete attachment of the MCMB particles.
Sublimation-Condensation of Multiscale Tellurium Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, Brian J.; Johnson, Bradley R.; Schaef, Herbert T.
This paper presents a simple technique for making tellurium (Te) nano- and microtubes of widely varying dimensions with Multi-Scale Processing (MSP). In this process, the Te metal is placed in a reaction vessel (e.g., borosilicate or fused quartz), the vessel is evacuated and then sealed under vacuum with a torch. The vessel is heat-treated in a temperature gradient such that a portion of the tube, which can also contain an additional substrate, is under a decreasing temperature gradient. Scanning and transmission electron microscopies have shown that multifaceted crystalline tubes are formed extending from the nano- up to the micron scale, with diameters ranging from 51.2 ± 5.9 to 1042 ± 134 nm between temperatures of 157 and 224 °C, respectively. One-dimensional tubular features are seen at the lower temperatures, while three-dimensional features are seen at the higher temperatures. These features have been characterized with X-ray diffraction and found to be trigonal Te with space group P3121. Our results show that the MSP can adequately be described using a simple Arrhenius equation.
Dynamic Dimensionality Selection for Bayesian Classifier Ensembles
2015-03-19
This report describes discriminative learning of weights in an otherwise generatively learned naive Bayes classifier (WANBIA-C), which is very competitive with logistic regression but much more ... Keywords: Bayesian classifier, generative learning, discriminative learning, naive Bayes, feature selection, logistic regression, higher-order attribute independence.
NASA Astrophysics Data System (ADS)
Wei, Tzu-Chieh; Huang, Ching-Yu
2017-09-01
Recent progress in the characterization of gapped quantum phases has also triggered the search for a universal resource for quantum computation in symmetric gapped phases. Prior works in one dimension suggest that it is a feature more common than previously thought, in that nontrivial one-dimensional symmetry-protected topological (SPT) phases provide quantum computational power characterized by the algebraic structure defining these phases. Progress in two and higher dimensions so far has been limited to special fixed points. Here we provide two families of two-dimensional Z2 symmetric wave functions such that there exists a finite region of the parameter in the SPT phases that supports universal quantum computation. The quantum computational power appears to lose its universality at the boundary between the SPT and the symmetry-breaking phases.
Graph theory approach to the eigenvalue problem of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.; Bainum, P. M.
1981-01-01
Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphical interpretation of the determinant of a matrix is employed to reduce a higher dimensional matrix into combinations of smaller dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
NASA Astrophysics Data System (ADS)
Haitjema, Henk M.
1985-10-01
A technique is presented to incorporate three-dimensional flow in a Dupuit-Forchheimer model. The method is based on superposition of approximate analytic solutions to both two- and three-dimensional flow features in a confined aquifer of infinite extent. Three-dimensional solutions are used in the domain of interest, while farfield conditions are represented by two-dimensional solutions. Approximate three-dimensional solutions have been derived for a partially penetrating well and a shallow creek. Each of these solutions satisfies the condition that no flow occurs across the confining layers of the aquifer. Because of this condition, the flow at some distance of a three-dimensional feature becomes nearly horizontal. Consequently, remotely from a three-dimensional feature, its three-dimensional solution is replaced by a corresponding two-dimensional one. The latter solution is trivial as compared to its three-dimensional counterpart, and its use greatly enhances the computational efficiency of the model. As an example, the flow is modeled between a partially penetrating well and a shallow creek that occur in a regional aquifer system.
High dimensional feature reduction via projection pursuit
NASA Technical Reports Server (NTRS)
Jimenez, Luis; Landgrebe, David
1994-01-01
The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of that technology is the AVIRIS system, which collects image data in 220 bands. As a result, new algorithms must be developed in order to analyze the more complex data effectively. Data in a high dimensional space present a substantial challenge, since intuitive concepts valid in a 2-3 dimensional space do not necessarily apply in higher dimensional spaces. For example, high dimensional space is mostly empty. This results from the concentration of data in the corners of hypercubes. Other examples may be cited. Such observations suggest the need to project data to a subspace of a much lower dimension, on a problem-specific basis, in such a manner that information is not lost. Projection Pursuit is a technique that will accomplish such a goal. Since it processes data in lower dimensions, it should avoid many of the difficulties of high dimensional spaces. In this paper, we begin the investigation of some of the properties of Projection Pursuit for this purpose.
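A minimal projection-pursuit sketch is given below: a one-dimensional projection is sought that maximizes a projection index. The index used here (absolute excess kurtosis, a common non-Gaussianity measure), the toy data, and the Nelder-Mead search with restarts are illustrative assumptions; the index relevant to the paper would be tailored to preserving class separability in remote-sensing data.

```python
# Minimal projection-pursuit sketch: search for a 1-D projection maximizing an index.
# The |excess kurtosis| index, the toy data, and the optimizer are illustrative choices.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
X[:, 0] += np.where(rng.random(n) > 0.5, 3.0, -3.0)      # structure hidden in dimension 0

def neg_index(w):
    w = w / np.linalg.norm(w)                             # keep the projection unit-length
    return -abs(kurtosis(X @ w))                          # maximize |excess kurtosis|

# A few random restarts of a derivative-free search.
best = min((minimize(neg_index, rng.normal(size=d), method="Nelder-Mead",
                     options={"maxiter": 5000}) for _ in range(5)),
           key=lambda r: r.fun)
w = best.x / np.linalg.norm(best.x)
print("weight on the structured dimension:", abs(w[0]))   # usually dominates in this toy case
```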
Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding
Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping
2015-01-01
Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually distribute on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach based on a statistical locally linear embedding (S-LLE) algorithm, which is an extension of LLE that exploits the fault class label information, is proposed. The fault diagnosis approach first extracts high-dimensional feature vectors from the vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, then extracts the intrinsic manifold features and translates the complex mode space into a salient low-dimensional feature space by the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach obviously improves the classification performance of fault pattern recognition and outperforms the other traditional approaches.
Kopp, Bruno; Tabeling, Sandra; Moschner, Carsten; Wessel, Karl
2007-08-17
Decision-making is a fundamental capacity which is crucial to many higher-order psychological functions. We recorded event-related potentials (ERPs) during a visual target-identification task that required go-nogo choices. Targets were identified on the basis of cross-dimensional conjunctions of particular colors and forms. Color discriminability was manipulated in three conditions to determine the effects of color distinctiveness on component processes of decision-making. Target identification was accompanied by the emergence of prefrontal P2a and P3b. Selection negativity (SN) revealed that target-compatible features captured attention more than target-incompatible features, suggesting that intra-dimensional attentional capture was goal-contingent. No changes of cross-dimensional selection priorities were measurable when color discriminability was altered. Peak latencies of the color-related SN provided a chronometric measure of the duration of attention-related neural processing. ERPs recorded over the frontocentral scalp (N2c, P3a) revealed that color-overlap distractors, more than form-overlap distractors, required additional late selection. The need for additional response selection induced by color-overlap distractors was severely reduced when color discriminability decreased. We propose a simple model of cross-dimensional perceptual decision-making. The temporal synchrony of separate color-related and form-related choices determines whether or not distractor processing includes post-perceptual stages. ERP measures contribute to a comprehensive explanation of the temporal dynamics of component processes of perceptual decision-making.
High Dimensional Classification Using Features Annealed Independence Rules.
Fan, Jianqing; Fan, Yingying
2008-01-01
Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is still poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
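The FAIR recipe lends itself to a very short sketch: rank features by the two-sample t-statistic and apply an independence (diagonal-covariance, naive-Bayes-like) rule on only the top-m features. The fixed choice m = 30 and the simulated data are assumptions; FAIR itself picks m from an upper bound on the classification error.

```python
# Sketch of Features Annealed Independence Rules: two-sample t-statistic ranking plus an
# independence (diagonal-covariance) rule on the top-m features. m = 30 is illustrative.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=2000, n_informative=25,
                           shuffle=False, random_state=0)

t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(np.abs(t))[::-1][:30]                    # m most discriminative features

mu0, mu1 = X[y == 0][:, top].mean(0), X[y == 1][:, top].mean(0)
s2 = X[:, top].var(0) + 1e-12                             # diagonal variance estimates

def independence_rule(x):
    # Assign to the class whose centroid is closer under the diagonal metric.
    d0 = np.sum((x - mu0) ** 2 / s2)
    d1 = np.sum((x - mu1) ** 2 / s2)
    return int(d1 < d0)

pred = np.array([independence_rule(x) for x in X[:, top]])
print("training accuracy of the toy FAIR-style rule:", np.mean(pred == y))
```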
NASA Astrophysics Data System (ADS)
Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina
2014-03-01
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but we also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From these results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
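The pipeline (sparse-code the radiomic features with a small dictionary, then feed the codes to a logistic regression classifier) can be sketched as follows. scikit-learn's DictionaryLearning is used here as a stand-in for K-SVD, and the feature matrix and labels are random placeholders for the DCE-MRI data; only K = 4 and L = 2 are taken from the abstract.

```python
# Sketch: sparse coding with a 4-atom dictionary and at most 2 non-zero coefficients,
# followed by logistic regression. DictionaryLearning stands in for K-SVD; the data are
# random placeholders for the kinetic/textural/morphologic DCE-MRI features.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(60, 80))                     # 60 tumors x 80 radiomic features
risk = rng.integers(0, 2, 60)                            # placeholder high/low recurrence labels

dico = DictionaryLearning(n_components=4, transform_algorithm="omp",
                          transform_n_nonzero_coefs=2, random_state=0, max_iter=200)
codes = dico.fit_transform(features)                     # sparse K-SVD-style representation

auc = cross_val_score(LogisticRegression(max_iter=1000), codes, risk,
                      cv=5, scoring="roc_auc").mean()
print("cross-validated AUC on placeholder data:", round(auc, 3))
```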
Advancing three-dimensional MEMS by complimentary laser micro manufacturing
NASA Astrophysics Data System (ADS)
Palmer, Jeremy A.; Williams, John D.; Lemp, Tom; Lehecka, Tom M.; Medina, Francisco; Wicker, Ryan B.
2006-01-01
This paper describes improvements that enable engineers to create three-dimensional MEMS in a variety of materials. It also provides a means for selectively adding three-dimensional, high aspect ratio features to pre-existing PMMA micro molds for subsequent LIGA processing. This complimentary method involves in situ construction of three-dimensional micro molds in a stand-alone configuration or directly adjacent to features formed by x-ray lithography. Three-dimensional micro molds are created by micro stereolithography (MSL), an additive rapid prototyping technology. Alternatively, three-dimensional features may be added by direct femtosecond laser micro machining. Parameters for optimal femtosecond laser micro machining of PMMA at 800 nanometers are presented. The technical discussion also includes strategies for enhancements in the context of material selection and post-process surface finish. This approach may lead to practical, cost-effective 3-D MEMS with the surface finish and throughput advantages of x-ray lithography. Accurate three-dimensional metal microstructures are demonstrated. Challenges remain in process planning for micro stereolithography and development of buried features following femtosecond laser micro machining.
ERIC Educational Resources Information Center
Chan, Louis K. H.; Hayward, William G.
2009-01-01
In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed…
Broken Ergodicity in Two-Dimensional Homogeneous Magnetohydrodynamic Turbulence
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2010-01-01
Two-dimensional (2-D) homogeneous magnetohydrodynamic (MHD) turbulence has many of the same qualitative features as three-dimensional (3-D) homogeneous MHD turbulence. These features include several ideal invariants, along with the phenomenon of broken ergodicity. Broken ergodicity appears when certain modes act like random variables with mean values that are large compared to their standard deviations, indicating a coherent structure or dynamo. Recently, the origin of broken ergodicity in 3-D MHD turbulence that is manifest in the lowest wavenumbers was explained. Here, a detailed description of the origins of broken ergodicity in 2-D MHD turbulence is presented. It will be seen that broken ergodicity in ideal 2-D MHD turbulence can be manifest in the lowest wavenumbers of a finite numerical model for certain initial conditions or in the highest wavenumbers for another set of initial conditions. The origins of broken ergodicity in ideal 2-D homogeneous MHD turbulence are found through an eigenanalysis of the covariance matrices of the modal probability density functions. It will also be shown that when the lowest wavenumber magnetic field becomes quasi-stationary, the higher wavenumber modes can propagate as Alfven waves on these almost static large-scale magnetic structures.
Multidimensional brain activity dictated by winner-take-all mechanisms.
Tozzi, Arturo; Peters, James F
2018-06-21
A novel demon-based architecture is introduced to elucidate brain functions such as pattern recognition during human perception and mental interpretation of visual scenes. Starting from the topological concepts of invariance and persistence, we introduce a Selfridge pandemonium variant of brain activity that takes into account a novel feature, namely, demons that recognize short straight-line segments, curved lines and scene shapes, such as shape interior, density and texture. Low-level representations of objects can be mapped to higher-level views (our mental interpretations): a series of transformations can be gradually applied to a pattern in a visual scene, without affecting its invariant properties. This makes it possible to construct a symbolic multi-dimensional representation of the environment. These representations can be projected continuously to an object that we have seen and continue to see, thanks to the mapping from shapes in our memory to shapes in Euclidean space. Although perceived shapes are 3-dimensional (plus time), the evaluation of shape features (volume, color, contour, closeness, texture, and so on) leads to n-dimensional brain landscapes. Here we discuss the advantages of our parallel, hierarchical model in pattern recognition, computer vision and biological nervous system's evolution. Copyright © 2018 Elsevier B.V. All rights reserved.
Sakado, K; Sakado, M; Seki, T; Kuwabara, H; Kojima, M; Sato, T; Someya, T
2001-06-01
Although a number of studies have reported on the association between obsessional personality features as measured by the Munich Personality Test (MPT) "Rigidity" scale and depression, there has been no examination of these relationships in a non-clinical sample. The dimensional scores on the MPT were compared between subjects with and without lifetime depression, using a sample of employed Japanese adults. The odds ratio for suffering from lifetime depression was estimated by multiple logistic regression analysis. To diagnose a lifetime history of depression, the Inventory to Diagnose Depression, Lifetime version (IDDL) was used. The subjects with lifetime depression scored significantly higher on the "Rigidity" scale than the subjects without lifetime depression. In our logistic regression analysis, three risk factors were identified as each independently increasing a person's risk for suffering from lifetime depression: higher levels of "Rigidity", being of the female gender, and suffering from current depressive symptoms. The MPT "Rigidity" scale is a sensitive measure of personality features that occur with depression.
Observation of Two-Dimensional Localized Jones-Roberts Solitons in Bose-Einstein Condensates
NASA Astrophysics Data System (ADS)
Meyer, Nadine; Proud, Harry; Perea-Ortiz, Marisa; O'Neale, Charlotte; Baumert, Mathis; Holynski, Michael; Kronjäger, Jochen; Barontini, Giovanni; Bongs, Kai
2017-10-01
Jones-Roberts solitons are the only known class of stable dark solitonic solutions of the nonlinear Schrödinger equation in two and three dimensions. They feature a distinctive elongated elliptical shape that allows them to travel without change of form. By imprinting a triangular phase pattern, we experimentally generate two-dimensional Jones-Roberts solitons in a three-dimensional atomic Bose-Einstein condensate. We monitor their dynamics, observing that this kind of soliton is indeed not affected by dynamic (snaking) or thermodynamic instabilities, that instead make other classes of dark solitons unstable in dimensions higher than one. Our results confirm the prediction that Jones-Roberts solitons are stable solutions of the nonlinear Schrödinger equation and promote them for applications beyond matter wave physics, like energy and information transport in noisy and inhomogeneous environments.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
Analytic study of solutions for a (3 + 1) -dimensional generalized KP equation
NASA Astrophysics Data System (ADS)
Gao, Hui; Cheng, Wenguang; Xu, Tianzhou; Wang, Gangwei
2018-03-01
The (3 + 1)-dimensional generalized KP (gKP) equation is an important nonlinear partial differential equation in theoretical and mathematical physics which can be used to describe nonlinear wave motion. Through the Hirota bilinear method, one-soliton, two-soliton and N-soliton solutions are derived via symbolic computation. Two classes of lump solutions, rationally localized in all directions in space, to the dimensionally reduced cases in (2 + 1) dimensions, are constructed by using a direct method based on the Hirota bilinear form of the equation. This implies that we can derive the lump solutions of the reduced gKP equation from positive quadratic function solutions to the aforementioned bilinear equation. Meanwhile, we obtain interaction solutions between a lump and a kink of the gKP equation. The lump appears from a kink and is swallowed by it as time changes. This work offers a possibility to enrich the variety of dynamical features of solutions of higher-dimensional nonlinear evolution equations.
Who Needs 3D When the Universe Is Flat?
ERIC Educational Resources Information Center
Eriksson, Urban; Linder, Cedric; Airey, John; Redfors, Andreas
2014-01-01
An overlooked feature in astronomy education is the need for students to learn to extrapolate three-dimensionality and the challenges that this may involve. Discerning critical features in the night sky that are embedded in dimensionality is a long-term learning process. Several articles have addressed the usefulness of three-dimensional (3D)…
ERIC Educational Resources Information Center
Brown, Timothy A.; Barlow, David H.
2009-01-01
A wealth of evidence attests to the extensive current and lifetime diagnostic comorbidity of the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed., "DSM-IV") anxiety and mood disorders. Research has shown that the considerable cross-sectional covariation of "DSM-IV" emotional disorders is accounted for by common higher order…
Image Recommendation Algorithm Using Feature-Based Collaborative Filtering
NASA Astrophysics Data System (ADS)
Kim, Deok-Hwan
As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preferences by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.
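The neighbor-selection step can be illustrated with a short sketch: each user's recent purchases are summarized as clusters in visual-feature space and neighbors are chosen by an inter-cluster distance. The placeholder feature vectors, the number of clusters, and the minimum-centroid-distance measure are illustrative assumptions rather than the paper's exact definitions.

```python
# Sketch of feature-based collaborative filtering: purchased images become feature
# clusters per user, and neighbours are chosen by an inter-cluster distance. Feature
# vectors, cluster count, and the distance definition are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_users, feat_dim = 20, 32
# Placeholder visual features of the images each user purchased (10 images per user).
purchases = {u: rng.normal(loc=rng.normal(size=feat_dim), size=(10, feat_dim))
             for u in range(n_users)}

def feature_clusters(feats, k=3):
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats).cluster_centers_

centers = {u: feature_clusters(f) for u, f in purchases.items()}

def inter_cluster_distance(c1, c2):
    # Smallest distance between any pair of cluster centroids of two users.
    return min(np.linalg.norm(a - b) for a in c1 for b in c2)

target = 0
neighbours = sorted((u for u in centers if u != target),
                    key=lambda u: inter_cluster_distance(centers[target], centers[u]))[:3]
print("nearest neighbours of user 0 in visual-feature space:", neighbours)
# New images purchased by these neighbours (even never-rated ones) can then be recommended.
```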
Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.
Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa
2017-03-01
Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. It is therefore important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar, previously diagnosed cases from databases. However, this mechanism depends on extracting relevant image features in order to obtain high efficiency. The goal of this paper is to select the 3D image features of margin sharpness and texture that are relevant to the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision on similar nodule retrieval than all 48 extracted features. The 83% reduction of the feature space dimensionality yielded higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
Accuracy versus convergence rates for a three dimensional multistage Euler code
NASA Technical Reports Server (NTRS)
Turkel, Eli
1988-01-01
Using a central difference scheme, it is necessary to add an artificial viscosity in order to reach a steady state. This viscosity usually consists of a linear fourth difference to eliminate odd-even oscillations and a nonlinear second difference to suppress oscillations in the neighborhood of steep gradients. There are free constants in these differences. As one increases the artificial viscosity, the high modes are dissipated more and the scheme converges more rapidly. However, this higher level of viscosity smooths the shocks and eliminates other features of the flow. Thus, there is a conflict between the requirements of accuracy and efficiency. Examples are presented for a variety of three-dimensional inviscid solutions over isolated wings.
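The blended dissipation described here is commonly implemented along the lines of the Jameson-Schmidt-Turkel (JST) scheme: a nonlinear second difference switched on by a pressure sensor near shocks plus a linear fourth difference elsewhere, each with a free constant. The sketch below is a minimal one-dimensional illustration of that idea, not the three-dimensional multistage code of the paper; the constants k2 and k4, the sensor, and the periodic step profile are illustrative assumptions.

```python
# Minimal 1-D sketch of blended artificial dissipation in the spirit of the JST scheme:
# a shock-switched second difference plus a background fourth difference.
# The constants k2, k4 and the periodic 1-D setting are illustrative choices.
import numpy as np

def artificial_dissipation(u, p, k2=0.5, k4=1.0 / 32.0):
    """Return the dissipative flux divergence D(u) for a periodic 1-D field."""
    # Pressure sensor nu_i = |p_{i+1} - 2 p_i + p_{i-1}| / (p_{i+1} + 2 p_i + p_{i-1}).
    nu = np.abs(np.roll(p, -1) - 2 * p + np.roll(p, 1)) / (np.roll(p, -1) + 2 * p + np.roll(p, 1))
    eps2 = k2 * np.maximum(np.roll(nu, -1), nu)           # second-difference coefficient
    eps4 = np.maximum(0.0, k4 - eps2)                     # fourth difference switched off at shocks
    d1 = np.roll(u, -1) - u                               # first difference at interface i+1/2
    d3 = np.roll(u, -2) - 3 * np.roll(u, -1) + 3 * u - np.roll(u, 1)   # third difference
    flux = eps2 * d1 - eps4 * d3                          # dissipative flux at i+1/2
    return flux - np.roll(flux, 1)                        # its divergence, added to the residual

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.where(x < np.pi, 1.0, 0.1)                         # step profile standing in for a shock
print("max dissipation near the shock:", np.abs(artificial_dissipation(u, u)).max())
```

Raising k2 and k4 damps the high modes faster and speeds convergence, but also smears the shock, which is exactly the accuracy-versus-efficiency conflict discussed in the abstract.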
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, some approach to feature selection or feature dimensionality reduction should be considered for improving the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different classifier structures. They are evaluated by comparison with baseline methods using sparse representation of features or without feature selection. The statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated using the test patterns of each approach, demonstrates some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.
Gardner Transition in Physical Dimensions
NASA Astrophysics Data System (ADS)
Hicks, C. L.; Wheatley, M. J.; Godfrey, M. J.; Moore, M. A.
2018-06-01
The Gardner transition is the transition that at mean-field level separates a stable glass phase from a marginally stable phase. This transition has similarities with the de Almeida-Thouless transition of spin glasses. We have studied a well-understood problem, that of disks moving in a narrow channel, which shows many features usually associated with the Gardner transition. We show that some of these features are artifacts that arise when a disk escapes its local cage during the quench to higher densities. There is evidence that the Gardner transition becomes an avoided transition, in that the correlation length becomes quite large, of order 15 particle diameters, even in our quasi-one-dimensional system.
NASA Astrophysics Data System (ADS)
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under some suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with the discrete fast sine/cosine transform can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, although the schemes to be analysed lend themselves equally to the higher-dimensional case. Numerical simulations are implemented and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
2009-01-01
Background: The characterisation, or binning, of metagenome fragments is an important first step for further downstream analysis of microbial consortia. Here, we propose a one-dimensional signature, OFDEG, derived from the oligonucleotide frequency profile of a DNA sequence, and show that it is possible to obtain a meaningful phylogenetic signal for relatively short DNA sequences. The one-dimensional signal is essentially a compact representation of higher dimensional feature spaces of greater complexity and is intended to improve on the tetranucleotide frequency feature space preferred by current compositional binning methods. Results: We compare the fidelity of OFDEG against tetranucleotide frequency in both an unsupervised and a semi-supervised setting on simulated metagenome benchmark data. Four tests were conducted using assembler output of Arachne and phrap, and for each, performance was evaluated on contigs which are greater than or equal to 8 kbp in length and contigs which are composed of at least 10 reads. Using G-C content in conjunction with OFDEG gave an average accuracy of 96.75% (semi-supervised) and 95.19% (unsupervised), versus 94.25% (semi-supervised) and 82.35% (unsupervised) for tetranucleotide frequency. Conclusion: We have presented an observation of an alternative characteristic of DNA sequences. The proposed feature representation has proven to be more beneficial than the existing tetranucleotide frequency space for the metagenome binning problem. We do note, however, that our observation of OFDEG deserves further analysis and investigation. Unsupervised clustering revealed that OFDEG-related features performed better than standard tetranucleotide frequency in representing a relevant organism-specific signal. Further improvement in binning accuracy is given by semi-supervised classification using OFDEG. The emphasis on a feature-driven, bottom-up approach to the problem of binning reveals promising avenues for future development of techniques to characterise short environmental sequences without bias toward cultivable organisms.
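For concreteness, the baseline feature space referred to above (the tetranucleotide frequency profile of a DNA fragment) can be computed as in the sketch below. OFDEG itself is a one-dimensional signal derived from such oligonucleotide frequency profiles; its exact construction is not reproduced here, and the synthetic fragment is a placeholder for an assembled contig.

```python
# Sketch of the baseline compositional feature: the tetranucleotide frequency profile of
# a DNA fragment. OFDEG is derived from such oligonucleotide frequency profiles; its
# exact construction is not reproduced here.
from collections import Counter
from itertools import product

def tetranucleotide_frequency(seq):
    """Normalized counts of all 4-mers (canonical A/C/G/T alphabet only) in a sequence."""
    seq = seq.upper()
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]        # 256 possible 4-mers
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = sum(counts[k] for k in kmers) or 1
    return [counts[k] / total for k in kmers]

fragment = "ACGT" * 2000                                           # stand-in for an 8 kbp contig
profile = tetranucleotide_frequency(fragment)
print("feature dimension:", len(profile), "- sum of frequencies:", round(sum(profile), 3))
```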
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-10-21
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
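The FSS-t-SNE idea (score the features, keep a high-scoring subset, and only then run t-SNE for two-dimensional visualization) can be sketched as follows. A Fisher-type score stands in for the paper's feature subset score criterion, and the data are random placeholders for the multi-sensor diesel-engine features.

```python
# Sketch of feature-subset scoring followed by t-SNE visualization. The Fisher-type score
# and the synthetic data are illustrative stand-ins for the paper's criterion and signals.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_per_class, n_feat = 100, 40
labels = np.repeat([0, 1, 2], n_per_class)
X = rng.normal(size=(labels.size, n_feat))
X[:, :5] += labels[:, None]                       # only the first 5 features carry class structure

def fisher_score(X, y):
    overall = X.mean(0)
    num = sum((y == c).sum() * (X[y == c].mean(0) - overall) ** 2 for c in np.unique(y))
    den = sum((y == c).sum() * X[y == c].var(0) for c in np.unique(y)) + 1e-12
    return num / den

scores = fisher_score(X, labels)
subset = np.argsort(scores)[::-1][:8]             # keep the highest-scoring features

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X[:, subset])
print("2-D embedding shape:", embedding.shape)    # ready for a scatter plot colored by label
```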
Arif, Muhammad
2012-06-01
In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high-dimensional space to a two-dimensional space while retaining important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap of features of different classes. Separable data points of different classes will also be visible on the plot; these can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
Barbosa, Daniel C; Roupar, Dalila B; Ramos, Jaime C; Tavares, Adriano C; Lima, Carlos S
2012-01-11
Wireless capsule endoscopy has been introduced as an innovative, non-invasive diagnostic technique for evaluation of the gastrointestinal tract, reaching places that conventional endoscopy is unable to reach. However, the output of this technique is an 8-hour video, whose analysis by the expert physician is very time consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate capsule endoscopy (CE) exams faster and more accurately is an important technical challenge and an excellent economic opportunity. The set of features proposed in this paper to code textural information is based on statistical modeling of second order textural measures extracted from co-occurrence matrices. To cope with both joint and marginal non-Gaussianity of second order textural measures, higher order moments are used. These statistical moments are taken from the two-dimensional color-scale feature space, where two different scales are considered. Second and higher order moments of textural measures are computed from co-occurrence matrices of images synthesized by applying the inverse wavelet transform to the wavelet coefficients of only the selected scales for the three color channels. The dimensionality of the data is reduced by using Principal Component Analysis. The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performances of 93.1% specificity and 93.9% sensitivity are achieved on real data. These promising results open the path towards a deeper study regarding the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
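The selection-plus-compression strategy mentioned above can be illustrated with a small sketch: score each dimension of a high-dimensional vector, keep a small fraction, and binarize the survivors by sign (1-bit quantization). The variance-ratio score, the 4% retention rate, and the random stand-in data are illustrative assumptions; they are not the paper's selection criterion or its FV/VLAD pipelines.

```python
# Sketch: select a subset of dimensions of a high-dimensional vector (e.g. FV/VLAD)
# and binarize the survivors with 1-bit (sign) quantization. The per-dimension
# importance score here is a simple supervised variance ratio, used for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4096))          # stand-in for FV/VLAD vectors
y = rng.integers(0, 10, size=500)

# importance: how much each dimension's class means differ relative to its spread
class_means = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
importance = class_means.var(axis=0) / (X.var(axis=0) + 1e-12)

keep = np.argsort(importance)[::-1][: 4096 // 25]   # keep roughly 4% of the dimensions
X_selected = X[:, keep]
X_binary = (X_selected > 0).astype(np.uint8)        # 1-bit quantization: sign only
print(X_binary.shape, X_binary.nbytes, "bytes")
```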
Complex network view of evolving manifolds
NASA Astrophysics Data System (ADS)
da Silva, Diamantino C.; Bianconi, Ginestra; da Costa, Rui A.; Dorogovtsev, Sergey N.; Mendes, José F. F.
2018-03-01
We study complex networks formed by triangulations and higher-dimensional simplicial complexes representing closed evolving manifolds. In particular, for triangulations, the set of possible transformations of these networks is restricted by the condition that at each step, all the faces must be triangles. Stochastic application of these operations leads to random networks with different architectures. We perform extensive numerical simulations and explore the geometries of growing and equilibrium complex networks generated by these transformations and their local structural properties. This characterization includes the Hausdorff and spectral dimensions of the resulting networks, their degree distributions, and various structural correlations. Our results reveal a rich zoo of architectures and geometries of these networks, some of which appear to be small worlds while others are finite dimensional with Hausdorff dimension equal to or higher than the original dimensionality of their simplices. The range of spectral dimensions of the evolving triangulations turns out to be from about 1.4 to infinity. Our models include simplicial complexes representing manifolds with evolving topologies, for example, an h-holed torus with a progressively growing number of holes. This evolving graph demonstrates features of a small-world network and has a particularly heavy-tailed degree distribution.
Improved method for predicting protein fold patterns with ensemble classifiers.
Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C
2012-01-27
Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
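A minimal sketch of the ensemble-classifier idea follows, using scikit-learn's soft-voting combination of a few standard base learners. The choice of base learners, the random placeholder data, and the ten placeholder fold labels are assumptions; the 188-dimensional composition and physicochemical features themselves are not reproduced here.

```python
# Sketch: combine several base classifiers into a soft-voting ensemble over a
# fixed-length protein feature vector. Random data stand in for the real features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 188))          # placeholder for 188-D protein feature vectors
y = rng.integers(0, 10, size=600)        # placeholder fold labels

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",                        # average predicted class probabilities
)
print("mean CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```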
Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas
2015-11-10
Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.
NASA Astrophysics Data System (ADS)
Bolton, Philip H.
Heteronuclear two-dimensional magnetic resonance is a novel method for investigating the conformations of cellular phosphates. The two-dimensional proton spectra are detected indirectly via the phosphorus-31 nucleus and thus allow determination of proton chemical shifts and coupling constants in situations in which the normal proton spectrum is obscured. Previous investigations of cellular phosphates with relatively simple spin systems have shown that the two-dimensional proton spectrum can be readily related to the normal proton spectrum by subspectral analysis. The normal proton spectrum can be decomposed into two subspectra, one for each polarization of the phosphorus-31 nucleus. The two-dimensional spectrum arises from the difference between the subspectra, and the normal proton spectrum is the sum. This allows simulation of the two-dimensional spectra and hence determination of the proton chemical shifts and coupling constants. Many cellular phosphates of interest, such as 5'-nucleotides and phosphoserine, contain three protons coupled to the phosphorus which are strongly coupled to one another. These samples are amenable to the two-dimensional method and the straightforward subspectral analysis is preserved when a 90° pulse is applied to the protons in the magnetization transfer step. The two-dimensional proton spectra of the samples investigated here have higher resolution than the normal proton spectra, revealing spectral features not readily apparent in the normal proton spectra.
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted to reduce the dimensionality of the feature vector. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves a high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246
Convolutional neural network features based change detection in satellite images
NASA Astrophysics Data System (ADS)
Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong
2016-07-01
With the popular use of high resolution remote sensing (HRRS) satellite images, huge research effort has been placed on the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) circumvent this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images based on deep Convolutional Neural Network (CNN) features is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN. This method avoids the performance limitations of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is evaluated after a normalization step, resulting in a unique higher dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images according to qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
Cheng, Qiang; Zhou, Hongbo; Cheng, Jie
2011-06-01
Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward efficiently selecting the globally optimal subset of features, we introduce a new selector--which we call the Fisher-Markov selector--to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
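To make the "discriminativeness" ingredient concrete, the sketch below computes a plain Fisher score for each feature of a multiclass dataset and ranks features by it. This is only a simple baseline for illustration; it is not the Fisher-Markov selector itself, which additionally optimizes a sparsity-aware objective with Markov random field techniques.

```python
# Sketch: plain Fisher-score ranking of features for a multiclass problem
# (between-class scatter divided by within-class scatter, per feature).
import numpy as np
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
overall_mean = X.mean(axis=0)

numer = np.zeros(X.shape[1])
denom = np.zeros(X.shape[1])
for c in np.unique(y):
    Xc = X[y == c]
    numer += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2   # between-class scatter
    denom += len(Xc) * Xc.var(axis=0)                          # within-class scatter

fisher_score = numer / (denom + 1e-12)
top10 = np.argsort(fisher_score)[::-1][:10]
print("ten most discriminative pixels:", top10)
```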
Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua
2014-01-01
To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two-dimensional CT images for use in the diagnosis of lung cancer. A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant from 252 patients (150 male, 102 female). Further to this, nineteen patient information categories, which included seven demographic parameters and twelve morphological features, were also collected. A contourlet was used to extract fourteen types of textural features. These were then used to establish three support vector machine models. One comprised a database constructed of the nineteen collected patient information categories, another included contourlet textural features, and the third contained both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, Youden index, and F-measure used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Using the database containing textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93 respectively. These results were higher than results derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83 respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85 respectively). Using SMOTE as a pre-processing procedure, a new balanced database was generated, comprising 5,816 benign ROIs and 5,815 malignant ROIs, and the accuracy was 0.93. Our results indicate that combining the contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer.
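The class-balancing step described above can be sketched with imbalanced-learn's SMOTE feeding an SVM, keeping the oversampling inside the cross-validation pipeline so that synthetic samples are generated only from training folds. The synthetic feature vectors, the 33-dimensional size, and the RBF SVM are assumptions standing in for the contourlet textural features and patient attributes; the class counts mirror those reported above purely for flavour.

```python
# Sketch: balance an imbalanced two-class feature set with SMOTE, then evaluate an
# SVM with cross-validation. Synthetic vectors stand in for the real features.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (1454, 33)),        # "benign"-like minority class
               rng.normal(0.5, 1.0, (4845, 33))])       # "malignant"-like majority class
y = np.array([0] * 1454 + [1] * 4845)

# Keeping SMOTE inside the pipeline means oversampling happens only on training folds.
model = make_pipeline(SMOTE(random_state=0), StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```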
Defect-Repairable Latent Feature Extraction of Driving Behavior via a Deep Sparse Autoencoder
Taniguchi, Tadahiro; Takenaka, Kazuhito; Bando, Takashi
2018-01-01
Data representing driving behavior, as measured by various sensors installed in a vehicle, are collected as multi-dimensional sensor time-series data. These data often include redundant information, e.g., both the speed of wheels and the engine speed represent the velocity of the vehicle. Redundant information can be expected to complicate the data analysis, e.g., more factors need to be analyzed; even varying the levels of redundancy can influence the results of the analysis. We assume that the measured multi-dimensional sensor time-series data of driving behavior are generated from low-dimensional data shared by the many types of one-dimensional data of which multi-dimensional time-series data are composed. Meanwhile, sensor time-series data may be defective because of sensor failure. Therefore, another important function is to reduce the negative effect of defective data when extracting low-dimensional time-series data. This study proposes a defect-repairable feature extraction method based on a deep sparse autoencoder (DSAE) to extract low-dimensional time-series data. In the experiments, we show that DSAE provides high-performance latent feature extraction for driving behavior, even for defective sensor time-series data. In addition, we show that the negative effect of defects on the driving behavior segmentation task could be reduced using the latent features extracted by DSAE. PMID:29462931
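A minimal PyTorch sketch of a sparse autoencoder is given below: it reconstructs multi-dimensional sensor vectors through a narrow code while penalizing the L1 norm of the code activations. The layer sizes, the sparsity weight, and the random input are arbitrary assumptions for illustration; the paper's DSAE architecture, training schedule, and defect-repair mechanism may differ.

```python
# Minimal sketch of a sparse autoencoder: reconstruct sensor vectors through a
# narrow latent code with an L1 activity penalty encouraging sparse codes.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_inputs=30, n_latent=3, sparsity_weight=1e-3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 16), nn.ReLU(),
                                     nn.Linear(16, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(),
                                     nn.Linear(16, n_inputs))
        self.sparsity_weight = sparsity_weight

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

    def loss(self, x, x_hat, z):
        # reconstruction error plus sparsity penalty on the latent activations
        return nn.functional.mse_loss(x_hat, x) + self.sparsity_weight * z.abs().mean()

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 30)                     # stand-in for windows of sensor time series
for _ in range(200):
    x_hat, z = model(x)
    loss = model.loss(x, x_hat, z)
    opt.zero_grad(); loss.backward(); opt.step()
print("latent features:", model.encoder(x).shape)   # torch.Size([256, 3])
```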
Arrindell, Willem A; Urbán, Róbert; Carrozzino, Danilo; Bech, Per; Demetrovics, Zsolt; Roozen, Hendrik G
2017-09-01
To fully understand the dimensionality of an instrument in a certain population, rival bi-factor models should be routinely examined and tested against oblique first-order and higher-order structures. The present study is among the very few studies that have carried out such a comparison in relation to the Symptom Checklist-90-R. In doing so, it utilized a sample comprising 2593 patients with substance use and impulse control disorders. The study also included a test of a one-dimensional model of general psychological distress. Oblique first-order factors were based on the original a priori 9-dimensional model advanced by Derogatis (1977), and on an 8-dimensional model proposed by Arrindell and Ettema (2003): Agoraphobia, Anxiety, Depression, Somatization, Cognitive-performance deficits, Interpersonal sensitivity and mistrust, Acting-out hostility, and Sleep difficulties. Taking individual symptoms as input, three higher-order models were tested with, at the second-order level, either (1) General psychological distress; (2) 'Panic with agoraphobia', 'Depression' and 'Extra-punitive behavior'; or (3) 'Irritable-hostile depression' and 'Panic with agoraphobia'. In line with previous studies, no support was found for the one-factor model. Bi-factor models were found to fit the dataset best relative to the oblique first-order and higher-order models. However, oblique first-order and higher-order factor models also fit the data fairly well in absolute terms. Higher-order solution (2) provided support for R.F. Krueger's empirical model of psychopathology, which distinguishes between fear, distress, and externalizing factors (Krueger, 1999). The higher-order model (3), which combines externalizing and distress factors (Irritable-hostile depression), fit the data numerically equally well. Overall, the findings were interpreted as supporting the hypothesis that the prevalent forms of symptomatology addressed have both important common and unique features. Proposals were made to improve the Depression subscale, as its scores reflect the general construct measured by the severity (total) scale more than the specific construct it purports to assess, namely symptoms of depression. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
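As a loose illustration of pairing SVM-based feature selection with bootstrap-based evaluation by ROC AUC, the sketch below uses recursive feature elimination with a linear SVM and out-of-bag bootstrap testing. This simplification covers only the selection and evaluation loop; the constrained subspace-learning feature extraction step described above is not reproduced, and the synthetic data merely mimic the ~200-subject scale.

```python
# Sketch: SVM-based feature selection (RFE) evaluated with bootstrap resampling
# and ROC AUC on the out-of-bag samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.utils import resample

X, y = make_classification(n_samples=198, n_features=300, n_informative=20, random_state=0)

aucs = []
for b in range(20):                                  # bootstrap rounds
    idx = resample(np.arange(len(y)), random_state=b)
    test = np.setdiff1d(np.arange(len(y)), idx)      # out-of-bag samples for testing
    selector = RFE(SVC(kernel="linear"), n_features_to_select=30, step=0.2)
    selector.fit(X[idx], y[idx])
    clf = SVC(kernel="linear", probability=True).fit(X[idx][:, selector.support_], y[idx])
    scores = clf.predict_proba(X[test][:, selector.support_])[:, 1]
    aucs.append(roc_auc_score(y[test], scores))
print("bootstrap mean AUC:", np.mean(aucs))
```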
[Three-dimensional genome organization: a lesson from the Polycomb-Group proteins].
Bantignies, Frédéric
2013-01-01
As more and more genomes are being explored and annotated, important features of three-dimensional (3D) genome organization are just being uncovered. In the light of what we know about Polycomb group (PcG) proteins, we will present the latest findings on this topic. The PcG proteins are well-conserved chromatin factors that repress transcription of numerous target genes. They bind the genome at specific sites, forming chromatin domains of associated histone modifications as well as higher-order chromatin structures. These 3D chromatin structures involve the interactions between PcG-bound regulatory regions at short- and long-range distances, and may significantly contribute to PcG function. Recent high throughput "Chromosome Conformation Capture" (3C) analyses have revealed many other higher order structures along the chromatin fiber, partitioning the genomes into well demarcated topological domains. This revealed an unprecedented link between linear epigenetic domains and chromosome architecture, which might be intimately connected to genome function. © Société de Biologie, 2013.
A MUSIC-based method for SSVEP signal processing.
Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei
2016-03-01
The research on brain computer interfaces (BCIs) has become a hotspot in recent years because it offers disabled people a way to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC) based method was proposed for multi-dimensional SSVEP feature extraction. 2-second data epochs from four electrodes achieved excellent accuracy rates including idle state detection. In some asynchronous mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attained good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
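For readers unfamiliar with MUSIC, the sketch below applies the classic single-channel MUSIC pseudospectrum to a simulated 2-second epoch containing a 12 Hz SSVEP-like sinusoid: the signal is time-embedded into snapshots, the covariance eigenspace is split into signal and noise subspaces, and candidate frequencies are scored by their distance from the noise subspace. The embedding length, the assumed two signal components, and the simulated data are illustrative choices, not the paper's exact multi-channel formulation.

```python
# Sketch of a MUSIC pseudospectrum for detecting a dominant stimulation frequency.
import numpy as np

fs, T = 250, 2.0                                   # sampling rate (Hz), epoch length (s)
t = np.arange(int(fs * T)) / fs
x = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

m = 40                                             # embedding (snapshot) length
snapshots = np.stack([x[i:i + m] for i in range(x.size - m)])
R = snapshots.T @ snapshots / snapshots.shape[0]   # sample covariance, m x m

eigvals, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
noise_subspace = eigvecs[:, :-2]                   # assume 2 signal components (sin/cos pair)

freqs = np.arange(5.0, 30.0, 0.1)
pseudo = []
for f in freqs:
    steering = np.exp(2j * np.pi * f * np.arange(m) / fs)
    proj = noise_subspace.conj().T @ steering      # projection onto the noise subspace
    pseudo.append(1.0 / np.real(proj.conj() @ proj))
print("detected frequency:", freqs[int(np.argmax(pseudo))], "Hz")
```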
A Three-Dimensional Model of the Yeast Genome
NASA Astrophysics Data System (ADS)
Noble, William; Duan, Zhi-Jun; Andronescu, Mirela; Schutz, Kevin; McIlwain, Sean; Kim, Yoo Jung; Lee, Choli; Shendure, Jay; Fields, Stanley; Blau, C. Anthony
Layered on top of information conveyed by DNA sequence and chromatin are higher order structures that encompass portions of chromosomes, entire chromosomes, and even whole genomes. Interphase chromosomes are not positioned randomly within the nucleus, but instead adopt preferred conformations. Disparate DNA elements co-localize into functionally defined aggregates or factories for transcription and DNA replication. In budding yeast, Drosophila and many other eukaryotes, chromosomes adopt a Rabl configuration, with arms extending from centromeres adjacent to the spindle pole body to telomeres that abut the nuclear envelope. Nonetheless, the topologies and spatial relationships of chromosomes remain poorly understood. Here we developed a method to globally capture intra- and inter-chromosomal interactions, and applied it to generate a map at kilobase resolution of the haploid genome of Saccharomyces cerevisiae. The map recapitulates known features of genome organization, thereby validating the method, and identifies new features. Extensive regional and higher order folding of individual chromosomes is observed. Chromosome XII exhibits a striking conformation that implicates the nucleolus as a formidable barrier to interaction between DNA sequences at either end. Inter-chromosomal contacts are anchored by centromeres and include interactions among transfer RNA genes, among origins of early DNA replication and among sites where chromosomal breakpoints occur. Finally, we constructed a three-dimensional model of the yeast genome. Our findings provide a glimpse of the interface between the form and function of a eukaryotic genome.
NASA Astrophysics Data System (ADS)
Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman
2018-02-01
Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to a number of limitations such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on a new hyperspectral image ("Huston data") and a multispectral image ("Washington DC data") show that this new scheme achieves better feature learning performance than primitive features, traditional classifiers, and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.
Dimensionality Reduction Through Classifier Ensembles
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)
1999-01-01
In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
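The input-decimation idea can be sketched as follows: for each class, keep the features most correlated with that class's indicator variable, train one classifier per class-specific subset, and average the ensemble members' predicted probabilities. The subset size of 16, the logistic-regression base learner, and the digits data are illustrative assumptions, not the article's experimental setup.

```python
# Sketch of input decimation: class-specific feature subsets feeding an ensemble.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

members = []
for c in np.unique(ytr):
    indicator = (ytr == c).astype(float)
    corr = np.abs([np.corrcoef(Xtr[:, j], indicator)[0, 1] if Xtr[:, j].std() > 0 else 0.0
                   for j in range(Xtr.shape[1])])
    subset = np.argsort(corr)[::-1][:16]                 # 16 features most correlated with class c
    clf = LogisticRegression(max_iter=2000).fit(Xtr[:, subset], ytr)
    members.append((subset, clf))

# average the members' class-probability outputs, then take the most likely class
proba = np.mean([clf.predict_proba(Xte[:, s]) for s, clf in members], axis=0)
print("ensemble accuracy:", (proba.argmax(axis=1) == yte).mean())
```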
NASA Technical Reports Server (NTRS)
Stoutemyer, D. R.
1977-01-01
The computer algebra language MACSYMA enables the programmer to include symbolic physical units in computer calculations, and features automatic detection of dimensionally-inhomogeneous formulas and conversion of inconsistent units in a dimensionally homogeneous formula. Some examples illustrate these features.
Binary classification of items of interest in a repeatable process
Abell, Jeffrey A; Spicer, John Patrick; Wincek, Michael Anthony; Wang, Hui; Chakraborty, Debejyo
2015-01-06
A system includes host and learning machines. Each machine has a processor in electrical communication with at least one sensor. Instructions for predicting a binary quality status of an item of interest during a repeatable process are recorded in memory. The binary quality status includes passing and failing binary classes. The learning machine receives signals from the at least one sensor and identifies candidate features. Features are extracted from the candidate features, each more predictive of the binary quality status. The extracted features are mapped to a dimensional space having a number of dimensions proportional to the number of extracted features. The dimensional space includes most of the passing class and excludes at least 90 percent of the failing class. Received signals are compared to the boundaries of the recorded dimensional space to predict, in real time, the binary quality status of a subsequent item of interest.
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Feature Screening for Ultrahigh Dimensional Categorical Data with Applications.
Huang, Danyang; Li, Runze; Wang, Hansheng
2014-01-01
Ultrahigh dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for categorical response with ultrahigh dimensional categorical covariates. The proposed procedure can be directly applied for detection of important interaction effects. We further show that the proposed procedure possesses screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies, and illustrate the proposed method by two empirical datasets.
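A small sketch of chi-square screening follows: each categorical covariate is cross-tabulated against the categorical response, the Pearson chi-square statistic of the contingency table is computed, and the top-ranked covariates are retained. The synthetic data, the three covariate levels, and the cut-off of 20 retained covariates are illustrative assumptions, not the paper's exact statistic normalization or threshold rule.

```python
# Sketch: screen categorical covariates by the Pearson chi-square statistic of each
# covariate-response contingency table and keep the top-ranked ones.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n, p = 500, 2000
y = rng.integers(0, 2, size=n)                       # binary categorical response
X = rng.integers(0, 3, size=(n, p))                  # categorical covariates (3 levels)
X[:, 0] = (y + rng.integers(0, 2, size=n)) % 3       # make covariate 0 informative

stats = np.empty(p)
for j in range(p):
    table = np.zeros((3, 2))
    for level in range(3):
        for cls in range(2):
            table[level, cls] = np.sum((X[:, j] == level) & (y == cls))
    stats[j] = chi2_contingency(table)[0]            # Pearson chi-square statistic

screened = np.argsort(stats)[::-1][:20]              # retain the 20 highest statistics
print("covariate 0 retained:", 0 in screened)
```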
Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition
NASA Astrophysics Data System (ADS)
Hafizhelmi Kamaru Zaman, Fadhlan
2018-03-01
Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has been previously used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally applied Local Orthogonal Least Squares (LOLS) model that can be used for initial feature extraction before the application of LLE. By constructing least squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied on the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy compared to using either LLE or OLS alone. Comparison against several other feature extraction methods and more recent feature-learning methods such as state-of-the-art Convolutional Neural Networks (CNN) also reveals the superiority of the proposed method under the SSPP constraint.
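The final LLE step can be sketched with scikit-learn's LocallyLinearEmbedding applied to face image vectors. The LOLS pre-processing proposed above is not reproduced; the sketch applies LLE to raw pixel vectors of the Olivetti faces, and the neighbour count and target dimensionality are arbitrary choices.

```python
# Sketch: map high-dimensional face image vectors to a low-dimensional space with
# Locally Linear Embedding (the Olivetti faces download ~4 MB on first use).
from sklearn.datasets import fetch_olivetti_faces
from sklearn.manifold import LocallyLinearEmbedding

faces = fetch_olivetti_faces()                     # 400 images of 40 subjects, 64x64 pixels
X = faces.data                                     # shape (400, 4096)

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=30, random_state=0)
X_low = lle.fit_transform(X)
print(X_low.shape)                                 # (400, 30)
```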
Computing and visualizing time-varying merge trees for high-dimensional data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
Deep neural network using color and synthesized three-dimensional shape for face recognition
NASA Astrophysics Data System (ADS)
Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun
2017-03-01
We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it could provide more reliable complementary features in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model by using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are successfully learned according to their own characteristics. Then, we merge them and feed them into higher-level layers under a single deep neural network. Our proposed approach is evaluated with the labeled faces in the wild dataset, and the results show that the error rate of the verification rate at a false acceptance rate of 1% is improved by up to 32.1% compared with the baseline where only a 2-D color image is used.
Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.
Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant
2013-01-01
Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.
Lancaster, Matthew E; Shelhamer, Ryan; Homa, Donald
2013-04-01
Two experiments investigated category inference when categories were composed of correlated or uncorrelated dimensions and the categories overlapped minimally or moderately. When the categories minimally overlapped, the dimensions were strongly correlated with the category label. Following a classification learning phase, subsequent transfer required the selection of either a category label or a feature when one, two, or three features were missing. Experiments 1 and 2 differed primarily in the number of learning blocks prior to transfer. In each experiment, the inference of the category label or category feature was influenced by both dimensional and category correlations, as well as their interaction. The number of cues available at test impacted performance more when the dimensional correlations were zero and category overlap was high. However, a minimal number of cues were sufficient to produce high levels of inference when the dimensions were highly correlated; additional cues had a positive but reduced impact, even when overlap was high. Subjects were generally more accurate in inferring the category label than a category feature regardless of dimensional correlation, category overlap, or number of cues available at test. Whether the category label functioned as a special feature or not was critically dependent upon these embedded correlations, with feature inference driven more strongly by dimensional correlations.
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level fusion are raw biometric data, which contain rich information compared to decision and matching score level fusion. Hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
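Feature-level fusion followed by PCA and a nearest-neighbour classifier can be sketched as below: two per-sample feature vectors are concatenated, reduced with PCA, and classified with KNN. The random stand-in "palmprint" and "iris" features, the 50 principal components, and the 3-nearest-neighbour rule are illustrative assumptions, not the paper's descriptors or parameters.

```python
# Sketch of feature-level fusion: concatenate two per-sample feature vectors,
# reduce the fused vector with PCA, and classify with k-nearest neighbours.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_per_subject = 40, 10
y = np.repeat(np.arange(n_subjects), n_per_subject)
palm = rng.normal(size=(y.size, 512)) + y[:, None] * 0.05   # crude subject structure
iris = rng.normal(size=(y.size, 256)) + y[:, None] * 0.05

fused = np.hstack([palm, iris])                    # feature-level fusion by concatenation
model = make_pipeline(StandardScaler(), PCA(n_components=50),
                      KNeighborsClassifier(n_neighbors=3))
print("CV accuracy:", cross_val_score(model, fused, y, cv=5).mean())
```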
NASA Astrophysics Data System (ADS)
Kohno, Masanori
2018-05-01
The single-particle spectral properties of the two-dimensional t-J model with next-nearest-neighbor hopping are investigated near the Mott transition by using cluster perturbation theory. The spectral features are interpreted by considering the effects of the next-nearest-neighbor hopping on the shift of the spectral-weight distribution of the two-dimensional t-J model. Various anomalous features observed in hole-doped and electron-doped high-temperature cuprate superconductors are collectively explained in the two-dimensional t-J model with next-nearest-neighbor hopping near the Mott transition.
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
Rothschild, Freda; Bishop, Alexis I; Kitchen, Marcus J; Paganin, David M
2014-03-24
The Cornu spiral is, in essence, the image resulting from an Argand-plane map associated with monochromatic complex scalar plane waves diffracting from an infinite edge. Argand-plane maps can be useful in the analysis of more general optical fields. We experimentally study particular features of Argand-plane mappings known as "vorticity singularities" that are associated with mapping continuous single-valued complex scalar speckle fields to the Argand plane. Vorticity singularities possess a hierarchy of Argand-plane catastrophes including the fold, cusp and elliptic umbilic. We also confirm their connection to vortices in two-dimensional complex scalar waves. The study of vorticity singularities may also have implications for higher-dimensional fields such as coherence functions and multi-component fields such as vector and spinor fields.
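For orientation, the Cornu spiral itself is easy to reproduce: it is the Argand-plane curve (C(s), S(s)) traced by the Fresnel integrals, which is the map associated with plane-wave diffraction from an edge. The sketch below only draws that textbook curve with scipy and matplotlib; the vorticity-singularity analysis of speckle fields described above is not attempted here.

```python
# Sketch: the Cornu spiral as the Argand-plane map of the Fresnel integrals.
import numpy as np
from scipy.special import fresnel
import matplotlib.pyplot as plt

s = np.linspace(-8, 8, 4000)
S, C = fresnel(s)                       # scipy returns (S(s), C(s))

plt.plot(C, S, lw=0.8)
plt.xlabel("C(s)  (real part)")
plt.ylabel("S(s)  (imaginary part)")
plt.title("Cornu spiral: Argand-plane map of edge diffraction")
plt.axis("equal")
plt.show()
```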
L-dependence of low energy spin excitations in FeTe/Se superconductors
NASA Astrophysics Data System (ADS)
Xu, Guangyong; Xu, Zhijun; Schneeloch, John; Wen, Jinsheng; Winn, Barry; Zhao, Yang; Birgeneau, Robert; Gu, Genda; Tranquada, John
We will present neutron scattering measurements on low energy magnetic excitations from FeTe1-xSex ("11" system) samples. Our work shows that the low energy magnetic excitations are dominated by 2D correlations in the superconducting (SC) compound at low temperature, with the L-dependence well described by the Fe magnetic form factor. However, at temperatures much higher than TC, the magnetic excitations become more three-dimensional with a clear change in the L-dependence. The low energy magnetic excitations from non-superconducting (NSC) samples, on the other hand, always exhibit three-dimensional features for the entire temperature range of our measurements. Our results suggest that in addition to in-plane correlations, the inter-plane spin correlations are also coupled to the superconducting properties in the "11" system.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images, and achieves a higher palmprint recognition rate.
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Deaths from stomach cancer still occur, and early diagnosis is crucial in reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection are developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. At the same time, Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, Classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps methods are used for dimensionality reduction of the features. The high dimension of these features has been reduced to lower dimensions using these dimensionality reduction methods. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify stomach cancer images with these new lower feature dimensions. New medical systems were developed to measure the effect of feature dimensionality by obtaining features at different dimensionalities with the dimensionality reduction methods. When all the developed methods are compared, it has been found that the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
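The descriptor-reduce-classify pipeline described above can be sketched with scikit-image and scikit-learn: LBP and HOG descriptors are concatenated, reduced with Isomap (one of the listed reduction methods), and classified with a small neural network. The scikit-learn digits images stand in for the pathology images, so the image size, descriptor parameters, and scores are purely illustrative.

```python
# Sketch: LBP + HOG descriptors, Isomap dimensionality reduction, then an MLP classifier.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

digits = load_digits()

def describe(img):
    img = img.astype(np.uint8)
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")          # values 0..9
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(img, orientations=8, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    return np.concatenate([lbp_hist, hog_vec])

X = np.array([describe(img) for img in digits.images])     # 8x8 grayscale images
model = make_pipeline(StandardScaler(),
                      Isomap(n_neighbors=10, n_components=10),
                      MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0))
print("CV accuracy:", cross_val_score(model, X, digits.target, cv=3).mean())
```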
Dimensional control of die castings
NASA Astrophysics Data System (ADS)
Karve, Aniruddha Ajit
The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors--the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error i.e., dimensional variability and die allowance were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. The results of this study will contribute to enhancement of dimensional quality and lead time compression in the die casting industry, thus making it competitive with other net shape manufacturing processes.
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.
Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng
2018-01-01
In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture feature, and morphological property, to improve the performances, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high dimensional vector and then apply a certain dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains definitely have different physical meanings and statistical properties, and thus such concatenation does not efficiently exploit the complementary properties among different features, which would help boost the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low dimensional subspace by projecting the spectral-spatial feature into a common feature space, where the complementary information has been effectively exploited, and simultaneously, only the most significant original features have been transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high-dimensional genome-wide association (GWA) case-control data on complex disease, a large proportion of single-nucleotide polymorphisms (SNPs) are usually irrelevant to the disease. A simple random sampling method in a random forest that uses the default mtry parameter to choose the feature subspace will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and to get rid of the vast number of non-informative SNPs, but it is too time-consuming and therefore unfavourable for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for high-dimensional GWA data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. The advantage of this stratified sampling procedure is that it ensures each subspace contains enough useful SNPs, avoids the very high computational cost of an exhaustive search for an optimal mtry, and maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective and generates better random forests with higher accuracy and lower error bounds than those built with Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
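As a rough illustration of the stratified feature-subspace idea described above, the following sketch grows trees on equal-width informativeness groups of SNPs. It assumes a precomputed per-SNP informativeness score (e.g., a chi-square statistic); the group count, per-group subspace size and classifier settings are illustrative choices, not the paper's exact configuration.

```python
# Stratified feature-subspace sampling for random-forest trees on SNP data (sketch).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def stratified_subspace(scores, n_groups=5, per_group=20, rng=None):
    """Equal-width discretization of informativeness, then equal sampling per group."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(scores.min(), scores.max(), n_groups + 1)
    groups = np.clip(np.digitize(scores, edges[1:-1]), 0, n_groups - 1)
    subspace = []
    for g in range(n_groups):
        idx = np.where(groups == g)[0]
        take = min(per_group, idx.size)
        if take:
            subspace.extend(rng.choice(idx, size=take, replace=False))
    return np.array(subspace)

def grow_stratified_forest(X, y, scores, n_trees=100):
    rng = np.random.default_rng(0)
    forest = []
    for _ in range(n_trees):
        feats = stratified_subspace(scores, rng=rng)
        boot = rng.integers(0, len(y), size=len(y))        # bootstrap sample of cases
        tree = DecisionTreeClassifier().fit(X[boot][:, feats], y[boot])
        forest.append((feats, tree))                       # keep the subspace with its tree
    return forest
```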
Rosso, Osvaldo A; Ospina, Raydonal; Frery, Alejandro C
2016-01-01
We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.
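A minimal sketch of the descriptor pipeline described above, assuming pen-trajectory coordinate arrays are available per signature. Only the normalized permutation (Bandt-Pompe) entropy is coded here; the statistical complexity and Fisher information would be computed from the same ordinal distribution, and all parameter values are illustrative.

```python
# Bandt-Pompe ordinal distribution and permutation entropy, feeding a one-class SVM (sketch).
import numpy as np
from math import factorial
from itertools import permutations
from sklearn.svm import OneClassSVM

def ordinal_distribution(x, d=4, tau=1):
    """Probabilities of ordinal patterns of embedding dimension d and delay tau."""
    patterns = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - (d - 1) * tau):
        window = x[i:i + d * tau:tau]
        patterns[tuple(int(v) for v in np.argsort(window))] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def permutation_entropy(x, d=4, tau=1):
    p = ordinal_distribution(x, d, tau)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(factorial(d))   # normalized Shannon entropy

# Hypothetical usage: six features per signature (entropy, complexity, Fisher information
# for the x(t) and y(t) traces), then a one-class SVM trained on genuine signatures only.
# feats_genuine = np.array([[permutation_entropy(sx), permutation_entropy(sy)] for sx, sy in genuine])
# model = OneClassSVM(nu=0.1, gamma='scale').fit(feats_genuine)
```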
Anticipating and controlling mask costs within EDA physical design
NASA Astrophysics Data System (ADS)
Rieger, Michael L.; Mayhew, Jeffrey P.; Melvin, Lawrence S.; Lugg, Robert M.; Beale, Daniel F.
2003-08-01
For low-k1 lithography, more aggressive OPC is being applied to critical layers, and the number of mask layers with OPC treatments is growing rapidly. The 130 nm process node required, on average, 8 layers containing rules- or model-based OPC. The 90 nm node will have 16 OPC layers, of which 14 layers contain aggressive model-based OPC. This escalation of mask pattern complexity, coupled with the predominant use of vector-scan e-beam (VSB) mask writers, contributes to the rising costs of advanced mask sets. Writing times for OPC layouts are several times longer than for traditional layouts, making mask exposure the single largest cost component for OPC masks. Lower mask yield, another key factor in higher mask costs, is also aggravated by OPC. Historical mask set costs are plotted below. The initial cost of a 90 nm-node mask set will exceed one million dollars. The relative impact of mask cost on chip cost depends on how many total wafers are printed with each mask set. For many foundry chips, where unit production is often well below 1000 wafers, mask costs are larger than wafer processing costs. Further increases in NRE may begin to discourage these suppliers' adoption of 90 nm and smaller nodes. In this paper we will outline several alternatives for reducing mask costs by strategically leveraging dimensional margins. Dimensional specifications for a particular masking layer usually are applied uniformly to all features on that layer. As a practical matter, accuracy requirements on different features in the design may vary widely. Take a polysilicon layer, for example: global tolerance specifications for that layer are driven by the transistor-gate requirements, but these parameters over-specify interconnect feature requirements. By identifying features where dimensional accuracy requirements can be reduced, additional margin can be leveraged to reduce OPC complexity. Mask writing time on VSB tools will drop in nearly direct proportion to the reduced shot count. By inspecting masks with reference to feature-dependent margins, instead of uniform specifications, mask yield can be effectively increased, further reducing delivered mask expense.
MO-AB-BRA-10: Cancer Therapy Outcome Prediction Based On Dempster-Shafer Theory and PET Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, C; University of Rouen, QuantIF - EA 4108 LITIS, 76000 Rouen; Li, H
2015-06-15
Purpose: In cancer therapy, utilizing FDG-18 PET image-based features for accurate outcome prediction is challenging because of 1) the limited discriminative information within a small number of PET image sets, and 2) fluctuating feature characteristics caused by the inferior spatial resolution and system noise of PET imaging. In this study, we proposed a new Dempster-Shafer theory (DST) based approach, evidential low-dimensional transformation with feature selection (ELT-FS), to accurately predict cancer therapy outcome with both PET imaging features and clinical characteristics. Methods: First, a specific loss function with a sparse penalty was developed to learn an adaptive low-rank distance metric for representing the dissimilarity between different patients' feature vectors. By minimizing this loss function, a linear low-dimensional transformation of the input features was achieved. Also, imprecise features were excluded simultaneously by applying an l2,1-norm regularization of the learnt dissimilarity metric in the loss function. Finally, the learnt dissimilarity metric was applied in an evidential K-nearest-neighbor (EK-NN) classifier to predict treatment outcome. Results: Twenty-five patients with stage II-III non-small-cell lung cancer and thirty-six patients with esophageal squamous cell carcinomas treated with chemo-radiotherapy were collected. For the two groups of patients, 52 and 29 features, respectively, were utilized. The leave-one-out cross-validation (LOOCV) protocol was used for evaluation. Compared to three existing linear transformation methods (PCA, LDA, NCA), the proposed ELT-FS leads to higher prediction accuracy for the training and testing sets both for lung-cancer patients (100 ± 0.0, 88.0 ± 33.17) and for esophageal-cancer patients (97.46 ± 1.64, 83.33 ± 37.8). The ELT-FS also provides superior class separation in both test data sets. Conclusion: A novel DST-based approach has been proposed to predict cancer treatment outcome using PET image features and clinical characteristics. A specific loss function has been designed for robust accommodation of feature set incertitude and imprecision, facilitating adaptive learning of the dissimilarity metric for the EK-NN classifier.
Functional Connectivity among Spikes in Low Dimensional Space during Working Memory Task in Rat
Tian, Xin
2014-01-01
Working memory (WM) is critically important in cognitive tasks. Functional connectivity has been a powerful tool for understanding the mechanism underlying information processing during WM tasks. The aim of this study is to investigate how to effectively characterize the dynamic variations of the functional connectivity in a low dimensional space spanned by the principal components (PCs) extracted from the instantaneous firing rate series. Spikes were obtained from the medial prefrontal cortex (mPFC) of rats with an implanted microelectrode array and then transformed into continuous series via the instantaneous firing rate method. The Granger causality method is used to study the functional connectivity. Three scalar metrics were then applied to identify the changes of the reduced-dimensionality functional network during working memory tasks: functional connectivity (GC), global efficiency (E) and causal density (CD). As a comparison, GC, E and CD were also calculated to describe the functional connectivity in the original space. The results showed that these network characteristics changed dynamically during the correct WM tasks. The metric values increased to a maximum and then decreased, both in the original and in the reduced-dimensionality space. Moreover, the feature values in the reduced-dimensionality space were significantly higher during the WM tasks than those in the original space. These findings suggest that the functional connectivity among the spikes varied dynamically during the WM tasks and could be described effectively in the low dimensional space. PMID:24658291
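A minimal sketch of the reduced-space connectivity computation described above, assuming a (time x neurons) matrix of instantaneous firing rates; the component count, lag order, stand-in data and the use of the F statistic from statsmodels' Granger test are illustrative choices rather than the paper's exact settings.

```python
# PCA of firing-rate series followed by pairwise Granger causality between the PCs (sketch).
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.stattools import grangercausalitytests

rates = np.random.default_rng(0).poisson(5.0, size=(2000, 16)).astype(float)  # stand-in data
pcs = PCA(n_components=3).fit_transform(rates)             # low-dimensional principal components

maxlag = 5
gc = np.zeros((pcs.shape[1], pcs.shape[1]))                 # pairwise Granger-causality matrix
for i in range(pcs.shape[1]):
    for j in range(pcs.shape[1]):
        if i == j:
            continue
        # tests whether PC j Granger-causes PC i (second column causes the first)
        res = grangercausalitytests(pcs[:, [i, j]], maxlag=maxlag, verbose=False)
        gc[j, i] = res[maxlag][0]['ssr_ftest'][0]            # F statistic at the chosen lag

# Network summaries akin to global efficiency or causal density could then be computed
# from a thresholded version of `gc`.
```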
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to a poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.
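A rough sketch of the iterative discriminative-subspace idea (LDA projection followed by Gaussian-mixture clustering), assuming a matrix of aligned spike waveforms. The PCA + k-means initialization, fixed cluster count and iteration count are simplifications; the paper's automatic cluster-number detection and outlier handling are not reproduced.

```python
# Iterative LDA subspace selection + GMM clustering for spike sorting (simplified sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def sort_spikes(spikes, n_clusters=3, n_iter=10):
    # crude initialization of labels in a PCA space
    labels = KMeans(n_clusters, n_init=10).fit_predict(PCA(5).fit_transform(spikes))
    for _ in range(n_iter):
        lda = LinearDiscriminantAnalysis(n_components=min(n_clusters - 1, 3))
        feats = lda.fit_transform(spikes, labels)            # discriminative subspace
        gmm = GaussianMixture(n_components=n_clusters, covariance_type='full')
        labels = gmm.fit_predict(feats)                      # re-cluster in that subspace
    return labels, feats
```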
NASA Astrophysics Data System (ADS)
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to a poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, S; Jeraj, R; Galavis, P
Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as the percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon, α<0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between the feature value extracted in 2D and the feature value extracted in 3D was poor (R<0.5) in 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, |R|>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ=6%) than in 3D (σ<1%) extraction. Conclusion: Sensitivity and correlation of various texture features were shown to differ significantly between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.
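A small sketch of the sensitivity metric used above, written from the wording of the abstract (percent difference relative to the average across reconstructions, summarized by its overall range); the exact published formula may differ.

```python
# Range of variation of one feature across reconstruction settings (sketch).
import numpy as np

def range_of_variation(values):
    """Overall range of percent differences relative to the mean across reconstructions."""
    values = np.asarray(values, dtype=float)
    pct_diff = 100.0 * (values - values.mean()) / values.mean()
    return pct_diff.max() - pct_diff.min()

# e.g., one texture feature extracted under five reconstruction settings (2D or 3D patches)
print(range_of_variation([1.02, 0.97, 1.10, 0.99, 1.05]))
```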
NASA Astrophysics Data System (ADS)
Jaferzadeh, Keyvan; Moon, Inkyu
2016-12-01
The classification of erythrocytes plays an important role in the field of hematological diagnosis, specifically blood disorders. Since the biconcave shape of the red blood cell (RBC) is altered during the different stages of hematological disorders, we believe that three-dimensional (3-D) morphological features of erythrocytes provide better classification results than conventional two-dimensional (2-D) features. Therefore, we introduce a set of 3-D features related to the morphological and chemical properties of the RBC profile and evaluate the discrimination power of these features against 2-D features with a neural network classifier. The 3-D features include erythrocyte surface area, volume, average cell thickness, sphericity index, sphericity coefficient and functionality factor, MCH and MCHSD, and two newly introduced features extracted from the ring section of the RBC at the single-cell level. In contrast, the 2-D features are RBC projected surface area, perimeter, radius, elongation, and projected surface area to perimeter ratio. All features are obtained from images visualized by off-axis digital holographic microscopy with a numerical reconstruction algorithm, and four categories of RBCs are of interest: biconcave (doughnut shape), flat-disc, stomatocyte, and echinospherocyte. Our experimental results demonstrate that the 3-D features can be more useful in RBC classification than the 2-D features. Finally, we choose the best feature set from the 2-D and 3-D features by a sequential forward feature selection technique, which yields better discrimination results. We believe that the final feature set evaluated with a neural network classification strategy can improve the RBC classification accuracy.
Reorienting in Images of a Three-Dimensional Environment
ERIC Educational Resources Information Center
Kelly, Debbie M.; Bischof, Walter F.
2005-01-01
Adult humans searched for a hidden goal in images depicting 3-dimensional rooms. Images contained either featural cues, geometric cues, or both, which could be used to determine the correct location of the goal. In Experiment 1, participants learned to use featural and geometric information equally well. However, men and women showed significant…
Model-Free Conditional Independence Feature Screening For Ultrahigh Dimensional Data.
Wang, Luheng; Liu, Jingyuan; Li, Yong; Li, Runze
2017-03-01
Feature screening plays an important role in ultrahigh dimensional data analysis. This paper is concerned with conditional feature screening when one is interested in detecting the association between the response and ultrahigh dimensional predictors (e.g., genetic markers) given a low-dimensional exposure variable (such as clinical variables or environmental variables). To this end, we first propose a new index to measure conditional independence, and further develop a conditional screening procedure based on the newly proposed index. We systematically study the theoretical property of the proposed procedure and establish the sure screening and ranking consistency properties under some very mild conditions. The newly proposed screening procedure enjoys some appealing properties. (a) It is model-free in that its implementation does not require a specification of the model structure; (b) it is robust to heavy-tailed distributions or outliers in both directions of response and predictors; and (c) it can deal with both feature screening and the conditional screening in a unified way. We study the finite sample performance of the proposed procedure by Monte Carlo simulations and further illustrate the proposed method through two real data examples.
Higher-order gravity in higher dimensions: geometrical origins of four-dimensional cosmology?
NASA Astrophysics Data System (ADS)
Troisi, Antonio
2017-03-01
Determining the cosmological field equations is still very much debated, and this has led to a wide discussion of different theoretical proposals. A suitable conceptual scheme could be represented by gravity models that naturally generalize Einstein theory, such as higher-order gravity theories and higher-dimensional ones. Both of these two different approaches allow one to define, at the effective level, Einstein field equations equipped with source-like energy-momentum tensors of geometrical origin. In this paper, the possibility is discussed to develop a five-dimensional fourth-order gravity model whose lower-dimensional reduction could provide an interpretation of the cosmological four-dimensional matter-energy components. We describe the basic concepts of the model, the complete field equations formalism and the 5-D to 4-D reduction procedure. Five-dimensional f(R) field equations turn out to be equivalent, on the four-dimensional hypersurfaces orthogonal to the extra coordinate, to an Einstein-like cosmological model with three matter-energy tensors related to higher-derivative and higher-dimensional counter-terms. By considering the gravity model with f(R) = f_0 R^n, the possibility is investigated to obtain five-dimensional power-law solutions. The effective four-dimensional picture and the behaviour of the geometrically induced sources are finally outlined for simple cases of such higher-dimensional solutions.
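For orientation, the metric f(R) field equations that such a five-dimensional fourth-order model specializes take the standard form below (written here with an effective source term; sign and index conventions are assumptions, not necessarily those of the paper):

```latex
f'(R)\, R_{AB} - \tfrac{1}{2} f(R)\, g_{AB}
  + \left( g_{AB}\, \Box - \nabla_A \nabla_B \right) f'(R) = \kappa^2\, T_{AB},
\qquad A, B = 0, \dots, 4,
\qquad \text{with } f(R) = f_0 R^n \ \Rightarrow\ f'(R) = n f_0 R^{\,n-1}.
```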
String theory and aspects of higher dimensional gravity
NASA Astrophysics Data System (ADS)
Copsey, Keith
2007-05-01
String theory generically requires that there are more than the four easily observable dimensions. It has become clear in recent years that gravity in more than four dimensions presents qualitatively new features, and this thesis is dedicated to exploring some of these phenomena. I discuss the thermodynamics of new types of black holes with new types of charges and study aspects of the AdS-CFT correspondence dual to gravitational phenomena unique to higher dimensions. I further describe the construction of a broad new class of solutions in more than four dimensions containing dynamical minimal spheres ("bubbles of nothing") in asymptotically flat and AdS space without any asymptotic Kaluza-Klein direction.
Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition
Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao
2017-01-01
Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Second, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize the speeded-up robust features (SURF) algorithm to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of the HVS, isometric mapping (Isomap), a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
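A compact sketch of the recurrence-plot front end and the manifold-learning/classification stages described above, assuming a list of 1-D vibration segments with condition labels. The SURF descriptor is replaced here by a crude downsampled recurrence-plot vector purely for illustration, and the embedding, threshold and model settings are arbitrary, so this is a sketch of the pipeline, not the paper's method.

```python
# Recurrence plot -> simple descriptor -> Isomap -> SVM (illustrative sketch).
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVC

def recurrence_plot(x, dim=3, tau=2, eps=None):
    """Binary recurrence plot of a time-delay embedded signal (moderate-length segments)."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    eps = eps if eps is not None else 0.2 * dists.max()
    return (dists <= eps).astype(float)

def rp_descriptor(x, size=16):
    rp = recurrence_plot(np.asarray(x, dtype=float))
    step = max(1, rp.shape[0] // size)
    return rp[::step, ::step][:size, :size].ravel()          # crude stand-in for SURF features

# feats = np.array([rp_descriptor(s) for s in signals])
# low = Isomap(n_neighbors=10, n_components=5).fit_transform(feats)   # manifold-learning step
# clf = SVC(kernel='rbf').fit(low, labels)
```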
Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2014-07-01
Feature selection is a very important aspect of machine learning. It entails the search for an optimal subset from a very large data set with a high dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as part of the filter method. As for the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNN model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
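A short sketch of the signal-decomposition and filter stages only (discrete wavelet transform followed by PCA), assuming equal-length EEG segments; the wavelet family, decomposition level and component count are illustrative, and the harmony-search wrapper and wavelet neural network are not reproduced.

```python
# DWT coefficients per EEG segment, then PCA as the filter stage (sketch).
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_features(segment, wavelet='db4', level=4):
    coeffs = pywt.wavedec(segment, wavelet, level=level)    # [cA_level, cD_level, ..., cD_1]
    return np.concatenate(coeffs)

# segments: (n_segments x n_samples) array of equal-length EEG epochs
# X = np.array([wavelet_features(s) for s in segments])
# X_filtered = PCA(n_components=10).fit_transform(X)        # candidate pool for the HS wrapper
```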
NASA Astrophysics Data System (ADS)
Jiang, Li; Xuan, Jianping; Shi, Tielin
2013-12-01
Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of a dataset and simultaneously consider intra-class compactness and inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.
Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel
2017-01-01
Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is not only important to find a model with high predictive accuracy, but it is also important that this model uses only few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions, which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. Also, we find that for the assessment of stability it is most important that a measure contains a correction for chance or for large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
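A minimal sketch of a Pearson-correlation stability estimate, assuming a list of feature-index sets selected on different resampling iterations; averaging the pairwise correlations of the binary selection indicators is one common convention and not necessarily the exact estimator compared in the paper.

```python
# Stability of feature selection as the mean pairwise Pearson correlation of selection indicators.
import numpy as np
from itertools import combinations

def pearson_stability(selections, p):
    masks = np.zeros((len(selections), p))
    for i, sel in enumerate(selections):
        masks[i, list(sel)] = 1.0                      # binary indicator of selected features
    corrs = [np.corrcoef(masks[i], masks[j])[0, 1]
             for i, j in combinations(range(len(masks)), 2)]
    return float(np.mean(corrs))

# three resampling iterations selecting 3 of 10 features each
print(pearson_stability([{0, 1, 2}, {0, 1, 3}, {0, 2, 3}], p=10))
```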
Guo, Xinyu; Dominick, Kelli C; Minai, Ali A; Li, Hailong; Erickson, Craig A; Lu, Long J
2017-01-01
The whole-brain functional connectivity (FC) pattern obtained from resting-state functional magnetic resonance imaging data is commonly applied to study neuropsychiatric conditions such as autism spectrum disorder (ASD) using different machine learning models. Recent studies indicate that both hyper- and hypo-aberrant ASD-associated FCs were widely distributed throughout the entire brain rather than only in some specific brain regions. Deep neural networks (DNN) with multiple hidden layers have shown the ability to systematically extract lower-to-higher level information from high dimensional data across a series of neural hidden layers, significantly improving classification accuracy for such data. In this study, a DNN with a novel feature selection method (DNN-FS) is developed for the high dimensional whole-brain resting-state FC pattern classification of ASD patients vs. typical development (TD) controls. The feature selection method helps the DNN generate low dimensional, high-quality representations of the whole-brain FC patterns by selecting features with high discriminating power from multiple trained sparse auto-encoders. For comparison, a DNN without the feature selection method (DNN-woFS) is developed, and both are tested with different architectures (i.e., with different numbers of hidden layers/nodes). Results show that the best classification accuracy of 86.36% is generated by the DNN-FS approach with 3 hidden layers and 150 hidden nodes (3/150). Remarkably, DNN-FS outperforms DNN-woFS for all architectures studied. The most significant accuracy improvement was 9.09% with the 3/150 architecture. The method also outperforms other feature selection methods, e.g., the two-sample t-test and the elastic net. In addition to improving the classification accuracy, a Fisher's score-based biomarker identification method based on the DNN is also developed and used to identify 32 FCs related to ASD. These FCs come from or cross different pre-defined brain networks including the default-mode, cingulo-opercular, frontal-parietal, and cerebellum networks. Thirteen of them are statistically significant between the ASD and TD groups (two-sample t-test, p < 0.05) while 19 of them are not. The relationship between the statistically significant FCs and the corresponding ASD behavior symptoms is discussed based on the literature and clinicians' expert knowledge. Meanwhile, a potential reason for obtaining 19 FCs that are not statistically significant is also provided.
Application of Fourier analysis to multispectral/spatial recognition
NASA Technical Reports Server (NTRS)
Hornung, R. J.; Smith, J. A.
1973-01-01
One approach for investigating spectral response from materials is to consider spatial features of the response. This might be accomplished by considering the Fourier spectrum of the spatial response. The Fourier Transform may be used in a one-dimensional to multidimensional analysis of more than one channel of data. The two-dimensional transform represents the Fraunhofer diffraction pattern of the image in optics and has certain invariant features. Physically the diffraction pattern contains spatial features which are possibly unique to a given configuration or classification type. Different sampling strategies may be used to either enhance geometrical differences or extract additional features.
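A small sketch of how spatial features could be taken from the two-dimensional Fourier power spectrum of an image channel, in the spirit of the diffraction-pattern analysis described above; the radial-binning scheme and bin count are illustrative assumptions.

```python
# Radially binned 2-D Fourier power spectrum as a spatial feature vector (sketch).
import numpy as np

def radial_power_features(channel, n_bins=16):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(channel))) ** 2   # centered power spectrum
    h, w = spec.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)                        # radius from the spectrum center
    edges = np.linspace(0, r.max(), n_bins + 1)
    return np.array([spec[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# e.g., one feature vector per channel of a multispectral image `img` of shape (H, W, C)
# feats = np.concatenate([radial_power_features(img[..., c]) for c in range(img.shape[-1])])
```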
NASA Astrophysics Data System (ADS)
Dong, S.; Yan, Q.; Xu, Y.; Bai, J.
2018-04-01
In order to promote the construction of the digital geospatial framework in China and accelerate the development of an informatized mapping system, three-dimensional geographic information models have emerged. A three-dimensional geographic information model based on oblique photogrammetry has higher accuracy, a shorter production period and lower cost than traditional methods, and can more directly represent the elevation, position and appearance of features. At this stage, the technology for producing three-dimensional geographic information models based on oblique photogrammetry is developing rapidly; market demand and model products have grown substantially, and the associated quality inspection needs are growing as well. A review of the relevant literature shows that there is extensive research on the basic principles and technical characteristics of this technology, but relatively little on quality inspection and analysis. On the basis of summarizing the basic principles and technical characteristics of oblique photogrammetry, this paper introduces the inspection contents and inspection methods for three-dimensional geographic information models based on oblique photogrammetry. Combined with actual inspection work, the paper summarizes the quality problems of such models, analyzes the causes of the problems and puts forward quality control measures. It provides technical guidance for the quality inspection of three-dimensional geographic information model data products based on oblique photogrammetry in China and technical support for the vigorous development of this technology.
Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human-interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: a Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC_0.632+ = 0.88 with 95% empirical bootstrap interval [0.787; 0.895] for 13 ARD-selected features and AUC_0.632+ = 0.87 with interval [0.817; 0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC_0.632+ = 0.90 with interval [0.847; 0.919], all using the MCMC-BANN. Preliminary results appear to indicate that the new methods can match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
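A brief sketch of the two nonlinear dimension-reduction mappings named above, applied to a stand-in lesion feature matrix; the perplexity, neighbour count and target dimension are illustrative, and the downstream MCMC-BANN/LDA classification and bootstrap evaluation are not reproduced.

```python
# t-SNE and Laplacian eigenmaps (SpectralEmbedding) on a lesion feature matrix (sketch).
import numpy as np
from sklearn.manifold import TSNE, SpectralEmbedding   # SpectralEmbedding implements Laplacian eigenmaps

X = np.random.default_rng(0).normal(size=(300, 81))    # stand-in for an 81-D CADx feature space
X_tsne = TSNE(n_components=3, perplexity=30, init='pca').fit_transform(X)
X_le = SpectralEmbedding(n_components=3, n_neighbors=15).fit_transform(X)
# The mapped coordinates would then be fed to a classifier and evaluated with ROC analysis
# and 0.632+ bootstrap validation as described above.
```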
Universal dynamical properties preclude standard clustering in a large class of biochemical data.
Gomez, Florian; Stoop, Ralph L; Stoop, Ruedi
2014-09-01
Clustering of chemical and biochemical data based on observed features is a central cognitive step in the analysis of chemical substances, in particular in combinatorial chemistry, or of complex biochemical reaction networks. Often, for reasons unknown to the researcher, this step produces disappointing results. Once the sources of the problem are known, improved clustering methods might revitalize the statistical approach of compound and reaction search and analysis. Here, we present a generic mechanism that may be at the origin of many clustering difficulties. The variety of dynamical behaviors that can be exhibited by complex biochemical reactions on variation of the system parameters are fundamental system fingerprints. In parameter space, shrimp-like or swallow-tail structures separate parameter sets that lead to stable periodic dynamical behavior from those leading to irregular behavior. We work out the genericity of this phenomenon and demonstrate novel examples for their occurrence in realistic models of biophysics. Although we elucidate the phenomenon by considering the emergence of periodicity in dependence on system parameters in a low-dimensional parameter space, the conclusions from our simple setting are shown to continue to be valid for features in a higher-dimensional feature space, as long as the feature-generating mechanism is not too extreme and the dimension of this space is not too high compared with the amount of available data. For online versions of super-paramagnetic clustering see http://stoop.ini.uzh.ch/research/clustering. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Tian, Jie; Liu, Qianqi; Wang, Xi; Xing, Ping; Yang, Zhuowen; Wu, Changjun
2017-01-20
As breast cancer tissues are stiffer than normal tissues, shear wave elastography (SWE) can locally quantify tissue stiffness and provide histological information. Moreover, tissue stiffness can be observed on three-dimensional (3D) colour-coded elasticity maps. Our objective was to evaluate the diagnostic performances of quantitative features in differentiating breast masses by two-dimensional (2D) and 3D SWE. Two hundred ten consecutive women with 210 breast masses were examined with B-mode ultrasound (US) and SWE. Quantitative features of 3D and 2D SWE were assessed, including the elastic modulus standard deviation measured on SWE-mode images (E_SDE) and measured on B-mode images (E_SDU), as well as the maximum elasticity (E_max). Adding quantitative features to B-mode US improved the diagnostic performance (p < 0.05) and reduced false-positive biopsies (p < 0.0001). The area under the receiver operating characteristic curve (AUC) of 3D SWE was similar to that of 2D SWE for E_SDE (p = 0.026) and E_SDU (p = 0.159) but inferior to that of 2D SWE for E_max (p = 0.002). Compared with E_SDU, E_SDE showed a higher AUC on 2D (p = 0.0038) and 3D SWE (p = 0.0057). Our study indicates that quantitative features of 3D and 2D SWE can significantly improve the diagnostic performance of B-mode US, especially 3D SWE E_SDE, which shows considerable clinical value.
The identification of two unusual types of homemade ammunition.
Lee, Hsieh-Chang; Meng, Hsien-Hui
2012-07-01
Illegal homemade ammunition is commonly used by criminals to commit crimes in Taiwan. Two unusual types of homemade ammunition that most closely resemble genuine ammunition are studied here. Their genuine counterparts are studied as control samples for the purpose of comparison. Unfired ammunition is disassembled, and the morphological, dimensional, and compositional features of the bullet and cartridge case are examined. Statistical tests are employed to distinguish the dimensional differences between homemade and genuine ammunition. Manufacturing marks on the head stamps of the cartridge cases are carefully examined. Compositional features of propellant powders, primer mixtures, and gunshot residues are also analyzed. The results reveal that the morphological, dimensional, and compositional features of major parts of the ammunition can be employed to differentiate homemade cartridges from genuine ones. Among these features, tool marks on the head stamps left by the bunter can be used to trace the origin of ammunition. © 2012 American Academy of Forensic Sciences.
Ensemble of sparse classifiers for high-dimensional biological data.
Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao
2015-01-01
Biological data are often high in dimension while the number of samples is small. In such cases, the performance of classification can be improved by reducing the dimension of data, which is referred to as feature selection. Recently, a novel feature selection method has been proposed utilising the sparsity of high-dimensional biological data where a small subset of features accounts for most variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets along with two other conventional classification techniques such as Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method performs more accurate classification across various cancer datasets than those conventional classification techniques.
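A minimal sketch of bootstrap aggregating over sparse linear classifiers, assuming a feature matrix X and binary labels y; the L1-penalized logistic model stands in for the paper's sparse linear solution technique, and all hyperparameters are illustrative.

```python
# Bagging an L1-sparse linear classifier for high-dimensional data (sketch).
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression

sparse_clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.5)
# note: the parameter is `base_estimator` on scikit-learn < 1.2
ensemble = BaggingClassifier(estimator=sparse_clf, n_estimators=50, max_samples=0.8)
# ensemble.fit(X_train, y_train)
# Feature selection falls out of the ensemble: features with nonzero coefficients in many
# bootstrap models (inspect ensemble.estimators_ after fitting) can be treated as selected.
```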
Automatic identification of abstract online groups
Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W
2014-04-15
Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
Intrinsic two-dimensional features as textons
NASA Technical Reports Server (NTRS)
Barth, E.; Zetzsche, C.; Rentschler, I.
1998-01-01
We suggest that intrinsic two-dimensional (i2D) features, computationally defined as the outputs of nonlinear operators that model the activity of end-stopped neurons, play a role in preattentive texture discrimination. We first show that for discriminable textures with identical power spectra the predictions of traditional models depend on the type of nonlinearity and fail for energy measures. We then argue that the concept of intrinsic dimensionality, and the existence of end-stopped neurons, can help us to understand the role of the nonlinearities. Furthermore, we show examples in which models without strong i2D selectivity fail to predict the correct ranking order of perceptual segregation. Our arguments regarding the importance of i2D features resemble the arguments of Julesz and co-workers regarding textons such as terminators and crossings. However, we provide a computational framework that identifies textons with the outputs of nonlinear operators that are selective to i2D features.
Blended particle filters for large-dimensional chaotic dynamical systems
Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.
2014-01-01
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua
2014-01-01
Objective: To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two-dimensional CT images for use in the diagnosis of lung cancer. Materials and Methods: A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant images from 252 patients (150 male, 102 female). In addition, nineteen patient information categories, which included seven demographic parameters and twelve morphological features, were also collected. The contourlet transform was used to extract fourteen types of textural features. These were then used to establish three support vector machine models. One comprised a database constructed of the nineteen collected patient information categories, another included the contourlet textural features, and the third contained both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, the Youden index, and the F-measure used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Results: Using the database containing both textural features and patient information, the sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93, respectively. These results were higher than those derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83, respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85, respectively). Using SMOTE as a pre-processing procedure, a new balanced database was generated, comprising 5,816 benign ROIs and 5,815 malignant ROIs, and the accuracy was 0.93. Conclusion: Our results indicate that combining the contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer. PMID:25250576
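A short sketch of the class-balancing and classification stage, assuming the contourlet texture features and patient-profile variables are already assembled into a matrix X with ROI labels y; the SMOTE and RBF-SVM settings are illustrative choices rather than the study's exact configuration.

```python
# SMOTE oversampling inside an SVM pipeline, evaluated with 10-fold cross-validation (sketch).
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('smote', SMOTE(random_state=0)),        # oversample the minority (benign) ROIs
    ('svm', SVC(kernel='rbf', C=1.0, probability=True)),
])
# scores = cross_val_score(pipe, X, y, cv=10, scoring='accuracy')
```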
Effective traffic features selection algorithm for cyber-attacks samples
NASA Astrophysics Data System (ADS)
Li, Yihong; Liu, Fangzheng; Du, Zhenyu
2018-05-01
Motivated by the study of defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, we calculate the change in clustering performance after removing a certain feature. Finally, we evaluate the degree of distinctiveness of each feature vector according to this result; the effective features are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features can be reduced, which in turn reduces the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
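A rough sketch of the feature-distinctiveness idea described above: score each traffic feature by how much a k-means++ clustering degrades when that feature is removed. Using the silhouette score as the clustering-performance proxy and a two-cluster setting are my simplifications, not the paper's exact criterion.

```python
# Leave-one-feature-out distinctiveness scores based on k-means++ clustering (sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def feature_distinctiveness(X, n_clusters=2, random_state=0):
    def perf(data):
        labels = KMeans(n_clusters, init='k-means++', n_init=10,
                        random_state=random_state).fit_predict(data)
        return silhouette_score(data, labels)
    base = perf(X)
    scores = []
    for j in range(X.shape[1]):
        reduced = np.delete(X, j, axis=1)          # drop feature j
        scores.append(base - perf(reduced))        # large drop => feature is distinctive
    return np.array(scores)

# Features whose score exceeds a chosen threshold would be kept as the effective feature set.
```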
Firefly Mating Algorithm for Continuous Optimization Problems
Ritthipakdee, Amarita; Premasathian, Nol; Jitkongchuen, Duangjai
2017-01-01
This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following 2 mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperms for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima. PMID:28808442
Firefly Mating Algorithm for Continuous Optimization Problems.
Ritthipakdee, Amarita; Thammano, Arit; Premasathian, Nol; Jitkongchuen, Duangjai
2017-01-01
This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following 2 mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperms for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima.
Li, Ziyi; Safo, Sandra E; Long, Qi
2017-07-11
Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related to glioblastoma. The proposed sparse PCA methods, Fused and Grouped sparse PCA, can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings and potentially providing insights on the molecular underpinnings of complex diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Tianyu; Mani, Ramesh G.; Wegscheider, Werner
2013-12-04
We present the results of a concurrent experimental study of microwave reflection and transport in the GaAs/AlGaAs two dimensional electron gas system and correlate observed features in the reflection with the observed transport features. The experimental results are compared with expectations based on theory.
Higher (odd) dimensional quantum Hall effect and extended dimensional hierarchy
NASA Astrophysics Data System (ADS)
Hasebe, Kazuki
2017-07-01
We demonstrate a dimensional ladder of higher dimensional quantum Hall effects by exploiting quantum Hall effects on arbitrary odd dimensional spheres. Non-relativistic and relativistic Landau models are analyzed on S^(2k-1) in the SO(2k-1) monopole background. The total sub-band degeneracy of the odd dimensional lowest Landau level is shown to be equal to the winding number from the base manifold S^(2k-1) to the one-dimension-higher SO(2k) gauge group. Based on the chiral Hopf maps, we clarify the underlying quantum Nambu geometry for the odd dimensional quantum Hall effect, and the resulting quantum geometry is naturally embedded also in a one-dimension-higher quantum geometry. An origin of such a dimensional ladder connecting even and odd dimensional quantum Hall effects is illuminated from the viewpoint of the spectral flow of the Atiyah-Patodi-Singer index theorem in differential topology. We also present a BF topological field theory as an effective field theory in which membranes with different dimensions undergo non-trivial linking in odd dimensional space. Finally, an extended version of the dimensional hierarchy for higher dimensional quantum Hall liquids is proposed, and its relationship to quantum anomaly and D-brane physics is discussed.
Dimensional assessment of anxiety disorders in parents and children for DSM-5.
Möller, Eline L; Majdandžić, Mirjana; Craske, Michelle G; Bögels, Susan M
2014-09-01
The current shift in the DSM towards the inclusion of a dimensional component allows clinicians and researchers to demonstrate not only the presence or absence of psychopathology in an individual, but also the degree to which the disorder and its symptoms are manifested. This study evaluated the psychometric properties and utility of a set of brief dimensional scales that assess DSM-based core features of anxiety disorders, for children and their parents. The dimensional scales and the Screen for Child Anxiety Related Emotional Disorders (SCARED-71), a questionnaire to assess symptoms of all anxiety disorders, were administered to a community sample of children (n = 382), aged 8-13 years, and their mothers (n = 285) and fathers (n = 255). The dimensional scales assess six anxiety disorders: specific phobia, agoraphobia, panic disorder, social anxiety disorder, generalized anxiety disorder, and separation anxiety disorder. Children rated their own anxiety and parents their child's anxiety. The dimensional scales demonstrated high internal consistency (α > 0.78, except for father-reported child panic disorder, owing to a lack of variation), and moderate to high levels of convergent validity (rs = 0.29-0.73). Children who exceeded the SCARED cutoffs scored higher on the dimensional scales than those who did not, providing preliminary support for the clinical sensitivity of the scales. Given their strong psychometric properties and utility for both child and parent report, addition of the dimensional scales to the DSM-5 might be an effective way to incorporate dimensional measurement into the categorical DSM-5 assessment of anxiety disorders in children. Copyright © 2014 American Psychiatric Association. All rights reserved.
Ospina, Raydonal; Frery, Alejandro C.
2016-01-01
We present a new approach for handwritten signature classification and verification based on descriptors stemming from time-causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than those of state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups. PMID:27907014
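As an illustration of the Bandt-Pompe route described above, the following is a minimal Python sketch, assuming synthetic signature coordinates and scikit-learn's OneClassSVM; the permutation-entropy order, delay, and SVM parameters are illustrative choices, and the paper's statistical complexity and Fisher information descriptors are not reproduced here.

```python
# Minimal sketch: Bandt-Pompe permutation entropy of coordinate series feeding a
# One-Class SVM. Names, data, and parameters are illustrative only.
from itertools import permutations
import numpy as np
from sklearn.svm import OneClassSVM

def permutation_entropy(x, order=4, delay=1):
    """Normalized Shannon entropy of the Bandt-Pompe ordinal-pattern distribution."""
    patterns = list(permutations(range(order)))
    counts = {p: 0 for p in patterns}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float) / n
    return -np.sum(p * np.log(p)) / np.log(len(patterns))

def features(signature_xy):
    """Entropy of the horizontal and vertical coordinates (a subset of the six descriptors)."""
    x, y = signature_xy[:, 0], signature_xy[:, 1]
    return [permutation_entropy(x), permutation_entropy(y)]

# genuine_signatures: list of (n_points, 2) arrays from one writer (hypothetical data)
genuine_signatures = [np.cumsum(np.random.randn(300, 2), axis=0) for _ in range(20)]
X_train = np.array([features(s) for s in genuine_signatures])
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)
print(clf.predict(X_train))  # +1 = accepted as genuine, -1 = flagged
```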
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
A high-dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied to publicly available datasets, where it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
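A minimal sketch of the gene-masking idea, assuming a scikit-learn classifier and synthetic data; the GA operators (truncation selection, one-point crossover, bit-flip mutation) and all parameters below are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of "gene masking": a binary-encoded genetic algorithm that selects a
# feature subset by cross-validated accuracy during training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=100, n_informative=10, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(30, X.shape[1]))          # random binary masks
for generation in range(20):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]          # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.01              # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()), "cv accuracy:", fitness(best))
```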
Phase space interrogation of the empirical response modes for seismically excited structures
NASA Astrophysics Data System (ADS)
Paul, Bibhas; George, Riya C.; Mishra, Sudib K.
2017-07-01
Conventional Phase Space Interrogation (PSI) for structural damage assessment relies on exciting the structure with a low dimensional chaotic waveform, thereby significantly limiting its applicability to large structures. The PSI technique is presently extended to structures subjected to seismic excitations. The high dimensionality of the phase space for seismic responses is overcome by Empirical Mode Decomposition (EMD), which decomposes the responses into a number of intrinsic low dimensional oscillatory modes, referred to as Intrinsic Mode Functions (IMFs). Along with their low dimensionality, a few IMFs retain sufficient information about the system dynamics to reflect the damage-induced changes. The conflicting demands of low dimensionality and sufficiency of dynamic information are balanced by the optimal choice of the IMF(s), which is shown to be the third/fourth IMFs. The optimal IMF(s) are employed for the reconstruction of the phase space attractor following Takens' embedding theorem. The widely referred Changes in Phase Space Topology (CPST) feature is then employed on these phase portraits to derive the damage sensitive feature, referred to as the CPST of the IMFs (CPST-IMF). The legitimacy of the CPST-IMF as a damage sensitive feature is established by assessing its variation with a number of damage scenarios benchmarked in the IASC-ASCE building. The damage localization capability of the feature, its remarkable tolerance to noise contamination, and its robustness under different seismic excitations are demonstrated.
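A minimal sketch of the EMD-plus-delay-embedding pipeline, assuming the third-party PyEMD package and a synthetic response signal; the chosen IMF index, delay, and embedding dimension are illustrative only.

```python
# Sketch: decompose a response signal into IMFs with EMD, then reconstruct a
# phase-space attractor from one IMF by Takens delay embedding.
import numpy as np
from PyEMD import EMD   # PyEMD ("EMD-signal") package assumed installed

t = np.linspace(0, 20, 4000)
response = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 5.0 * t) \
    + 0.1 * np.random.randn(t.size)          # stand-in for a measured seismic response

imfs = EMD().emd(response)                   # rows = intrinsic mode functions
imf = imfs[min(2, len(imfs) - 1)]            # third IMF if available (illustrative choice)

def delay_embed(x, m=3, tau=10):
    """Takens delay embedding: rows are [x(i), x(i+tau), ..., x(i+(m-1)tau)]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

attractor = delay_embed(imf)                 # points of the reconstructed phase portrait
print(attractor.shape)
```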
Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitits, Theodoros N
2015-09-01
The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1- and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performance compared with those trained with conventional 2D features. For instance, a neural network classifier showed a 12% improvement in the area under the receiver operator characteristic curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, as per McNemar's tests. This study shows that 3D textural features extracted from conventional T1- and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.
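A hedged sketch of the validation strategy described above (leave-one-out predictions for two feature sets, followed by McNemar's test computed by hand with a continuity correction); the feature matrices and classifier settings below are random placeholders, not the study's texture data.

```python
# Sketch: compare classifiers trained on "3D" vs "2D" feature sets via
# leave-one-out cross-validation, then test the difference with McNemar's test.
import numpy as np
from scipy.stats import chi2
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.neural_network import MLPClassifier

X3d, y = make_classification(n_samples=48, n_features=40, n_informative=12, random_state=1)
X2d = X3d[:, :20]                                   # weaker, lower-dimensional stand-in

loo = LeaveOneOut()
pred_3d = cross_val_predict(MLPClassifier(max_iter=2000, random_state=0), X3d, y, cv=loo)
pred_2d = cross_val_predict(MLPClassifier(max_iter=2000, random_state=0), X2d, y, cv=loo)

# McNemar: b = cases only the 3D model gets right, c = cases only the 2D model gets right
b = np.sum((pred_3d == y) & (pred_2d != y))
c = np.sum((pred_3d != y) & (pred_2d == y))
stat = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
print("b =", b, "c =", c, "p =", chi2.sf(stat, df=1))
```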
NASA Technical Reports Server (NTRS)
Eigen, D. J.; Fromm, F. R.; Northouse, R. A.
1974-01-01
A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.
A stereo remote sensing feature selection method based on artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi
2014-05-01
To improve the efficiency of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. First, the three-dimensional structural characteristics can be analyzed with 3D Zernike descriptors (3DZD). However, different parameters of the 3DZD describe different levels of complexity of the three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Second, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features contains a great deal of redundant information, and this redundancy may not improve the classification accuracy and may even degrade it. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method can effectively improve both the computational efficiency and the classification accuracy.
Machine Learning for Big Data: A Study to Understand Limits at Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R.; Del-Castillo-Negrete, Carlos Emilio
This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and to train on larger samples worth it? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) the accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy or learning; (ii) the richness of the feature space dictates performance, both accuracy and training time; (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the effort required for feature computation and training; (iv) the accuracy of the learning algorithms drops significantly when multi-class learners are trained on the same feature matrix; and (v) learning algorithms perform well when the categories in the labeled data are independent (i.e., no relationship or hierarchy exists among categories).
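A small sketch of the kind of scaling experiment posed in the first question above (accuracy and training time versus sample size and feature dimensionality), using a synthetic dataset and logistic regression as stand-ins for the report's benchmarks.

```python
# Sketch: accuracy and training time as sample size and feature count grow.
# Data and model are placeholders, not the report's actual benchmarks.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

for n_samples in (1_000, 5_000, 20_000):
    for n_features in (10, 100, 500):
        X, y = make_classification(n_samples=n_samples, n_features=n_features,
                                   n_informative=min(10, n_features),
                                   n_redundant=0, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
        t0 = time.time()
        clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
        print(f"n={n_samples:>6} d={n_features:>4} "
              f"acc={clf.score(Xte, yte):.3f} train_s={time.time() - t0:.2f}")
```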
Kurosumi, M; Mizukoshi, K
2018-05-01
The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics, such as wrinkles, sagging skin, etc. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features changed with age. We analyzed the faces of 280 Japanese women aged 20-69 years three-dimensionally and used principal component analysis to establish the shape features that characterized individual differences. We also evaluated the relationships between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age were a decreased volume of the upper face and an increased volume of the whole cheeks and around the chin. Changes in older people were an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurements, such as the increase or decrease in cheek volume caused by soft tissue changes or skeletal-based changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
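A minimal sketch of the principal-component approach to shape features, assuming flattened 3D landmark coordinates as input; the landmark array and ages below are random placeholders rather than the study's scan data.

```python
# Sketch: PCA of flattened 3D landmark coordinates as "shape features",
# followed by a simple correlation of each component score with age.
import numpy as np
from sklearn.decomposition import PCA

n_subjects, n_landmarks = 280, 500
landmarks = np.random.randn(n_subjects, n_landmarks, 3)      # (x, y, z) per landmark
ages = np.random.uniform(20, 69, size=n_subjects)

X = landmarks.reshape(n_subjects, -1)                        # one flattened row per face
pca = PCA(n_components=10).fit(X)
scores = pca.transform(X)                                    # shape-feature scores

for k in range(scores.shape[1]):
    r = np.corrcoef(scores[:, k], ages)[0, 1]
    print(f"PC{k + 1}: explained var {pca.explained_variance_ratio_[k]:.3f}, "
          f"corr with age {r:+.2f}")
```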
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.
Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin
We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
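A hedged sketch of the FANS recipe as summarized above: per-feature class-conditional kernel density estimates, a log density-ratio transform, then L1-penalized logistic regression. Bandwidths, the penalty level, and the data are illustrative, and the paper's sample-splitting and tuning details are not reproduced.

```python
# Sketch of the FANS idea: replace each feature by its estimated log marginal
# density ratio, then fit an L1-penalized logistic regression on the result.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

def log_ratio_transform(X_fit, y_fit):
    """Return a function mapping x_j -> log f1_j(x_j) - log f0_j(x_j) per feature."""
    kdes = [(gaussian_kde(X_fit[y_fit == 0, j]), gaussian_kde(X_fit[y_fit == 1, j]))
            for j in range(X_fit.shape[1])]
    def transform(X_new):
        eps = 1e-12
        cols = [np.log(k1(X_new[:, j]) + eps) - np.log(k0(X_new[:, j]) + eps)
                for j, (k0, k1) in enumerate(kdes)]
        return np.column_stack(cols)
    return transform

transform = log_ratio_transform(Xtr, ytr)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(transform(Xtr), ytr)
print("test accuracy:", clf.score(transform(Xte), yte))
print("features kept:", int(np.sum(clf.coef_ != 0)))
```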
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously reducing the feature dimension. Kernel Marginal Fisher Analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. In order to directly extract nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. Therefore, the optimal low-dimensional features are obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.
Chan, Louis K H; Hayward, William G
2009-02-01
In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. Copyright 2009 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
van de Moortele, Tristan; Nemes, Andras; Wendt, Christine; Coletti, Filippo
2016-11-01
The morphological features of the airway tree directly affect the air flow features during breathing, which determines the gas exchange and inhaled particle transport. Lung disease, Chronic Obstructive Pulmonary Disease (COPD) in this study, affects the structural features of the lungs, which in turn negatively affects the air flow through the airways. Here bronchial tree air volume geometries are segmented from Computed Tomography (CT) scans of healthy and diseased subjects. Geometrical analysis of the airway centerlines and corresponding cross-sectional areas provide insight into the specific effects of COPD on the airway structure. These geometries are also used to 3D print anatomically accurate, patient specific flow models. Three-component, three-dimensional velocity fields within these models are acquired using Magnetic Resonance Imaging (MRI). The three-dimensional flow fields provide insight into the change in flow patterns and features. Additionally, particle trajectories are determined using the velocity fields, to identify the fate of therapeutic and harmful inhaled aerosols. Correlation between disease-specific and patient-specific anatomical features with dysfunctional airflow patterns can be achieved by combining geometrical and flow analysis.
Shadows of rotating five-dimensional charged EMCS black holes
NASA Astrophysics Data System (ADS)
Amir, Muhammed; Singh, Balendra Pratap; Ghosh, Sushant G.
2018-05-01
Higher-dimensional theories admit astrophysical objects like supermassive black holes, which are rather different from standard ones, and their gravitational lensing features deviate from general relativity. It is well known that a black hole shadow is a dark region due to the falling geodesics of photons into the black hole and, if detected, a black hole shadow could be used to determine which theory of gravity is consistent with observations. Measurements of the shadow sizes around black holes can help to evaluate various parameters of the black hole metric. We study the shapes of the shadow cast by rotating five-dimensional charged Einstein-Maxwell-Chern-Simons (EMCS) black holes, which are characterized by four parameters, i.e., mass, two spins, and charge; here the two spin parameters are set equal. We integrate the null geodesic equations and derive an analytical formula for the shadow of the five-dimensional EMCS black hole, which shows that the size of the black hole shadow is affected by the charge as well as the spin. The shadow is a dark zone covered by a deformed circle, and the size of the shadow decreases with an increase in the charge q when compared with the five-dimensional Myers-Perry black hole. Interestingly, the distortion increases with charge q. The effect of these parameters on the shape and size of the naked singularity shadow of the five-dimensional EMCS black hole is also discussed.
van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark
2010-07-01
This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
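A minimal sketch contrasting hard assignment with one form of soft (kernel) assignment of local descriptors to a visual-word codebook; the codebook, descriptors, and kernel width below are placeholders, not the paper's exact formulation.

```python
# Sketch: hard vs soft (Gaussian-kernel) assignment of descriptors to visual words.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 128))         # K visual words x descriptor dim
descriptors = rng.standard_normal((500, 128))      # local features from one image

d = cdist(descriptors, codebook)                   # (n_descriptors, K) distances

# Hard assignment: each descriptor votes only for its single nearest word.
hard_hist = np.bincount(d.argmin(axis=1), minlength=codebook.shape[0]).astype(float)
hard_hist /= hard_hist.sum()

# Soft assignment (kernel codebook): Gaussian-weighted votes spread over all words.
sigma = 10.0                                       # kernel width (assumed)
w = np.exp(-d ** 2 / (2 * sigma ** 2))
w /= w.sum(axis=1, keepdims=True)                  # normalize per descriptor
soft_hist = w.sum(axis=0) / w.sum()

print(hard_hist[:5], soft_hist[:5])
```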
NASA Astrophysics Data System (ADS)
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only the spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. There are many kinds of textural information that can be derived from VHR satellite images depending on the algorithm used. However, extraction and evaluation of textural information is generally a time-consuming process, especially for the large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Not only spectral information but also textural information was used during the classification. For the textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in the classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The HDMR method was recently proposed as an efficient tool to capture the input-output relationships of high-dimensional systems for many problems in science and engineering, and it is designed to improve the efficiency of deducing high-dimensional behaviors. The method is formed by a particular organization of low dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
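For reference, the HDMR expansion underlying this sensitivity-based feature selection is commonly written as follows (a standard form from the HDMR literature, not a formula specific to this study):

\[ f(x_1,\dots,x_n) \;=\; f_0 \;+\; \sum_{i=1}^{n} f_i(x_i) \;+\; \sum_{1 \le i < j \le n} f_{ij}(x_i,x_j) \;+\; \cdots \;+\; f_{12\cdots n}(x_1,\dots,x_n), \]

where f_0 is the mean response, the first-order terms f_i(x_i) capture the independent contribution of each feature, and the higher-order terms capture cooperative contributions; truncating after low-order terms yields the per-feature sensitivities used to rank and select features.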
Effects of anisotropy on the two-dimensional inversion procedure
NASA Astrophysics Data System (ADS)
Heise, Wiebke; Pous, Jaume
2001-12-01
In this paper we show some of the effects that appear in magnetotelluric measurements over 2-D anisotropic structures, and propose a procedure to recover the anisotropy using 2-D inversion algorithms for isotropic models. First, we see how anisotropy affects the usual interpretation steps: dimensionality analysis and 2-D inversion. Two models containing general 2-D azimuthal anisotropic features were chosen to illustrate this approach: an anisotropic block and an anisotropic layer, both forming part of general 2-D models. In addition, a third model with dipping anisotropy was studied. For each model we examined the influence of various anisotropy strikes and resistivity contrasts on the dimensionality analysis and on the behaviour of the induction arrows. We found that, when the anisotropy ratio is higher than five, even if the strike is frequency-dependent it is possible to decide on a direction close to the direction of anisotropy. Then, if the data are rotated to this angle, a 2-D inversion reproduces the anisotropy reasonably well by means of macro-anisotropy. This strategy was tested on field data where anisotropy had been previously recognized.
Numerical study of blast characteristics from detonation of homogeneous explosives
NASA Astrophysics Data System (ADS)
Balakrishnan, Kaushik; Genin, Franklin; Nance, Doug V.; Menon, Suresh
2010-04-01
A new robust numerical methodology is used to investigate the propagation of blast waves from homogeneous explosives. The gas-phase governing equations are solved using a hybrid solver that combines a higher-order shock capturing scheme with a low-dissipation central scheme. Explosives of interest include Nitromethane, Trinitrotoluene, and High-Melting Explosive. The shock overpressure and total impulse are estimated at different radial locations and compared for the different explosives. An empirical scaling correlation is presented for the shock overpressure, incident positive phase pressure impulse, and total impulse. The role of hydrodynamic instabilities to the blast effects of explosives is also investigated in three dimensions, and significant mixing between the detonation products and air is observed. This mixing results in afterburn, which is found to augment the impulse characteristics of explosives. Furthermore, the impulse characteristics are also observed to be three-dimensional in the region of the mixing layer. This paper highlights that while some blast features can be successfully predicted from simple one-dimensional studies, the growth of hydrodynamic instabilities and the impulsive loading of homogeneous explosives require robust three-dimensional investigation.
Cnidarian Nerve Nets and Neuromuscular Efficiency.
Satterlie, Richard A
2015-12-01
Cnidarians are considered "nerve net animals" even though their nervous systems include various forms of condensation and centralization. Yet, their broad, two-dimensional muscle sheets are innervated by diffuse nerve nets. Do the motor nerve nets represent a primitive organization of multicellular nervous systems, do they represent a consequence of radial symmetry, or do they offer an efficient way to innervate a broad, two-dimensional muscle sheet, in which excitation of the muscle sheet can come from multiple sites of initiation? Regarding the primitive nature of cnidarian nervous systems, distinct neuronal systems exhibit some adaptations that are well known in higher animals, such as the use of oversized neurons with increased speed of conduction, and condensation of neurites into nerve-like tracts. A comparison of neural control of two-dimensional muscle sheets in a mollusc and jellyfish suggests that a possible primitive feature of cnidarian neurons may be a lack of regional specialization into conducting and transmitting regions. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Spectral Dimensionality and Scale of Urban Radiance
NASA Technical Reports Server (NTRS)
Small, Christopher
2001-01-01
Characterization of urban radiance and reflectance is important for understanding the effects of solar energy flux on the urban environment as well as for satellite mapping of urban settlement patterns. Spectral mixture analyses of Landsat and Ikonos imagery suggest that the urban radiance field can very often be described with combinations of three or four spectral endmembers. Dimensionality estimates of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) radiance measurements of urban areas reveal the existence of 30 to 60 spectral dimensions. The extent to which broadband imagery collected by operational satellites can represent the higher dimensional mixing space is a function of both the spatial and spectral resolution of the sensor. AVIRIS imagery offers the spatial and spectral resolution necessary to investigate the scale dependence of the spectral dimensionality. Dimensionality estimates derived from Minimum Noise Fraction (MNF) eigenvalue distributions show a distinct scale dependence for AVIRIS radiance measurements of Milpitas, California. Apparent dimensionality diminishes from almost 40 to less than 10 spectral dimensions between scales of 8000 m and 300 m. The 10 to 30 m scale of most features in urban mosaics results in substantial spectral mixing at the 20 m scale of high altitude AVIRIS pixels. Much of the variance at pixel scales is therefore likely to result from actual differences in surface reflectance at pixel scales. Spatial smoothing and spectral subsampling of AVIRIS spectra both result in substantial loss of information and reduction of apparent dimensionality, but the primary spectral endmembers in all cases are analogous to those found in global analyses of Landsat and Ikonos imagery of other urban areas.
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, a 3D landscape model is established from a combination of DEM and DOM based on digital photogrammetry, which uses aerial image data to produce the "4D" products (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic, and DRG: Digital Raster Graphic). For buildings and other man-made features of interest to users, 3D reconstruction of the real features is achieved with digital close-range photogrammetry through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, image matching, and further processing. Finally, we combine the three-dimensional background with locally measured real images of these large geographic datasets and realize the integration of measurable real imagery and the 4D products. The article discusses the complete workflow and the technology involved, and achieves the three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape with the metric buildings.
Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. Results: In the large U.S. data set, sample high-performance results include AUC(0.632+) = 0.88 with a 95% empirical bootstrap interval of [0.787; 0.895] for 13 ARD-selected features and AUC(0.632+) = 0.87 with interval [0.817; 0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC(0.632+) = 0.90 with interval [0.847; 0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate the capability of the new methods to match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497
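A hedged sketch of the two nonlinear DR methods named above, using scikit-learn's TSNE and SpectralEmbedding (a Laplacian-eigenmaps implementation) with a simple classifier on the mapped features; the data are random placeholders, and embedding the full set before cross-validation is a simplification of the paper's 0.632+ bootstrap protocol (neither method provides an out-of-sample transform).

```python
# Sketch: map a high-dimensional CADx-like feature space to 2D with t-SNE and
# Laplacian eigenmaps, then evaluate a classifier on the mapped features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import TSNE, SpectralEmbedding
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1126, n_features=81, n_informative=15, random_state=0)

emb_tsne = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
emb_le = SpectralEmbedding(n_components=2, random_state=0).fit_transform(X)

for name, Z in [("t-SNE", emb_tsne), ("Laplacian eigenmaps", emb_le)]:
    auc = cross_val_score(LinearDiscriminantAnalysis(), Z, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```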
Decorrelation of the true and estimated classifier errors in high-dimensional settings.
Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R
2007-01-01
The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
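The variance decomposition invoked above can be written explicitly; for the true error ε and the estimated error ε̂, both random over samples (a standard identity, stated here for clarity rather than taken from the paper),

\[ \operatorname{Var}(\hat{\varepsilon} - \varepsilon) \;=\; \operatorname{Var}(\hat{\varepsilon}) \;+\; \operatorname{Var}(\varepsilon) \;-\; 2\,\rho\,\sigma_{\hat{\varepsilon}}\,\sigma_{\varepsilon}, \]

so even with a modest variance of the estimated error, a correlation ρ that falls toward zero in high-dimensional settings widens the deviation distribution and degrades the precision of error estimation.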
Role of hydrodynamic viscosity on phonon transport in suspended graphene
NASA Astrophysics Data System (ADS)
Li, Xun; Lee, Sangyeop
2018-03-01
When phonon transport is in the hydrodynamic regime, the thermal conductivity exhibits peculiar dependences on temperature (T) and sample width (W). These features were used in the past to experimentally confirm hydrodynamic phonon transport in three-dimensional bulk materials. Suspended graphene was recently predicted to exhibit strong hydrodynamic features in thermal transport at much higher temperatures than the three-dimensional bulk materials, but its experimental confirmation requires quantitative guidance by theory and simulation. Here we quantitatively predict those peculiar dependences using the Monte Carlo solution of the Peierls-Boltzmann equation with an ab initio full three-phonon scattering matrix. The thermal conductivity is found to increase as T^α, where α ranges from 1.89 to 2.49 depending on the sample width at low temperatures, much larger than the 1.68 of the ballistic case. The thermal conductivity has a width dependence of W^{1.17} at 100 K, clearly distinguished from the sublinear dependence of the ballistic-diffusive regime. These peculiar features are explained with a phonon viscous damping effect of the hydrodynamic regime. We derive an expression for the phonon hydrodynamic viscosity from the Peierls-Boltzmann equation, and discuss the fact that the phonon viscous damping explains well those peculiar dependences of thermal conductivity at 100 K. The phonon viscous damping still causes significant thermal resistance at a temperature of 300 K and a sample width of around 1 µm, even though the hydrodynamic regime is not dominant over other regimes under this condition.
Three Dimensional Imaging with Multiple Wavelength Speckle Interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernacki, Bruce E.; Cannon, Bret D.; Schiffern, John T.
2014-05-28
We present the design, modeling, construction, and results of a three-dimensional imager based upon multiple-wavelength speckle interferometry. A surface under test is illuminated with tunable laser light in a Michelson interferometer configuration while a speckled image is acquired at each laser frequency step. The resulting hypercube is Fourier transformed in the frequency dimension and the beat frequencies that result map the relative offsets of surface features. Synthetic wavelengths resulting from the laser tuning can probe features ranging from 18 microns to hundreds of millimeters. Three dimensional images will be presented along with modeling results.
Huo, Guanying
2017-01-01
As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and can therefore overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three well-known image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher than those of the other four methods in most cases. PMID:28316614
Shock induced damage in copper: A before and after, three-dimensional study
NASA Astrophysics Data System (ADS)
Menasche, David B.; Lind, Jonathan; Li, Shiu Fai; Kenesei, Peter; Bingert, John F.; Lienert, Ulrich; Suter, Robert M.
2016-04-01
We report on the microstructural features associated with the formation of incipient spall and damage in a fully recrystallized, high purity copper sample. Before and after ballistic shock loading, approximately 0.8 mm^3 of the sample's crystal lattice orientation field is mapped using non-destructive near-field High Energy Diffraction Microscopy. Absorption contrast tomography is used to image voids after loading. This non-destructive interrogation of damage initiation allows for novel characterization of spall points vis-a-vis microstructural features and a fully 3D examination of microstructural topology and its influence on incipient damage. The spalled region is registered with and mapped back onto the pre-shock orientation field. As expected, the great majority of voids occur at grain boundaries and higher order microstructural features; however, we find no statistical preference for particular grain boundary types. The damaged region contains a large volume of Σ3 (60° ⟨111⟩) connected domains with a large area fraction of incoherent Σ3 boundaries.
Three-dimensional freak waves and higher-order wave-wave resonances
NASA Astrophysics Data System (ADS)
Badulin, S. I.; Ivonin, D. V.; Dulov, V. A.
2012-04-01
Quite often the freak wave phenomenon is associated with the mechanism of modulational (Benjamin-Feir) instability resulting from resonances of four waves with close directions and scales. This weakly nonlinear model reflects some important features of the phenomenon and is discussed in a great number of studies as the initial stage of evolution of essentially nonlinear water waves. Higher-order wave-wave resonances attract incomparably less attention. More complicated mathematics and physics only partially explain this disregard. The true reason is a lack of adequate experimental background for the study of essentially three-dimensional water wave dynamics. We start our study with the classic example of the New Year Wave. Two extreme events of the same record, the famous 26.5-meter wave and a smaller 18.5-meter wave (formally not a freak wave), are shown to have pronounced features of essentially three-dimensional five-wave resonant interactions. The quasi-spectra approach is used for the data analysis in order to adequately resolve frequencies near the spectral peak fp ≈ 0.057 Hz and, thus, to analyze possible modulations of the dominant wave component. In terms of the quasi-spectra, the above two anomalous waves show the co-existence of the peak harmonic and one at frequency f5w = (3/2)fp, which corresponds to the maximum of the five-wave instability of weakly nonlinear waves. No pronounced marks of the usually discussed Benjamin-Feir instability are found in the record, which is easy to explain: the spectral peak frequency fp corresponds to the non-dimensional depth parameter kD ≈ 0.92 (k is the wavenumber, D ≈ 70 meters is the depth at the Statoil platform Draupner site), which is well below the shallow-water limit of the instability, kD = 1.36. A unique collection of wave records from the Marine Hydrophysical Institute at the Katsiveli platform (Black Sea) has been analyzed in view of the above findings on the possible impact of the five-wave instability on freak wave occurrence. The data cover the period October 14 - November 6, 2009 almost continuously. An antenna of six resistance wave gauges (a pentagon with one center gauge) is used to gain information on wave directions. Wave conditions vary from perfectly calm to storms with significant wave heights up to Hs = 1.7 meters and wind speeds of 15 m/s. Measurements at a sampling frequency of 10 Hz for dominant frequencies of 0.1-0.2 Hz identified 40 freak wave events (criterion H/Hs > 2) and showed no definite dependence on Hs. Data processing within the frequency quasi-spectra approach and directional spectra reconstructions found pronounced features of essentially three-dimensional anomalous waves. All the events are associated with a dramatic widening of the instantaneous frequency spectra in the range fp-f5w and with stronger directional spreading. In contrast, the classic Benjamin-Feir modulations show no definite link with the events and can likely be treated as a dynamically neutral part of the wave field. The apparent contradiction with the recent study (Saprykina, Dulov, Kuznetsov, Smolov, 2010) based on the same data collection can be partially explained by features of the data processing. The physical roots of the inconsistency should be detailed in further studies. The work was supported by the Russian government contract 11.G34.31.0035 (signed 25 November 2010), Russian Foundation for Basic Research grant 11-05-01114-a, Ukrainian State Agency of Science, Innovations and Information under Contract M/412-2011 and ONR grant N000141010991. The authors gratefully acknowledge the continuing support of these foundations.
The density-matrix renormalization group: a short introduction.
Schollwöck, Ulrich
2011-07-13
The density-matrix renormalization group (DMRG) method has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. The DMRG is a method that shares features of a renormalization group procedure (which here generates a flow in the space of reduced density operators) and of a variational method that operates on a highly interesting class of quantum states, so-called matrix product states (MPSs). The DMRG method is presented here entirely in the MPS language. While the DMRG generally fails in larger two-dimensional systems, the MPS picture suggests a straightforward generalization to higher dimensions in the framework of tensor network states. The resulting algorithms, however, suffer from difficulties absent in one dimension, apart from a much more unfavourable efficiency, such that their ultimate success remains far from clear at the moment.
On flaw tolerance of nacre: a theoretical study
Shao, Yue; Zhao, Hong-Ping; Feng, Xi-Qiao
2014-01-01
As a natural composite, nacre has an elegant staggered ‘brick-and-mortar’ microstructure consisting of mineral platelets glued by organic macromolecules, which endows the material with superior mechanical properties to achieve its biological functions. In this paper, a microstructure-based crack-bridging model is employed to investigate how the strength of nacre is affected by pre-existing structural defects. Our analysis demonstrates that owing to its special microstructure and the toughening effect of platelets, nacre has a superior flaw-tolerance feature. The maximal crack size that does not evidently reduce the tensile strength of nacre is up to tens of micrometres, about three orders of magnitude higher than that of pure aragonite. Through dimensional analysis, a non-dimensional parameter is proposed to quantify the flaw-tolerance ability of nacreous materials in a wide range of structural parameters. This study provides inspiration for the optimal design of advanced biomimetic composites. PMID:24402917
Bronze-mean hexagonal quasicrystal
NASA Astrophysics Data System (ADS)
Dotera, Tomonari; Bekku, Shinichi; Ziherl, Primož
2017-10-01
The most striking feature of conventional quasicrystals is their non-traditional symmetry characterized by icosahedral, dodecagonal, decagonal or octagonal axes. The symmetry and the aperiodicity of these materials stem from an irrational ratio of two or more length scales controlling their structure, the best-known examples being the Penrose and the Ammann-Beenker tiling as two-dimensional models related to the golden and the silver mean, respectively. Surprisingly, no other metallic-mean tilings have been discovered so far. Here we propose a self-similar bronze-mean hexagonal pattern, which may be viewed as a projection of a higher-dimensional periodic lattice with a Koch-like snowflake projection window. We use numerical simulations to demonstrate that a disordered variant of this quasicrystal can be materialized in soft polymeric colloidal particles with a core-shell architecture. Moreover, by varying the geometry of the pattern we generate a continuous sequence of structures, which provide an alternative interpretation of quasicrystalline approximants observed in several metal-silicon alloys.
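For context, the metallic means referred to above are the positive roots of x^2 = nx + 1 (standard definitions, not taken from the paper):

\[ x^2 = nx + 1 \;\Rightarrow\; x_n = \frac{n + \sqrt{n^2 + 4}}{2}, \qquad x_1 = \frac{1+\sqrt{5}}{2}\ (\text{golden}), \quad x_2 = 1+\sqrt{2}\ (\text{silver}), \quad x_3 = \frac{3+\sqrt{13}}{2}\ (\text{bronze}). \]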
A Review of Three-Dimensional Printing in Tissue Engineering.
Sears, Nick A; Seshadri, Dhruv R; Dhavalikar, Prachi S; Cosgriff-Hernandez, Elizabeth
2016-08-01
Recent advances in three-dimensional (3D) printing technologies have led to a rapid expansion of applications from the creation of anatomical training models for complex surgical procedures to the printing of tissue engineering constructs. In addition to achieving the macroscale geometry of organs and tissues, a print layer thickness as small as 20 μm allows for reproduction of the microarchitectures of bone and other tissues. Techniques with even higher precision are currently being investigated to enable reproduction of smaller tissue features such as hepatic lobules. Current research in tissue engineering focuses on the development of compatible methods (printers) and materials (bioinks) that are capable of producing biomimetic scaffolds. In this review, an overview of current 3D printing techniques used in tissue engineering is provided with an emphasis on the printing mechanism and the resultant scaffold characteristics. Current practical challenges and technical limitations are emphasized and future trends of bioprinting are discussed.
Topological phases in frustrated synthetic ladders with an odd number of legs
NASA Astrophysics Data System (ADS)
Barbarino, Simone; Dalmonte, Marcello; Fazio, Rosario; Santoro, Giuseppe E.
2018-01-01
The realization of the Hofstadter model in a strongly anisotropic ladder geometry has now become possible in one-dimensional optical lattices with a synthetic dimension. In this work, we show how the Hofstadter Hamiltonian in such ladder configurations hosts a topological phase of matter which is radically different from its two-dimensional counterpart. This topological phase stems directly from the hybrid nature of the ladder geometry and is protected by a properly defined inversion symmetry. We start our analysis by considering the paradigmatic case of a three-leg ladder which supports a topological phase exhibiting the typical features of topological states in one dimension: robust fermionic edge modes, a degenerate entanglement spectrum, and a nonzero Zak phase; then, we generalize our findings—addressable in the state-of-the-art cold-atom experiments—to ladders with a higher number of legs.
Attentional Bias in Human Category Learning: The Case of Deep Learning.
Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José
2018-01-01
Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated humanlike performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. Third, we show that even BP can exhibit human like learning differences between integral and separable category structures when high dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached resulting in rapid asymptotic learning.
Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In the FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described in terms of two classes, the desired class and the background clutter, such that each basis function best represents one class while carrying the least amount of information about the second class. By selecting a few eigenvectors that are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces the data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden via the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
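A minimal NumPy sketch of the Fukunaga-Koontz construction for the two-class (desired class versus background clutter) setting described above; the data are synthetic placeholders and the number of retained eigenvectors is an arbitrary choice.

```python
# Sketch of the Fukunaga-Koontz transform for two classes. After whitening the
# summed covariance, the two whitened class covariances share eigenvectors with
# eigenvalues lambda and 1 - lambda, so eigenvectors with lambda near 1 best
# represent the desired class and those near 0 the clutter.
import numpy as np

rng = np.random.default_rng(0)
X_target = rng.standard_normal((500, 50)) @ rng.standard_normal((50, 50))   # desired class
X_clutter = rng.standard_normal((800, 50))                                  # background clutter

S1 = np.cov(X_target, rowvar=False)
S2 = np.cov(X_clutter, rowvar=False)

# Whitening transform P such that P (S1 + S2) P.T = I
evals, evecs = np.linalg.eigh(S1 + S2)
P = np.diag(evals ** -0.5) @ evecs.T

lam, V = np.linalg.eigh(P @ S1 @ P.T)       # eigenvalues lie in [0, 1], ascending
order = np.argsort(lam)[::-1]
W = V[:, order[:5]].T @ P                   # keep the 5 most target-like directions

reduced = X_target @ W.T                    # dimensionality-reduced samples
print(reduced.shape, lam[order[:5]])
```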
Features of Discontinuous Galerkin Algorithms in Gkeyll, and Exponentially-Weighted Basis Functions
NASA Astrophysics Data System (ADS)
Hammett, G. W.; Hakim, A.; Shi, E. L.
2016-10-01
There are various versions of Discontinuous Galerkin (DG) algorithms that have interesting features that could help with challenging higher-dimensional kinetic problems (such as edge turbulence in tokamaks and stellarators). We are developing the gyrokinetic code Gkeyll based on DG methods. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck for exascale computing). The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which alternatively can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from Gaussian quadrature similar to that employed in popular δf continuum gyrokinetic codes. We show some tests for a 1D Spitzer-Härm heat flux problem, which requires good resolution for the tail. For two velocity dimensions, this approach could lead to a factor of 10 or more speedup. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
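A generic numerical sketch (not Gkeyll code) of the quadrature idea mentioned above: with a Maxwellian weight, Gauss-Hermite nodes and weights evaluate velocity-space integrals exactly when the remaining factor is a low-order polynomial; the thermal speed and node count are illustrative.

```python
# Generic sketch: Gauss-Hermite quadrature for Maxwellian-weighted integrals
#   integral g(v) exp(-v^2 / (2 vt^2)) dv,
# exact for low-order polynomial g after the substitution v = sqrt(2) vt x.
import numpy as np

vt = 1.3                                               # thermal speed (illustrative)
nodes, weights = np.polynomial.hermite.hermgauss(8)    # weight exp(-x^2)

def maxwellian_weighted_integral(g):
    v = np.sqrt(2.0) * vt * nodes
    return np.sqrt(2.0) * vt * np.sum(weights * g(v))

# Check against the exact second moment: integral v^2 exp(-v^2/(2 vt^2)) dv = sqrt(2 pi) vt^3
print(maxwellian_weighted_integral(lambda v: v ** 2), np.sqrt(2 * np.pi) * vt ** 3)
```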
Garcia-Vicente, Ana María; Molina, David; Pérez-Beteta, Julián; Amo-Salas, Mariano; Martínez-González, Alicia; Bueno, Gloria; Tello-Galán, María Jesús; Soriano-Castrejón, Ángel
2017-12-01
To study the influence of dual-time-point 18F-FDG PET/CT on textural features and SUV-based variables, and the relations among them. Fifty-six patients with locally advanced breast cancer (LABC) were prospectively included. All of them underwent a standard 18F-FDG PET/CT (PET-1) and a delayed acquisition (PET-2). After segmentation, SUV variables (SUVmax, SUVmean, and SUVpeak), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were obtained. Eighteen three-dimensional (3D) textural measures were computed, including run-length matrix (RLM) features, co-occurrence matrix (CM) features, and energies. Differences between all PET-derived variables obtained in PET-1 and PET-2 were studied. Significant differences were found between the SUV-based parameters and MTV obtained at the two time points, with higher values of the SUV-based variables and lower MTV in PET-2 with respect to PET-1. Regarding the textural parameters obtained in the dual-time-point acquisition, significant differences were found for the short run emphasis, low gray-level run emphasis, short run high gray-level emphasis, run percentage, long run emphasis, gray-level non-uniformity, homogeneity, and dissimilarity. Textural variables showed relations with MTV and TLG. Significant differences in textural features were found in dual-time-point 18F-FDG PET/CT. Thus, a dynamic behavior of metabolic characteristics should be expected, with higher heterogeneity in the delayed PET acquisition compared with the standard PET. A greater heterogeneity was found in bigger tumors.
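For readers unfamiliar with co-occurrence-based texture measures, the sketch below computes two of the quoted features (homogeneity and dissimilarity) on a quantized 2D slice using scikit-image; it is a simplified stand-in for the study's full 3D, 18-feature pipeline, and the function name and parameter choices are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(tumor_slice, levels=32):
    """Homogeneity and dissimilarity from a gray-level co-occurrence matrix.

    tumor_slice: 2D array of SUV values inside the segmented lesion. A full 3D
    implementation would accumulate co-occurrences over 13 spatial directions;
    this sketch uses one 2D slice and two directions for brevity."""
    # Quantize SUVs to a fixed number of gray levels
    edges = np.linspace(tumor_slice.min(), tumor_slice.max(), levels)
    q = (np.digitize(tumor_slice, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {"homogeneity": graycoprops(glcm, "homogeneity").mean(),
            "dissimilarity": graycoprops(glcm, "dissimilarity").mean()}
```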
Quantized vortices and superflow in arbitrary dimensions: structure, energetics and dynamics
NASA Astrophysics Data System (ADS)
Goldbart, Paul M.; Bora, Florin
2009-05-01
The structure and energetics of superflow around quantized vortices, and the motion inherited by these vortices from this superflow, are explored in the general setting of a superfluid in arbitrary dimensions. The vortices may be idealized as objects of codimension 2, such as one-dimensional loops and two-dimensional closed surfaces, respectively, in the cases of three- and four-dimensional superfluidity. By using the analogy between the vortical superflow and Ampère-Maxwell magnetostatics, the equilibrium superflow containing any specified collection of vortices is constructed. The energy of the superflow is found to take on a simple form for vortices that are smooth and asymptotically large, compared with the vortex core size. The motion of vortices is analyzed in general, as well as for the special cases of hyper-spherical and weakly distorted hyper-planar vortices. In all dimensions, vortex motion reflects vortex geometry. In dimension 4 and higher, this includes not only extrinsic but also intrinsic aspects of the vortex shape, which enter via the first and second fundamental forms of classical geometry. For hyper-spherical vortices, which generalize the vortex rings of three-dimensional superfluidity, the energy-momentum relation is determined. Simple scaling arguments recover the essential features of these results, up to numerical and logarithmic factors.
Fractal geometry in an expanding, one-dimensional, Newtonian universe.
Miller, Bruce N; Rouet, Jean-Louis; Le Guirriec, Emmanuel
2007-09-01
Observations of galaxies over large distances reveal the possibility of a fractal distribution of their positions. The source of fractal behavior is the lack of a length scale in the two body gravitational interaction. However, even with new, larger, sample sizes from recent surveys, it is difficult to extract information concerning fractal properties with confidence. Similarly, three-dimensional N-body simulations with a billion particles only provide a thousand particles per dimension, far too small for accurate conclusions. With one-dimensional models these limitations can be overcome by carrying out simulations with on the order of a quarter of a million particles without compromising the computation of the gravitational force. Here the multifractal properties of two of these models that incorporate different features of the dynamical equations governing the evolution of a matter dominated universe are compared. For each model at least two scaling regions are identified. By employing criteria from dynamical systems theory it is shown that only one of them can be geometrically significant. The results share important similarities with galaxy observations, such as hierarchical clustering and apparent bifractal geometry. They also provide insights concerning possible constraints on length and time scales for fractal structure. They clearly demonstrate that fractal geometry evolves in the mu (position, velocity) space. The observed patterns are simply a shadow (projection) of higher-dimensional structure.
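A hedged sketch of the simplest fractal measure (the box-counting, or capacity, dimension) applied to one-dimensional particle positions is given below; the study itself examines the full multifractal spectrum, which this minimal example does not reproduce.

```python
import numpy as np

def box_counting_dimension(positions, epsilons):
    """Box-counting estimate of the fractal dimension of a one-dimensional
    point distribution, e.g. particle positions from a 1D N-body simulation.
    Returns the slope of log N(eps) versus log(1/eps)."""
    counts = []
    for eps in epsilons:
        # Number of occupied boxes of size eps
        counts.append(np.unique(np.floor(positions / eps)).size)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

# Sanity check: a uniform distribution should give a dimension close to 1
x = np.random.rand(100000)
print(box_counting_dimension(x, epsilons=np.logspace(-3, -1, 10)))
```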
Two-dimensional wavelet transform feature extraction for porous silicon chemical sensors.
Murguía, José S; Vergara, Alexander; Vargas-Olmos, Cecilia; Wong, Travis J; Fonollosa, Jordi; Huerta, Ramón
2013-06-27
Designing reliable, fast-responding, highly sensitive, and low-power-consuming chemo-sensory systems has long been a major goal in chemo-sensing. This goal, however, presents a difficult challenge because a set of chemo-sensory detectors exhibiting all of these ideal properties is still largely unrealizable to date. This paper presents a unique perspective on capturing more in-depth insights into the physicochemical interactions of two distinct, selectively chemically modified porous silicon (pSi) film-based optical gas sensors by implementing an innovative signal-processing methodology, namely the two-dimensional discrete wavelet transform. Specifically, the method consists of using the two-dimensional discrete wavelet transform as a feature extraction method to capture the non-stationary behavior of the bi-dimensional pSi rugate sensor response. Utilizing a comprehensive set of measurements collected from each of the aforementioned optically based chemical sensors, we evaluate the significance of our approach on a complex, six-dimensional chemical analyte discrimination/quantification task. Due to the bi-dimensional aspects naturally governing the optical sensor response to chemical analytes, our findings provide evidence that the proposed feature extraction strategy may be a valuable tool to deepen our understanding of the performance of optically based chemical sensors as well as an important step toward attaining their implementation in more realistic chemo-sensing applications. Copyright © 2013 Elsevier B.V. All rights reserved.
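As a hedged sketch of the feature extraction step (not the authors' exact pipeline), the snippet below decomposes a 2D sensor response with PyWavelets and uses subband energies as features; the wavelet choice and function name are illustrative assumptions.

```python
import numpy as np
import pywt

def dwt2_energy_features(response_image, wavelet="db2", level=3):
    """Energy of each 2D wavelet subband as a compact feature vector.

    response_image: 2D array of the pSi sensor response (e.g. wavelength x time).
    Returns one energy value per subband (approximation plus details per level)."""
    coeffs = pywt.wavedec2(response_image, wavelet=wavelet, level=level)
    features = [np.sum(coeffs[0] ** 2)]              # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                  # horizontal/vertical/diagonal details
        features.extend([np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)])
    return np.asarray(features)
```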
Protein sectors: evolutionary units of three-dimensional structure
Halabi, Najeeb; Rivoire, Olivier; Leibler, Stanislas; Ranganathan, Rama
2011-01-01
Proteins display a hierarchy of structural features at primary, secondary, tertiary, and higher-order levels, an organization that guides our current understanding of their biological properties and evolutionary origins. Here, we reveal a structural organization distinct from this traditional hierarchy by statistical analysis of correlated evolution between amino acids. Applied to the S1A serine proteases, the analysis indicates a decomposition of the protein into three quasi-independent groups of correlated amino acids that we term “protein sectors”. Each sector is physically connected in the tertiary structure, has a distinct functional role, and constitutes an independent mode of sequence divergence in the protein family. Functionally relevant sectors are evident in other protein families as well, suggesting that they may be general features of proteins. We propose that sectors represent a structural organization of proteins that reflects their evolutionary histories. PMID:19703402
Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection
2008-05-01
with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M × N, is given by r = Σ_m Σ_n (A_mn ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale
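The surviving fragment quotes the standard two-dimensional correlation coefficient used to match a template against the sensed image; assuming the truncated expression is the usual mean-subtracted, normalized form, a minimal numpy version is:

```python
import numpy as np

def corr2(A, B):
    """Two-dimensional correlation coefficient between equally sized matrices."""
    A = A - A.mean()
    B = B - B.mean()
    return np.sum(A * B) / np.sqrt(np.sum(A * A) * np.sum(B * B))
```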
A Generic multi-dimensional feature extraction method using multiobjective genetic programming.
Zhang, Yang; Rockett, Peter I
2009-01-01
In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.
NASA Astrophysics Data System (ADS)
Wang, Dong
2016-03-01
Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent any unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal-processing-based methods mainly require expertise to explain gear fault signatures, which is usually not easy for ordinary users to achieve. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for the identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction of the redundant statistical features is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments is investigated to demonstrate that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees and the naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and the parameter selection of K-nearest neighbors are thoroughly investigated.
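A hedged sketch of the overall flow (wavelet packet statistical features, dimensionality reduction, then K-nearest neighbors) is given below; PyWavelets does not ship db44, so a lower-order Daubechies wavelet stands in, and the feature list, PCA step and parameters are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def wavelet_packet_statistics(signal, wavelet="db20", level=4):
    """Statistical features from all terminal wavelet-packet nodes."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = node.data
        feats.extend([c.mean(), c.std(), np.abs(c).max(),
                      np.sum(c ** 2),                                        # energy
                      np.mean(((c - c.mean()) / (c.std() + 1e-12)) ** 4)])   # kurtosis
    return np.asarray(feats)

# X_raw: (n_samples, n_points) vibration signals; y: crack-level labels 0..4
# X = np.array([wavelet_packet_statistics(s) for s in X_raw])
# model = make_pipeline(StandardScaler(), PCA(n_components=20),
#                       KNeighborsClassifier(n_neighbors=5))
# model.fit(X, y)
```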
Guo, Xinyu; Dominick, Kelli C.; Minai, Ali A.; Li, Hailong; Erickson, Craig A.; Lu, Long J.
2017-01-01
The whole-brain functional connectivity (FC) pattern obtained from resting-state functional magnetic resonance imaging data is commonly applied to study neuropsychiatric conditions such as autism spectrum disorder (ASD) by using different machine learning models. Recent studies indicate that both hyper- and hypo-aberrant ASD-associated FCs were widely distributed throughout the entire brain rather than only in some specific brain regions. Deep neural networks (DNN) with multiple hidden layers have shown the ability to systematically extract lower-to-higher level information from high dimensional data across a series of neural hidden layers, significantly improving classification accuracy for such data. In this study, a DNN with a novel feature selection method (DNN-FS) is developed for the high dimensional whole-brain resting-state FC pattern classification of ASD patients vs. typical development (TD) controls. The feature selection method helps the DNN generate low dimensional, high-quality representations of the whole-brain FC patterns by selecting features with high discriminating power from multiple trained sparse auto-encoders. For comparison, a DNN without the feature selection method (DNN-woFS) is developed, and both are tested with different architectures (i.e., with different numbers of hidden layers/nodes). Results show that the best classification accuracy of 86.36% is generated by the DNN-FS approach with 3 hidden layers and 150 hidden nodes (3/150). Remarkably, DNN-FS outperforms DNN-woFS for all architectures studied. The most significant accuracy improvement was 9.09% with the 3/150 architecture. The method also outperforms other feature selection methods, e.g., the two-sample t-test and the elastic net. In addition to improving the classification accuracy, a Fisher's score-based biomarker identification method based on the DNN is also developed and used to identify 32 FCs related to ASD. These FCs come from or cross different pre-defined brain networks including the default-mode, cingulo-opercular, frontal-parietal, and cerebellum networks. Thirteen of them are statistically significant between the ASD and TD groups (two-sample t-test, p < 0.05) while 19 of them are not. The relationship between the statistically significant FCs and the corresponding ASD behavioral symptoms is discussed based on the literature and clinicians' expert knowledge. Meanwhile, a potential reason for obtaining the 19 FCs that are not statistically significant is also provided. PMID:28871217
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1980-01-01
A generalized three-dimensional perspective software capability was developed within the framework of a low-cost, computer-oriented, geographically based information system using the Earth Resources Laboratory Applications Software (ELAS) operating subsystem. This perspective software capability, developed primarily to support data display requirements at the NASA/NSTL Earth Resources Laboratory, provides a means of displaying three-dimensional feature space object data in two-dimensional picture plane coordinates and makes it possible to overlay different types of information on perspective drawings to better understand the relationship of physical features. An example topographic data base is constructed and used as the basic input to the plotting module. Examples are shown which illustrate oblique viewing angles that convey spatial concepts and relationships represented by the topographic data planes.
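A minimal sketch of the underlying operation (projecting three-dimensional feature-space points onto two-dimensional picture-plane coordinates with a simple pinhole camera model) is shown below; it is not the ELAS implementation, and all names and viewpoint parameters are assumptions.

```python
import numpy as np

def perspective_project(points_xyz, eye, look_at, up=(0.0, 0.0, 1.0), focal=1.0):
    """Project 3D points onto a 2D picture plane (simple pinhole model).

    points_xyz: (n, 3) array, e.g. gridded topographic (x, y, elevation) data.
    Points must lie in front of the viewpoint (positive depth)."""
    eye = np.asarray(eye, float)
    look_at = np.asarray(look_at, float)
    up = np.asarray(up, float)

    forward = look_at - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)

    rel = np.asarray(points_xyz, float) - eye          # camera-centered coordinates
    x, y, z = rel @ right, rel @ true_up, rel @ forward
    return np.column_stack((focal * x / z, focal * y / z))

# Example: oblique view of a topographic grid
# uv = perspective_project(topo_points, eye=(0, -50, 30), look_at=(0, 0, 0))
```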
Calabrese, Rossella; Raia, Nicole; Huang, Wenwen; Ghezzi, Chiara E; Simon, Marc; Staii, Cristian; Weiss, Anthony S; Kaplan, David L
2017-09-01
The response of human bone marrow-derived mesenchymal stem cells (hMSCs) encapsulated in three-dimensional (3D) charged protein hydrogels was studied. Combining silk fibroin (S) with recombinant human tropoelastin (E) or silk ionomers (I) provided protein composite alloys with tunable physicochemical and biological features for regulating the bioactivity of encapsulated hMSCs. The effects of the biomaterial charges on hMSC viability, proliferation and chondrogenic or osteogenic differentiation were assessed. The silk-tropoelastin or silk-ionomers hydrogels supported hMSC viability, proliferation and differentiation. Gene expression of markers for chondrogenesis and osteogenesis, as well as biochemical and histological analysis, showed that hydrogels with different S/E and S/I ratios had different effects on cell fate. The negatively charged hydrogels upregulated hMSC chondrogenesis or osteogenesis, with or without specific differentiation media, and hydrogels with higher tropoelastin content inhibited the differentiation potential even in the presence of the differentiation media. The results provide insight on charge-tunable features of protein-based biomaterials to control hMSC differentiation in 3D hydrogels, as well as providing a new set of hydrogels for the compatible encapsulation and utility for cell functions. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao
2014-01-01
Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470
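As a hedged illustration of the fine-localization stage only (the SDC coarse clustering with the Genetic Algorithm and Support Vector Machine is omitted), the sketch below reduces RSS fingerprints with kernel PCA and regresses coordinates with weighted kNN; kernel parameters and neighbor counts are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline

# rss_train: (n_fingerprints, n_access_points) RSS fingerprints
# xy_train:  (n_fingerprints, 2) reference coordinates of each fingerprint
def build_fine_localizer(rss_train, xy_train, n_components=10):
    """Kernel-PCA feature extraction followed by weighted kNN regression."""
    model = make_pipeline(
        KernelPCA(n_components=n_components, kernel="rbf", gamma=0.05),
        KNeighborsRegressor(n_neighbors=4, weights="distance"))
    return model.fit(rss_train, xy_train)

# estimated_xy = build_fine_localizer(rss_train, xy_train).predict(rss_online)
```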
Wing-section optimization for supersonic viscous flow
NASA Technical Reports Server (NTRS)
Item, Cem C.; Baysal, Oktay (Editor)
1995-01-01
To improve the shape of a supersonic wing, an automated method that also includes higher fidelity to the flow physics is desirable. With this impetus, an aerodynamic optimization methodology incorporating the thin-layer Navier-Stokes equations and sensitivity analysis had previously been developed. Prior to embarking upon the wing design task, the present investigation concentrated on testing the feasibility of the methodology, and on the identification of adequate problem formulations, by defining two-dimensional, cost-effective test cases. Starting with two distinctly different initial airfoils, two independent shape optimizations resulted in shapes with similar features: slightly cambered, parabolic profiles with sharp leading and trailing edges. Secondly, the normal section to the subsonic portion of the leading edge, which had a high normal angle of attack, was considered. The optimization resulted in a shape with twist and camber which eliminated the adverse pressure gradient, hence exploiting the leading-edge thrust. The wing section shapes obtained in all the test cases had the features predicted by previous studies. Therefore, it was concluded that the flowfield analyses and sensitivity coefficients were computed and fed to the present gradient-based optimizer correctly. Also, as a result of the present two-dimensional study, suggestions were made for the problem formulations, which should contribute to an effective wing shape optimization.
Le Pape, Fiona; Cosnuau-Kemmat, Lucie; Richard, Gaëlle; Dubrana, Frédéric; Férec, Claude; Zal, Franck; Leize, Elisabeth; Delépine, Pascal
2017-04-01
Human mesenchymal stem cells (MSCs) are promising candidates for therapeutic applications such as tissue engineering. However, one of the main challenges is to improve oxygen supply to hypoxic areas to reduce oxygen gradient formation while preserving MSC differentiation potential and viability. For this purpose, a marine hemoglobin, HEMOXCell, was evaluated as an oxygen carrier for culturing human bone marrow MSCs in vitro for future three-dimensional culture applications. Impact of HEMOXCell on cell growth and viability was assessed in human platelet lysate (hPL)-supplemented media. Maintenance of MSC features, such as multipotency and expression of MSC specific markers, was further investigated by biochemical assays and flow cytometry analysis. Our experimental results highlight its oxygenator potential and indicate that an optimal concentration of 0.025 g/L HEMOXCell induces a 25%-increase of the cell growth rate, preserves MSC phenotype, and maintains MSC differentiation properties; a two-fold higher concentration induces cell detachment without altering cell viability. Our data suggest the potential interest of HEMOXCell as a natural oxygen carrier for tissue engineering applications to oxygenate hypoxic areas and to maintain cell viability, functions and "stemness." These features will be further tested within three-dimensional scaffolds. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Dimensionality reduction in epidemic spreading models
NASA Astrophysics Data System (ADS)
Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.
2015-09-01
Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric features mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
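A minimal sketch of the embedding step with scikit-learn's Isomap is shown below; the epidemic simulation itself and the choice of observables are outside the abstract and are assumed here.

```python
import numpy as np
from sklearn.manifold import Isomap

# epidemic_data: (n_snapshots, n_observables) time-ordered observations of an
# epidemic simulation (e.g. per-node infection states flattened per time step)
def embed_epidemic(epidemic_data, n_neighbors=10, n_components=3):
    """Low-dimensional ISOMAP embedding of high-dimensional epidemic snapshots."""
    return Isomap(n_neighbors=n_neighbors,
                  n_components=n_components).fit_transform(epidemic_data)

# coords = embed_epidemic(epidemic_data)   # (n_snapshots, 3) embedded trajectory
```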
Inter- and Intra-Dimensional Dependencies in Implicit Phonotactic Learning
ERIC Educational Resources Information Center
Moreton, Elliott
2012-01-01
Is phonological learning subject to the same inductive biases as learning in other domains? Previous studies of non-linguistic learning found that intra-dimensional dependencies (between two instances of the same feature) were learned more easily than inter-dimensional ones. This study compares implicit learning of intra- and inter-dimensional…
Using Three-Dimensional Interactive Graphics To Teach Equipment Procedures.
ERIC Educational Resources Information Center
Hamel, Cheryl J.; Ryan-Jones, David L.
1997-01-01
Focuses on how three-dimensional graphical and interactive features of computer-based instruction can enhance learning and support human cognition during technical training of equipment procedures. Presents guidelines for using three-dimensional interactive graphics to teach equipment procedures based on studies of the effects of graphics, motion,…
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of the datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy established by Charles Darwin; that is, 'It is not the strongest of the species that survives, but the most adaptable'. This means that, in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeated searches of the worst positions in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a similar problem of function optimisation. Furthermore, the wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some of the conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.
Walters, Glenn D; Diamond, Pamela M; Magaletta, Philip R
2010-03-01
Three indicators derived from the Personality Assessment Inventory (PAI) Alcohol Problems scale (ALC), namely tolerance/high consumption, loss of control, and negative social and psychological consequences, were subjected to three taxometric analyses (mean above minus below a cut, MAMBAC; maximum covariance, MAXCOV; and latent mode factor analysis, L-Mode) in 1,374 federal prison inmates (905 males, 469 females). Whereas the total sample yielded ambiguous results, the male subsample produced dimensional results, and the female subsample produced taxonic results. Interpreting these findings in light of previous taxometric research on alcohol abuse and dependence, it is speculated that while alcohol use disorders may be taxonic in female offenders, they are probably both taxonic and dimensional in male offenders. Two models of alcohol use disorder in males are considered, one in which the diagnostic features are categorical and the severity of symptomatology is dimensional, and one in which some diagnostic features (e.g., withdrawal) are taxonic and other features (e.g., social problems) are dimensional.
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
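As a hedged sketch of the compression idea only, the snippet below applies a data-independent Gaussian random projection before a simple least-squares (ridge) fit; it stands in for one regression step of KLSPI and is not the CKRL implementation, and the function name and parameters are assumptions.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.linear_model import Ridge

# Phi: (n_samples, D) high-dimensional state-action features, D large
# targets: regression targets of one LSPI-style fixed-point iteration step
def project_and_fit(Phi, targets, d=64, seed=0):
    """Compress features with a data-independent random projection, then solve
    the least-squares regression in the low-dimensional subspace."""
    proj = GaussianRandomProjection(n_components=d, random_state=seed)
    Phi_low = proj.fit_transform(Phi)          # data-independent projection to d dims
    return proj, Ridge(alpha=1e-3).fit(Phi_low, targets)
```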
NASA Astrophysics Data System (ADS)
Madokoro, H.; Yamanashi, A.; Sato, K.
2013-08-01
This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features for voting visual words created from both feature descriptors to a two-dimensional histogram. Moreover, our method generates labels as candidates of categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created using adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is popularly used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method for fine classification using our original mobile robot. We obtained mean classification accuracy of 83.2% for six zones.
Gravity, antigravity and gravitational shielding in (2+1) dimensions
NASA Astrophysics Data System (ADS)
Accioly, Antonio; Helayël-Neto, José; Lobo, Matheus
2009-07-01
Higher-derivative terms are introduced into three-dimensional gravity, thereby allowing for a dynamical theory. The resulting system, viewed as a classical field model, is endowed with a novel and peculiar feature: its nonrelativistic potential describes three gravitational regimes. Depending on the choice of the parameters in the action functional, one obtains gravity, antigravity or gravitational shielding. Interestingly enough, this potential is very similar, mutatis mutandis, to the potential for the interaction of two superconducting vortices. Furthermore, the gravitational deflection angle of a light ray, unlike that of Einstein gravity in (2+1) dimensions, depends on the impact parameter.
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.
Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi
2017-09-22
DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction methods are based on machine learning, and their accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, information theory features, and sequential and structural features (SSF), is used to represent the protein sequences and improve the feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from the combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors that combine SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over the single features.
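Of the three representations, K-Skip-N-Grams is the easiest to illustrate; the sketch below uses one common definition (n-grams allowing up to k skipped residues between positions), which may differ in detail from the paper's variant, and the function name is a placeholder.

```python
from collections import Counter
from itertools import combinations

def k_skip_n_grams(sequence, k=2, n=2):
    """Count n-grams that allow up to k skipped residues between positions.

    sequence: a protein sequence string, e.g. "MKTAYIAKQR".
    Returns a Counter mapping each skip-gram tuple to its frequency."""
    counts = Counter()
    L = len(sequence)
    for start in range(L):
        # choose the remaining n-1 positions within a window of size n+k-1
        window = range(start + 1, min(L, start + n + k))
        for rest in combinations(window, n - 1):
            counts[(sequence[start],) + tuple(sequence[i] for i in rest)] += 1
    return counts

# A fixed-length feature vector follows by ordering the counts over the
# amino-acid alphabet and normalizing by the total number of grams.
```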
An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-01-01
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
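A heavily hedged sketch of a rotation-invariant one-dimensional pattern is given below: it sorts the angular distances from a reference star to its neighbors, which is one plausible reading of the one_DVP construction but not necessarily the published one; all names and parameters are assumptions.

```python
import numpy as np

def one_d_vector_pattern(unit_vectors, ref_index, n_neighbors=10):
    """Rotation-invariant 1D pattern for one observed star (illustrative only).

    unit_vectors: (n_stars, 3) unit direction vectors of the observed stars.
    Returns the sorted angular distances from the reference star to its
    nearest neighbors, which do not change when the stellar image rotates."""
    ref = unit_vectors[ref_index]
    cosines = np.clip(unit_vectors @ ref, -1.0, 1.0)
    angles = np.delete(np.arccos(cosines), ref_index)   # drop the zero self-distance
    return np.sort(angles)[:n_neighbors]

def match_star(pattern, catalog_patterns):
    """Nearest-neighbor comparison of a pattern against precomputed catalog patterns."""
    return int(np.argmin(np.linalg.norm(catalog_patterns - pattern, axis=1)))
```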
Fiáth, Richárd; Beregszászi, Patrícia; Horváth, Domonkos; Wittner, Lucia; Aarts, Arno A. A.; Ruther, Patrick; Neves, Hercules P.; Bokor, Hajnalka; Acsády, László
2016-01-01
Recording simultaneous activity of a large number of neurons in distributed neuronal networks is crucial to understand higher order brain functions. We demonstrate the in vivo performance of a recently developed electrophysiological recording system comprising a two-dimensional, multi-shank, high-density silicon probe with integrated complementary metal-oxide semiconductor electronics. The system implements the concept of electronic depth control (EDC), which enables the electronic selection of a limited number of recording sites on each of the probe shafts. This innovative feature of the system permits simultaneous recording of local field potentials (LFP) and single- and multiple-unit activity (SUA and MUA, respectively) from multiple brain sites with high quality and without the actual physical movement of the probe. To evaluate the in vivo recording capabilities of the EDC probe, we recorded LFP, MUA, and SUA in acute experiments from cortical and thalamic brain areas of anesthetized rats and mice. The advantages of large-scale recording with the EDC probe are illustrated by investigating the spatiotemporal dynamics of pharmacologically induced thalamocortical slow-wave activity in rats and by the two-dimensional tonotopic mapping of the auditory thalamus. In mice, spatial distribution of thalamic responses to optogenetic stimulation of the neocortex was examined. Utilizing the benefits of the EDC system may result in a higher yield of useful data from a single experiment compared with traditional passive multielectrode arrays, and thus in the reduction of animals needed for a research study. PMID:27535370
NASA Technical Reports Server (NTRS)
Newman, M. B.; Filstrup, A. W.
1973-01-01
Linear (8-node), parabolic (20-node), cubic (32-node) and mixed (some edges linear, some parabolic and some cubic) isoparametric elements have been inserted into NASTRAN, Level 15.1. First, the dummy element feature was used to check out the stiffness matrix generation routines for the linear element in NASTRAN. Then, the necessary modules of NASTRAN were modified to include the new family of elements. The matrix assembly was changed so that the stiffness matrix of each isoparametric element is only generated once, as the time to generate these higher order elements tends to be much longer than for the other elements in NASTRAN. This paper presents some of the experiences and difficulties of inserting a new element or family of elements into NASTRAN.
The life-cycle of upper-tropospheric jet streams identified with a novel data segmentation algorithm
NASA Astrophysics Data System (ADS)
Limbach, S.; Schömer, E.; Wernli, H.
2010-09-01
Jet streams are prominent features of the upper-tropospheric atmospheric flow. Through the thermal wind relationship these regions with intense horizontal wind speed (typically larger than 30 m/s) are associated with pronounced baroclinicity, i.e., with regions where extratropical cyclones develop due to baroclinic instability processes. Individual jet streams are non-stationary elongated features that can extend over more than 2000 km in the along-flow and 200-500 km in the across-flow direction, respectively. Their lifetime can vary between a few days and several weeks. In recent years, feature-based algorithms have been developed that allow compiling synoptic climatologies and typologies of upper-tropospheric jet streams based upon objective selection criteria and climatological reanalysis datasets. In this study a novel algorithm to efficiently identify jet streams using an extended region-growing segmentation approach is introduced. This algorithm iterates over a 4-dimensional field of horizontal wind speed from ECMWF analyses and decides at each grid point whether all prerequisites for a jet stream are met. In a single pass the algorithm keeps track of all adjacencies of these grid points and creates the 4-dimensional connected segments associated with each jet stream. In addition to the detection of these sets of connected grid points, the algorithm analyzes the development over time of the distinct 3-dimensional features each segment consists of. Important events in the development of these features, for example mergings and splittings, are detected and analyzed on a per-grid-point and per-feature basis. The output of the algorithm consists of the actual sets of grid-points augmented with information about the particular events, and of the so-called event graphs, which are an abstract representation of the distinct 3-dimensional features and events of each segment. This technique provides comprehensive information about the frequency of upper-tropospheric jet streams, their preferred regions of genesis, merging, splitting, and lysis, and statistical information about their size, amplitude and lifetime. The presentation will introduce the technique, provide example visualizations of the time evolution of the identified 3-dimensional jet stream features, and present results from a first multi-month "climatology" of upper-tropospheric jets. In the future, the technique can be applied to longer datasets, for instance reanalyses and output from global climate model simulations - and provide detailed information about key characteristics of jet stream life cycles.
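A simplified stand-in for the segmentation pass (thresholding the 4-dimensional wind-speed field and labeling connected grid points) can be written with scipy.ndimage as below; the region-growing refinements and the event-graph analysis of mergings and splittings described above are not reproduced, and the threshold and function name are assumptions.

```python
import numpy as np
from scipy import ndimage

def jet_stream_segments(wind_speed, threshold=30.0):
    """Label 4-D connected regions of strong upper-tropospheric wind.

    wind_speed: array of shape (time, level, lat, lon) in m/s.
    Returns (labels, n_segments): grid points above the threshold grouped into
    4-D connected segments, so a feature that persists or merges over time
    stays within one segment."""
    mask = wind_speed >= threshold
    # 4-D connectivity structure: adjacency along time, level, lat and lon
    structure = ndimage.generate_binary_structure(rank=4, connectivity=1)
    labels, n_segments = ndimage.label(mask, structure=structure)
    return labels, n_segments

# Per-time-step slices of `labels` give the distinct 3-D jet features; comparing
# labels between consecutive steps reveals candidate merging/splitting events.
```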
Yang, Zhutian; Qiu, Wei; Sun, Hongjian; Nallanathan, Arumugam
2016-01-01
Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for radar emitter signal recognition. To address this challenge, multi-component radar emitter recognition under a complicated noise environment is studied in this paper. A novel radar emitter recognition approach based on the three-dimensional distribution feature and transfer learning is proposed. The cubic feature for the time-frequency-energy distribution is proposed to describe the intra-pulse modulation information of radar emitters. Furthermore, the feature is reconstructed by using transfer learning in order to obtain the robust feature against signal noise rate (SNR) variation. Last, but not the least, the relevance vector machine is used to classify radar emitter signals. Simulations demonstrate that the approach proposed in this paper has better performances in accuracy and robustness than existing approaches. PMID:26927111
Drug-target interaction prediction using ensemble learning and dimensionality reduction.
Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong
2017-10-01
Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
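A hedged sketch of the EnsemDT variant described above (feature subspacing, PCA as one possible dimensionality reduction method, decision trees, score averaging) is shown below; the hyperparameters and the binary-label assumption are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def ensemble_predict(X_train, y_train, X_test, n_learners=20,
                     subspace_frac=0.5, n_components=50, seed=0):
    """Feature subspacing -> PCA -> decision trees, with the final interaction
    score obtained by averaging the base-learner probabilities (y is binary)."""
    rng = np.random.default_rng(seed)
    n_features = X_train.shape[1]
    scores = np.zeros(len(X_test))
    for _ in range(n_learners):
        # Random feature subspace injects diversity into the ensemble
        idx = rng.choice(n_features, int(subspace_frac * n_features), replace=False)
        pca = PCA(n_components=min(n_components, len(idx))).fit(X_train[:, idx])
        tree = DecisionTreeClassifier(max_depth=8, random_state=0)
        tree.fit(pca.transform(X_train[:, idx]), y_train)
        scores += tree.predict_proba(pca.transform(X_test[:, idx]))[:, 1]
    return scores / n_learners
```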
Case study of 3D fingerprints applications
Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui
2017-01-01
Human fingers are 3D objects. More information is available from three-dimensional (3D) fingerprints than from two-dimensional (2D) fingerprints. Thus, this paper first collected 3D finger point cloud data by the structured-light illumination method. Additional features from 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments are conducted to demonstrate the helpfulness of 3D information for fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%. It is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the prospect of 3D fingerprint recognition. PMID:28399141
Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam
2012-01-15
Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as in datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions in regards to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution as well as its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior predictive classification performance over that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang
2017-12-01
Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather and noise, result in considerable diversity in the visual characteristics of vehicle color, so vehicle color recognition in complex environments has been a challenging task. The state-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels and background, contain no color information, which has a negative impact on the recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on the recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving the recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. The experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over the state-of-the-art methods.
Chen, Yifei; Sun, Yuxing; Han, Bing-Qing
2015-01-01
Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measure of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, first we design a similarity measure between the context information to take word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of context information to the importance measure of the features to substitute the document and term frequency. Hence we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons of EUV-specific imaging artifacts and to devise illumination and feature dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. At last, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Thermodynamics of higher dimensional black holes with higher order thermal fluctuations
NASA Astrophysics Data System (ADS)
Pourhassan, B.; Kokabi, K.; Rangyan, S.
2017-12-01
In this paper, we consider higher order corrections of the entropy, which come from thermal fluctuations, and find their effect on the thermodynamics of higher-dimensional charged black holes. The leading-order thermal fluctuation is a logarithmic term in the entropy, while the higher order correction is proportional to the inverse of the original entropy. We calculate some thermodynamic quantities and obtain the effect of the logarithmic and higher order corrections of the entropy on them. The validity of the first law of thermodynamics is investigated, and the Van der Waals equation of state of the dual picture is studied. We find that the five-dimensional black hole behaves as a Van der Waals fluid, but the higher-dimensional cases do not show such behavior. We find that thermal fluctuations are important for the stability of the black hole and hence affect the unstable/stable black hole phase transition.
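The abstract does not give the explicit corrected entropy, but corrections of the kind it describes (a leading logarithmic term plus a term proportional to the inverse of the original entropy) are commonly written in the generic form below, where alpha and beta are bookkeeping coefficients and T_H is the Hawking temperature; the exact prefactors in the paper may differ.

```latex
% Generic fluctuation-corrected entropy (coefficients illustrative, not from the paper):
S \;=\; S_{0} \;-\; \frac{\alpha}{2}\,\ln\!\left(S_{0}\,T_{H}^{2}\right) \;+\; \frac{\beta}{S_{0}} \;+\; \cdots
```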
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Clustering of Multi-Temporal Fully Polarimetric L-Band SAR Data for Agricultural Land Cover Mapping
NASA Astrophysics Data System (ADS)
Tamiminia, H.; Homayouni, S.; Safari, A.
2015-12-01
Recently, the unique capabilities of Polarimetric Synthetic Aperture Radar (PolSAR) sensors have made them an important and efficient tool for natural resources and environmental applications, such as land cover and crop classification. The aim of this paper is to classify multi-temporal full polarimetric SAR data using a kernel-based fuzzy C-means clustering method over an agricultural region. This method starts by transforming the input data into a higher dimensional space using kernel functions and then clustering them in the feature space. The feature space, due to its inherent properties, is able to take into account the nonlinear and complex nature of polarimetric data. Several SAR polarimetric features were extracted using target decomposition algorithms, and features from the Cloude-Pottier, Freeman-Durden and Yamaguchi algorithms were used as inputs for the clustering. This method was applied to multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Canada, during June and July 2012. The results demonstrate the efficiency of this approach with respect to classical methods. In addition, using multi-temporal data in the clustering process helped to investigate the phenological cycle of the plants and significantly improved the performance of agricultural land cover mapping.
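A minimal NumPy sketch of the kernel fuzzy C-means step is given below: prototypes are kept in the input (stacked polarimetric feature) space and distances are computed through a Gaussian kernel. The fuzzifier m, the kernel width gamma, and the way the multi-temporal features are stacked into X are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def rbf(X, V, gamma):
    """Gaussian kernel values between samples X (n x d) and prototypes V (c x d)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fcm(X, c, m=2.0, gamma=1.0, n_iter=100, seed=0):
    """Kernel fuzzy C-means with prototypes kept in the input space."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]        # initial prototypes
    for _ in range(n_iter):
        K = rbf(X, V, gamma)                            # n x c kernel values
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)         # kernel-induced squared distance
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)        # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]          # prototype update
    return U.argmax(axis=1), U, V                       # hard labels, memberships, prototypes

# X would be a pixels x features matrix of decomposition features stacked over dates.
```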
Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity
Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo
2016-01-01
In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich
2016-05-17
Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
A Fast, Open EEG Classification Framework Based on Feature Compression and Channel Ranking
Han, Jiuqi; Zhao, Yuwei; Sun, Hongji; Chen, Jiayun; Ke, Ang; Xu, Gesen; Zhang, Hualiang; Zhou, Jin; Wang, Changyong
2018-01-01
Superior feature extraction, channel selection and classification methods are essential for designing electroencephalography (EEG) classification frameworks. However, the performance of most frameworks is limited by improper channel selection methods and overly specific designs, leading to high computational complexity, non-convergent procedures and limited extensibility. In this paper, to remedy these drawbacks, we propose a fast, open EEG classification framework centered on EEG feature compression, low-dimensional representation, and convergent iterative channel ranking. First, to reduce the complexity, we use data clustering to compress the EEG features channel-wise, packing the high-dimensional EEG signal and endowing it with numerical signatures. Second, to provide easy access to alternative superior methods, we structurally represent each EEG trial in a feature vector with its corresponding numerical signature. Thus, the recorded signals of many trials shrink to a low-dimensional structural matrix compatible with most pattern recognition methods. Third, a series of effective iterative feature selection approaches with theoretical convergence is introduced to rank the EEG channels and remove redundant ones, further accelerating the EEG classification process and ensuring its stability. Finally, a classical linear discriminant analysis (LDA) model is employed to classify a single EEG trial with the selected channels. Experimental results on two real-world brain-computer interface (BCI) competition datasets demonstrate the promising performance of the proposed framework over state-of-the-art methods. PMID:29713262
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed that compare favorably with classical LDA in terms of both classification accuracy and computational cost/time. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
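For reference, the classical-LDA baseline of this projection-then-classification scheme can be sketched with scikit-learn as below. The extended LDA variants compared in the paper are not available in scikit-learn, and the feature matrix, class count, and projection dimension here are placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: trials x EMG features (e.g., multi-channel time-domain features)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 28))
y = rng.integers(0, 6, size=300)          # assumed: 6 movement classes

# Project onto at most (n_classes - 1) discriminant axes, then classify linearly
pipeline = make_pipeline(LinearDiscriminantAnalysis(n_components=5),
                         LinearDiscriminantAnalysis())
print(cross_val_score(pipeline, X, y, cv=5).mean())
```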
Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah
2017-02-01
Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. The Fukunaga-Koontz Transform (FKT), a supervised band reduction technique, can be used to meet this requirement. FKT achieves feature selection by transforming the data into a new space where the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since the basis functions that best represent the target class carry the least information about the background class. By selecting the few eigenvectors most relevant to the target class, the dimension of the hyperspectral data can be reduced, which presents significant advantages for near real time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral data sets, and the results are reported.
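A compact NumPy sketch of the linear Fukunaga-Koontz construction underlying the band-reduction step is shown below: the summed class covariances are whitened, and the whitened target covariance is eigendecomposed so that target-dominant and background-dominant directions separate. The kernelization (KFKT) and the hyperspectral data handling are omitted, and the function name and n_keep parameter are placeholders.

```python
import numpy as np

def fkt_basis(X_target, X_background, n_keep=10):
    """Linear Fukunaga-Koontz transform: return the n_keep directions most
    dominated by the target class (shared eigenvectors, complementary eigenvalues)."""
    S_t = np.cov(X_target, rowvar=False)
    S_b = np.cov(X_background, rowvar=False)
    # Whiten the sum of the class covariances
    evals, evecs = np.linalg.eigh(S_t + S_b)
    P = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12)))
    # Eigendecompose the whitened target covariance; eigenvalues near 1 are
    # target-dominant, eigenvalues near 0 are background-dominant
    lam, V = np.linalg.eigh(P.T @ S_t @ P)
    order = np.argsort(lam)[::-1]
    return P @ V[:, order[:n_keep]]        # columns: target-oriented projection basis

# reduced = X @ fkt_basis(X_target, X_background, n_keep=10)
```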
Flow of quasi-two dimensional water in graphene channels
NASA Astrophysics Data System (ADS)
Fang, Chao; Wu, Xihui; Yang, Fengchang; Qiao, Rui
2018-02-01
When liquids confined in slit channels approach a monolayer, they become two-dimensional (2D) fluids. Using molecular dynamics simulations, we study the flow of quasi-2D water confined in slit channels featuring pristine graphene walls and graphene walls with hydroxyl groups. We focus on to what extent the flow of quasi-2D water can be described using classical hydrodynamics and what are the effective transport properties of the water and the channel. First, the in-plane shearing of quasi-2D water confined between pristine graphene can be described using the classical hydrodynamic equation, and the viscosity of the water is ~50% higher than that of the bulk water in the channel studied here. Second, the flow of quasi-2D water around a single hydroxyl group is perturbed at a position of tens of cluster radius from its center, as expected for low Reynolds number flows. Even though water is not pinned at the edge of the hydroxyl group, the hydroxyl group screens the flow greatly, with a single, isolated hydroxyl group rendering drag similar to ~90 nm² pristine graphene walls. Finally, the flow of quasi-2D water through graphene channels featuring randomly distributed hydroxyl groups resembles the fluid flow through porous media. The effective friction factor of the channel increases linearly with the hydroxyl groups' area density up to 0.5 nm⁻² but increases nonlinearly at higher densities. The effective friction factor of the channel can be fitted to a modified Carman equation at least up to a hydroxyl area density of 2.0 nm⁻². These findings help understand the liquid transport in 2D material-based nanochannels for applications including desalination.
A method for automatic feature points extraction of human vertebrae three-dimensional model
NASA Astrophysics Data System (ADS)
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Juefei; Szafruga, Urszula B.; Kuzyk, Mark G.
We use numerical optimization to study the properties of (1) the class of one-dimensional potential energy functions and (2) systems of point nuclei in two dimensions that yield the largest intrinsic hyperpolarizabilities, which we find to be within 30% of the fundamental limit. In all cases, we use a one-electron model. It is found that a broad range of optimized potentials, each of very different character, yield the same intrinsic hyperpolarizability ceiling of 0.709. Furthermore, all optimized potential energy functions share common features such as (1) the value of the normalized transition dipole moment to the dominant state, which forces the hyperpolarizability to be dominated by only two excited states and (2) the energy ratio between the two dominant states. All optimized potentials are found to obey the three-level ansatz to within about 1%. Many of these potential energy functions may be implementable in multiple quantum well structures. The subset of potentials with undulations reaffirms that modulation of conjugation may be an approach for making better organic molecules, though there appear to be many others. Additionally, our results suggest that one-dimensional molecules may have larger diagonal intrinsic hyperpolarizability β_xxx^int than higher-dimensional systems.
Carvalho, Fernando R; Zampieri, Eduardo H; Caetano, Wilker; Silva, Rafael
2017-05-19
Organic-based nanomaterials can be self-assembled by strong and directional intermolecular forces such as π-π interactions. Experimental information about the stability, size, and geometry of the formed structures is very limited for species that easily aggregate, even at very low concentrations. Differential pulse voltammetry (DPV) can unveil the formation, growth, and also the stability window of ordered, one-dimensional, lamellar self-aggregates formed by supramolecular π stacking of phenothiazines at micromolar (10⁻⁶ mol L⁻¹) concentrations. The self-diffusion features of the species at different concentrations are determined by DPV and used to probe the π stacking process through the concept of the frictional resistance. It is observed that toluidine blue and methylene blue start to self-aggregate around 9 μmol L⁻¹, and that the self-aggregation process occurs by one-dimensional growth as the concentration of the phenothiazines is increased up to around 170 μmol L⁻¹ for toluidine blue and 200 μmol L⁻¹ for methylene blue. At higher concentrations, the aggregation process leads to structures with lower anisometry. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.
Effective equations for matter-wave gap solitons in higher-order transversal states.
Mateo, A Muñoz; Delgado, V
2013-10-01
We demonstrate that an important class of nonlinear stationary solutions of the three-dimensional (3D) Gross-Pitaevskii equation (GPE) exhibiting nontrivial transversal configurations can be found and characterized in terms of an effective one-dimensional (1D) model. Using a variational approach we derive effective equations of lower dimensionality for BECs in (m,n(r)) transversal states (states featuring a central vortex of charge m as well as n(r) concentric zero-density rings at every z plane) which provides us with a good approximate solution of the original 3D problem. Since the specifics of the transversal dynamics can be absorbed in the renormalization of a couple of parameters, the functional form of the equations obtained is universal. The model proposed finds its principal application in the study of the existence and classification of 3D gap solitons supported by 1D optical lattices, where in addition to providing a good estimate for the 3D wave functions it is able to make very good predictions for the μ(N) curves characterizing the different fundamental families. We have corroborated the validity of our model by comparing its predictions with those from the exact numerical solution of the full 3D GPE.
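For reference, the stationary 3D GPE whose solutions are reduced to the effective 1D description can be written in the standard form below; the trap potential V and interaction constant g follow the usual conventions rather than the paper's specific notation.

```latex
\mu\,\psi(\mathbf{r}) \;=\; \left[-\frac{\hbar^{2}}{2m}\nabla^{2}
  + V(\mathbf{r}) + gN\,|\psi(\mathbf{r})|^{2}\right]\psi(\mathbf{r}),
\qquad g = \frac{4\pi\hbar^{2}a}{m},
```

with μ the chemical potential, a the s-wave scattering length, and N the atom number for a unit-normalized ψ.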
DNA Brick Crystals with Prescribed Depth
Ke, Yonggang; Ong, Luvena L.; Sun, Wei; Song, Jie; Dong, Mingdong; Shih, William M.; Yin, Peng
2014-01-01
We describe a general framework for constructing two-dimensional crystals with prescribed depth and sophisticated three-dimensional features. These crystals may serve as scaffolds for the precise spatial arrangements of functional materials for diverse applications. The crystals are self-assembled from single-stranded DNA components called DNA bricks. We demonstrate the experimental construction of DNA brick crystals that can grow to micron-size in the lateral dimensions with precisely controlled depth up to 80 nanometers. They can be designed to display user-specified sophisticated three-dimensional nanoscale features, such as continuous or discontinuous cavities and channels, and to pack DNA helices at parallel and perpendicular angles relative to the plane of the crystals. PMID:25343605
Majumdar, Kingshuk
2011-03-23
The effects of interlayer coupling and spatial anisotropy on the spin-wave excitation spectra of a three-dimensional spatially anisotropic, frustrated spin-½ Heisenberg antiferromagnet (HAFM) are investigated for the two ordered phases using second-order spin-wave expansion. We show that the second-order corrections to the spin-wave energies are significant and find that the energy spectra of the three-dimensional HAFM have similar qualitative features to the energy spectra of the two-dimensional HAFM on a square lattice. We also discuss the features that can provide experimental measures for the strength of the interlayer coupling, spatial anisotropy parameter, and magnetic frustration.
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan
2014-09-01
In this paper, feature extraction and pattern recognition of distributed optical fiber sensing signals are studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder and rain signals), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. We choose the MFCC characteristic vector to be 12-dimensional. For the wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, from which 64 frequency components are extracted as the characteristic vector. In the process of pattern recognition, the value of the diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
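A minimal sketch of the three feature extractors, using librosa and PyWavelets, is given below. The sample rate, the specific Daubechies wavelet ('db4'), and the frame averaging are assumptions for illustration; the paper does not specify these choices, and the RBF network stage is omitted.

```python
import numpy as np
import librosa
import pywt

def mfcc_features(signal, sr):
    """12-dimensional MFCC vector (frame-averaged)."""
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=12).mean(axis=1)

def wavelet_packet_features(signal, wavelet="db4", level=6):
    """64-dimensional energy and Shannon-entropy vectors from the level-6 packet nodes."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")            # 2**6 = 64 frequency bands
    energy = np.array([np.sum(n.data ** 2) for n in nodes])
    entropy = []
    for n in nodes:
        p = n.data ** 2 / (np.sum(n.data ** 2) + 1e-12)   # normalized coefficient energies
        entropy.append(-np.sum(p * np.log2(p + 1e-12)))   # Shannon entropy per band
    return energy, np.array(entropy)
```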
Higher-dimensional Bianchi type-VIh cosmologies
NASA Astrophysics Data System (ADS)
Lorenz-Petzold, D.
1985-09-01
The higher-dimensional perfect fluid equations of a generalization of the (1 + 3)-dimensional Bianchi type-VIh space-time are discussed. Bianchi type-V and Bianchi type-III space-times are also included as special cases. It is shown that the Chodos-Detweiler (1980) mechanism of cosmological dimensional-reduction is possible in these cases.
A fast image matching algorithm based on key points
NASA Astrophysics Data System (ADS)
Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng
2014-05-01
Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved celerity key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate insecure edge points in order to obtain key points with higher stability. This approach to detecting key points has the characteristics of a small amount of computation, high positioning accuracy and strong anti-noise ability; (2) PCA-SIFT is utilized to describe the key points. A 128-dimensional vector is formed for each extracted key point based on the SIFT method. A low dimensional feature space was established by the eigenvectors of all the key points, and each eigenvector was projected onto the feature space to form a low dimensional eigenvector. The key points were re-described by the dimension-reduced eigenvectors. After reducing the dimension by PCA, the descriptor was reduced from the original 128 dimensions to 20. This reduces the dimensionality of the approximate nearest-neighbor search, thereby increasing overall speed; (3) The distance ratio between the nearest neighbour and the second nearest neighbour is used as the criterion for selecting initial matching points, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard false matched point pairs; and (4) An affine transformation model is introduced to correct the coordinate difference between the real-time image and the reference image, resulting in the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and ability to overcome rotation. The results show the effectiveness of the approach.
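The overall pipeline (FAST keypoints, SIFT descriptors reduced by PCA to 20 dimensions, ratio-test matching, affine estimation) can be sketched with OpenCV and scikit-learn as below. The fixed FAST threshold and the RANSAC affine fit stand in for the paper's self-adapting threshold and heuristic geometric restriction, and all parameter values are illustrative.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def match_images(img_ref, img_rt, ratio=0.7):
    """FAST keypoints + PCA-reduced SIFT descriptors + ratio test + affine model."""
    fast = cv2.FastFeatureDetector_create(threshold=25)   # fixed threshold as a stand-in
    sift = cv2.SIFT_create()
    kp1 = fast.detect(img_ref, None)
    kp2 = fast.detect(img_rt, None)
    kp1, d1 = sift.compute(img_ref, kp1)                  # 128-dim SIFT descriptors
    kp2, d2 = sift.compute(img_rt, kp2)

    pca = PCA(n_components=20).fit(np.vstack([d1, d2]))   # 128 -> 20 dimensions
    d1 = pca.transform(d1).astype(np.float32)
    d2 = pca.transform(d2).astype(np.float32)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]   # ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)      # affine correction
    return A, inliers
```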
A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions
Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan; ...
2017-04-24
Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. A linear discriminant analysis classifier was trained using DASP-generated features and was blind tested, resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Finally, we show that a rank reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.
NASA Astrophysics Data System (ADS)
Poncelet, Carine; Merz, Ralf; Merz, Bruno; Parajka, Juraj; Oudin, Ludovic; Andréassian, Vazken; Perrin, Charles
2017-08-01
Most previous assessments of hydrologic model performance are fragmented, based on small numbers of catchments, different methods or time periods, and do not link the results to landscape or climate characteristics. This study uses large-sample hydrology to identify major catchment controls on daily runoff simulations. It is based on a conceptual lumped hydrological model (GR6J), a collection of 29 catchment characteristics, a multinational set of 1103 catchments located in Austria, France, and Germany, and four runoff model efficiency criteria. Two analyses are conducted to assess how features and criteria are linked: (i) a one-dimensional analysis based on the Kruskal-Wallis test and (ii) a multidimensional analysis based on regression trees that investigates the interplay between features. The catchment features most affecting model performance are the flashiness of precipitation and streamflow (computed as the ratio of absolute day-to-day fluctuations to the total amount in a year), the seasonality of evaporation, the catchment area, and the catchment aridity. Nonflashy, nonseasonal, large, and nonarid catchments show the best performance for all the tested criteria. We argue that this higher performance is due to fewer nonlinear responses (higher correlation between precipitation and streamflow) and lower input and output variability for such catchments. Finally, we show that, compared to national sets, multinational sets increase the transferability of results because they explore a wider range of hydroclimatic conditions.
Effects of band selection on endmember extraction for forestry applications
NASA Astrophysics Data System (ADS)
Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis
2014-10-01
In spectral unmixing theory, data reduction techniques play an important role, as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches for dimensionality reduction. Feature extraction techniques reduce the dimensionality of the hyperspectral data by applying transforms to the data. Feature selection techniques retain the physical meaning of the data by selecting a set of bands from the input hyperspectral dataset that mainly contains the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of existing state-of-the-art dimensionality reduction methods set criteria on the spectral information derived from the whole wavelength range in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with the data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations in specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll and xanthophyll determine the wavelengths where tree species, diseases, etc., can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc., are considered as the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones. More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. The experiments comprise the application of well-known signal subspace estimation and endmember extraction methods to hyperspectral imagery of a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using the selected bands.
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail
2015-04-01
The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13 dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevance; and 2) adaptive GRNN applied to evaluate all possible models N [in the case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases, training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictability of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, was studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press. With a CD: data, software, guides. (2009). 2. Kanevski M. Spatial Predictions of Soil Contamination Using General Regression Neural Networks. Systems Research and Information Systems, Volume 8, Number 4, 1999. 3. Robert S., Foresti L., Kanevski M. Spatial prediction of monthly wind speeds in complex terrain with adaptive general regression neural networks. International Journal of Climatology, 33, pp. 1793-1804, 2013.
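A minimal NumPy sketch of the anisotropic Nadaraya-Watson (GRNN) estimator and its leave-one-out tuning is given below. One bandwidth per input dimension plays the role of the anisotropic kernel, and a large tuned bandwidth for a feature is read here as low relevance of that feature; the optimization loop over bandwidths is omitted, and all names are placeholders.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigmas):
    """Anisotropic Nadaraya-Watson (GRNN) estimate: one bandwidth per input feature."""
    d2 = (((X_query[:, None, :] - X_train[None, :, :]) / sigmas) ** 2).sum(-1)
    W = np.exp(-0.5 * d2)                            # Gaussian kernel weights
    return (W @ y_train) / np.maximum(W.sum(axis=1), 1e-12)

def loo_error(X, y, sigmas):
    """Leave-one-out RMSE used to tune (and hence rank) the anisotropic bandwidths."""
    n = len(X)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        preds[i] = grnn_predict(X[mask], y[mask], X[i:i + 1], sigmas)[0]
    return np.sqrt(np.mean((preds - y) ** 2))
```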
Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C
2017-07-01
The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. STUDY DESIGN: Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestation of 29+4 weeks (25+0 to 36+1). The 20 3-dimensional surface meshes generated were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median and interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group) (P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
Tonal Interface to MacroMolecules (TIMMol): A Textual and Tonal Tool for Molecular Visualization
ERIC Educational Resources Information Center
Cordes, Timothy J.; Carlson, C. Britt; Forest, Katrina T.
2008-01-01
We developed the three-dimensional visualization software, Tonal Interface to MacroMolecules or TIMMol, for studying atomic coordinates of protein structures. Key features include audio tones indicating x, y, z location, identification of the cursor location in one-dimensional and three-dimensional space, textual output that can be easily linked…
Elucidating high-dimensional cancer hallmark annotation via enriched ontology.
Yan, Shankai; Wong, Ka-Chun
2017-09-01
Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.
Li, Yushuang; Yang, Jiasheng; Zhang, Yi
2016-01-01
In this paper, we propose a novel alignment-free method for comparing the similarity of protein sequences. We first encode a protein sequence into a 440 dimensional feature vector consisting of a 400 dimensional Pseudo-Markov transition probability vector among the 20 amino acids, a 20 dimensional content ratio vector, and a 20 dimensional position ratio vector of the amino acids in the sequence. By evaluating the Euclidean distances among the representative vectors, we compare the similarity of protein sequences. We then apply this method to the ND5 dataset consisting of the ND5 protein sequences of 9 species, and to the F10 and G11 datasets representing two of the xylanase-containing glycoside hydrolase families, i.e., families 10 and 11. As a result, our method achieves a correlation coefficient of 0.962 with the canonical protein sequence aligner ClustalW on the ND5 dataset, much higher than those of 5 other popular alignment-free methods. In addition, we successfully separate the xylanase sequences of the F10 family and the G11 family and illustrate that the F10 family is more heat stable than the G11 family, consistent with a few previous studies. Moreover, we prove mathematically an identity equation involving the Pseudo-Markov transition probability vector and the amino acid content ratio vector. PMID:27918587
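The 440-dimensional encoding can be sketched as below. The row-normalized transition counts and the interpretation of the position ratio as the mean normalized position of each residue type are assumptions made for illustration; the paper's exact definitions may differ.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AMINO)}

def encode(seq):
    """440-dim vector: 400 pseudo-Markov transition probabilities, 20 content ratios,
    and 20 position ratios (taken here as mean normalized residue positions)."""
    seq = [a for a in seq.upper() if a in IDX]
    n = len(seq)
    trans = np.zeros((20, 20))
    for a, b in zip(seq[:-1], seq[1:]):
        trans[IDX[a], IDX[b]] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)    # row-normalized transitions
    counts = np.bincount([IDX[a] for a in seq], minlength=20)
    content = counts / n                                         # content ratio
    pos = np.zeros(20)
    for i, a in enumerate(seq, start=1):
        pos[IDX[a]] += i
    pos /= np.maximum(counts, 1) * n                             # mean normalized position
    return np.concatenate([trans.ravel(), content, pos])

def distance(s1, s2):
    """Euclidean distance between two encoded sequences."""
    return np.linalg.norm(encode(s1) - encode(s2))
```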
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
The advantages of high-resolution CT scanners have allowed improved detection of lung cancers. Recently released positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study shows the efficacy of CT based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance in predicting the likelihood of RFS for patients with NSCLC.
Classification of ABO3 perovskite solids: a machine learning study
Pilania, G.; Balachandran, P. V.; Gubernatis, J. E.; ...
2015-07-23
Here we explored the use of machine learning methods for classifying whether a particular ABO3 chemistry forms a perovskite or non-perovskite structured solid. Starting with three sets of feature pairs (the tolerance and octahedral factors, the A and B ionic radii relative to the radius of O, and the bond valence distances between the A and B ions from the O atoms), we used machine learning to create a hyper-dimensional partial dependency structure plot using all three feature pairs or any two of them. Doing so increased the accuracy of our predictions by 2–3 percentage points over using any one pair. We also included the Mendeleev numbers of the A and B atoms in this set of feature pairs. Moreover, doing this and using the capabilities of our machine learning algorithm, the gradient tree boosting classifier, enabled us to generate a new type of structure plot that has the simplicity of one based on using just the Mendeleev numbers, but with the added advantages of having a higher accuracy and providing a measure of likelihood of the predicted structure.
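The classification step can be sketched with scikit-learn's gradient-boosted trees on the tolerance/octahedral feature pair, as below. The toy radii, the labeling rule, and the hyperparameters are placeholders; the published work may have used a different gradient boosting implementation and settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def goldschmidt_features(r_A, r_B, r_O=1.40):
    """Feature pair: Goldschmidt tolerance factor t and octahedral factor mu."""
    t = (r_A + r_O) / (np.sqrt(2.0) * (r_B + r_O))
    mu = r_B / r_O
    return np.column_stack([t, mu])

# Toy stand-in data: random ionic radii (in angstroms) and a crude labeling rule,
# only to make the sketch runnable; real labels would come from known ABO3 structures.
rng = np.random.default_rng(0)
r_A, r_B = rng.uniform(0.8, 1.8, 500), rng.uniform(0.4, 1.0, 500)
X = goldschmidt_features(r_A, r_B)
y = ((X[:, 0] > 0.82) & (X[:, 0] < 1.10) & (X[:, 1] > 0.41)).astype(int)

clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
print(cross_val_score(clf, X, y, cv=5).mean())
```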
Spike sorting based upon machine learning algorithms (SOMA).
Horton, P M; Nicol, A U; Kendrick, K M; Feng, J F
2007-02-15
We have developed a spike sorting method, using a combination of various machine learning algorithms, to analyse electrophysiological data and automatically determine the number of sampled neurons from an individual electrode, and discriminate their activities. We discuss extensions to a standard unsupervised learning algorithm (Kohonen), as a simple application of this technique would only identify a known number of clusters. Our extra techniques automatically identify the number of clusters within the dataset, and their sizes, thereby reducing the chance of misclassification. We also discuss a new pre-processing technique, which transforms the data into a higher dimensional feature space revealing separable clusters. Using principal component analysis (PCA) alone may not achieve this. Our new approach appends the features acquired using PCA with features describing the geometric shapes that constitute a spike waveform. To validate our new spike sorting approach, we have applied it to multi-electrode array datasets acquired from the rat olfactory bulb and from the sheep infero-temporal cortex, and to simulated data. The SOMA software is available at http://www.sussex.ac.uk/Users/pmh20/spikes.
Feeling form: the neural basis of haptic shape perception.
Yau, Jeffrey M; Kim, Sung Soo; Thakur, Pramodsingh H; Bensmaia, Sliman J
2016-02-01
The tactile perception of the shape of objects critically guides our ability to interact with them. In this review, we describe how shape information is processed as it ascends the somatosensory neuraxis of primates. At the somatosensory periphery, spatial form is represented in the spatial patterns of activation evoked across populations of mechanoreceptive afferents. In the cerebral cortex, neurons respond selectively to particular spatial features, like orientation and curvature. While feature selectivity of neurons in the earlier processing stages can be understood in terms of linear receptive field models, higher order somatosensory neurons exhibit nonlinear response properties that result in tuning for more complex geometrical features. In fact, tactile shape processing bears remarkable analogies to its visual counterpart and the two may rely on shared neural circuitry. Furthermore, one of the unique aspects of primate somatosensation is that it contains a deformable sensory sheet. Because the relative positions of cutaneous mechanoreceptors depend on the conformation of the hand, the haptic perception of three-dimensional objects requires the integration of cutaneous and proprioceptive signals, an integration that is observed throughout somatosensory cortex. Copyright © 2016 the American Physiological Society.
In vivo Degradation of Three-Dimensional Silk Fibroin Scaffolds
Wang, Yongzhong; Rudym, Darya D.; Walsh, Ashley; Abrahamsen, Lauren; Kim, Hyeon-Joo; Kim, Hyun Suk; Kirker-Head, Carl; Kaplan, David L.
2011-01-01
Three-dimensional porous scaffolds prepared from regenerated silk fibroin using either an all aqueous process or a process involving an organic solvent, hexafluoroisopropanol (HFIP) have shown promise in cell culture and tissue engineering applications. However, their biocompatibility and in vivo degradation has not been fully established. The present study was conducted to systematically investigate how processing method (aqueous vs. organic solvent) and processing variables (silk fibroin concentration and pore size) affect the short-term (up to 2 months) and long-term (up to 1 year) in vivo behavior of the protein scaffolds in both nude and Lewis rats. The samples were analyzed by histology for scaffold morphological changes and tissue ingrowth, and by real-time RT-PCR and immunohistochemistry for immune responses. Throughout the period of implantation, all scaffolds were well-tolerated by the host animals and immune responses to the implants were mild. Most scaffolds prepared from the all aqueous process degraded to completion between two and six months, while those prepared from organic solvent (hexafluoroisopropanol (HFIP)) process persisted beyond one year. Due to widespread cellular invasion throughout the scaffold, the degradation of aqueous-derived scaffolds appears to be more homogeneous than that of HFIP-derived scaffolds. In general and especially for the HFIP-derived scaffolds, a higher original silk fibroin concentration (e.g. 17%) and smaller pore size (e.g. 100–200 µm) resulted in lower levels of tissue ingrowth and slower degradation. These results demonstrate that the in vivo behavior of the three-dimensional silk fibroin scaffolds is related to the morphological and structural features that resulted from different scaffold preparation processes. The insights gained in this study can serve as a guide for processing scenarios to match desired morphological and structural features and degradation time with tissue-specific applications. PMID:18502501
Tighe, Patrick J; Lucas, Stephen D; Edwards, David A; Boezaart, André P; Aytug, Haldun; Bihorac, Azra
2012-10-01
The purpose of this project was to determine whether machine-learning classifiers could predict which patients would require a preoperative acute pain service (APS) consultation. Retrospective cohort. University teaching hospital. The records of 9,860 surgical patients posted between January 1 and June 30, 2010 were reviewed. Request for APS consultation. A cohort of machine-learning classifiers was compared according to its ability or inability to classify surgical cases as requiring a request for a preoperative APS consultation. Classifiers were then optimized utilizing ensemble techniques. Computational efficiency was measured with the central processing unit processing times required for model training. Classifiers were tested using the full feature set, as well as the reduced feature set that was optimized using a merit-based dimensional reduction strategy. Machine-learning classifiers correctly predicted preoperative requests for APS consultations in 92.3% (95% confidence intervals [CI], 91.8-92.8) of all surgical cases. Bayesian methods yielded the highest area under the receiver operating curve (0.87, 95% CI 0.84-0.89) and lowest training times (0.0018 seconds, 95% CI, 0.0017-0.0019 for the NaiveBayesUpdateable algorithm). An ensemble of high-performing machine-learning classifiers did not yield a higher area under the receiver operating curve than its component classifiers. Dimensional reduction decreased the computational requirements for multiple classifiers, but did not adversely affect classification performance. Using historical data, machine-learning classifiers can predict which surgical cases should prompt a preoperative request for an APS consultation. Dimensional reduction improved computational efficiency and preserved predictive performance. Wiley Periodicals, Inc.
Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun
2015-01-01
Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. Plaster models of a subject's upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Comparisons between the coordinate values and the actual values of the 30 intersections on the detection target were then analyzed using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate values and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed that the differences between the measured and actual coordinate values of the 30 cross points were not statistically significant (P > 0.05). Conclusions Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording accuracy of approximately ± 0.1 mm, and is therefore suitable for clinical application. Certainly, further research is necessary to confirm the clinical applications of the method. PMID:26375800
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well known preprocessing steps in the field of bioinformatics, and several prediction tools are available based on these techniques. Studies show that a well tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well tuned KPCA and a Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
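A minimal scikit-learn sketch of the KPCA-then-classify pipeline is shown below. Scikit-learn has no LS-SVM, so an RBF SVC stands in for it, and the paper's data-driven bandwidth criterion is replaced here by a simple cross-validated grid search; the synthetic data and all parameter values are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Toy stand-in for a microarray problem: many features, few samples
X, y = make_classification(n_samples=80, n_features=2000, n_informative=40, random_state=0)

pipe = Pipeline([
    ("kpca", KernelPCA(kernel="rbf", n_components=20)),
    ("clf", SVC(kernel="rbf")),          # stand-in for the LS-SVM used in the paper
])
grid = GridSearchCV(pipe,
                    {"kpca__gamma": [1e-4, 1e-3, 1e-2],   # bandwidth chosen by CV here,
                     "clf__C": [1, 10, 100]},             # not by the paper's criterion
                    cv=5, scoring="roc_auc")
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```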
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small samples sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
Vacuum Stability in Split SUSY and Little Higgs Models
NASA Astrophysics Data System (ADS)
Datta, Alakabha; Zhang, Xinmin
We study the stability of the effective Higgs potential in the split supersymmetry and Little Higgs models. In particular, we study the effects of higher dimensional operators in the effective potential on the Higgs mass predictions. We find that the size and sign of the higher dimensional operators can significantly change the Higgs mass required to maintain vacuum stability in Split SUSY models. In the Little Higgs models the effects of higher dimensional operators can be large because of a relatively lower cutoff scale. Working with a specific model we find that a contribution from the higher dimensional operator with coefficient of O(1) can destabilize the vacuum.
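For orientation, vacuum-stability analyses of this kind typically augment the renormalizable Higgs potential with the leading higher dimensional operator suppressed by the cutoff scale. The schematic form below is the standard one; the coefficient c6 (its size and sign) is exactly the quantity whose effect the abstract describes, and no value is taken from the paper.

```latex
% Schematic effective potential with a dimension-six operator suppressed by the
% cutoff scale \Lambda; the coefficient c_6 and its sign are illustrative only.
V_{\mathrm{eff}}(H) \;=\; -\,m^{2}\,|H|^{2} \;+\; \lambda\,|H|^{4}
  \;+\; \frac{c_{6}}{\Lambda^{2}}\,|H|^{6},
\qquad
\text{vacuum stability requires } V_{\mathrm{eff}} \text{ to remain bounded below up to } \Lambda .
```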
Unlabored system motion by specially conditioned electromagnetic fields in higher dimensional realms
NASA Astrophysics Data System (ADS)
David Froning, H.; Meholic, Gregory V.
2010-01-01
This third of three papers explores the possibility of swift, stress-less system transitions between slower-than-light and faster-than-light speeds with negligible net expenditure of system energetics. The previous papers derived a realm of higher dimensionality than 4-D spacetime that enabled such unlabored motion, and showed that fields that could propel and guide systems on unlabored paths in the higher dimensional realm must be fields that have been conditioned to SU(2) (or higher) Lie group symmetry. This paper shows that the system's surrounding vacuum dielectric ɛμ, within the higher dimensional realm, is a vector (not scalar) quantity with fixed magnitude ɛ0μ0 and a direction that changes within the realm with changing system speed. Thus, ɛμ generated by the system's EM field must remain tuned to the vacuum ɛ0μ0 in both magnitude and direction during swift, unlabored system transitions between slower- and faster-than-light speeds. As a result, the system's changing path and speed are such that the magnitude of the higher dimensional realm's ɛ0μ0 is not disturbed. It is also shown that the system's flight trajectories associated with its swift, unlabored transitions between zero and infinite speed can be represented by curved paths traced out within the higher dimensional realm.
Narin, B; Ozyörük, Y; Ulas, A
2014-05-30
This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid explosive ingredients. The two-dimensional model is a full two-phase formulation, based on a highly coupled system of partial differential equations involving basic flow conservation equations and some constitutive relations borrowed from one-dimensional studies in the open literature. The whole system is solved using an optimized, high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment the central differencing and prevent excessive dispersion. The source terms of the equations describing particle-gas interactions in terms of momentum and energy transfers make the equation system quite stiff, and hence its explicit integration difficult. To ease these difficulties, a time-split approach is used, allowing larger time steps. In the paper, the physical model for the sources of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects including pore collapse, sublimation, pyrolysis, etc. are not taken into account for ignition and growth, and a basic temperature switch is applied in the calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code, and the results are observed to be in good agreement with those of commercial software. Copyright © 2014 Elsevier B.V. All rights reserved.
Experimental Investigation of Shock-Shock Interactions Over a 2-D Wedge at M=6
NASA Technical Reports Server (NTRS)
Jones, Michelle L.
2013-01-01
The effects of fin-leading-edge radius and sweep angle on peak heating rates due to shock-shock interactions were investigated in the NASA Langley Research Center 20-inch Mach 6 Air Tunnel. The fin model leading edges, which represent cylindrical leading edges or struts on hypersonic vehicles, were varied from 0.25 inches to 0.75 inches in radius. A 9° wedge generated a planar oblique shock at 16.7° to the flow that intersected the fin bow shock, producing a shock-shock interaction that impinged on the fin leading edge. The fin angle of attack was varied from 0° (normal to the free-stream) to 15° and 25° swept forward. Global temperature data were obtained from the surface of the fused silica fins through phosphor thermography. Metal oil flow models with the same geometries as the fused silica models were used to visualize the streamline patterns for each angle of attack. High-speed zoom-schlieren videos were recorded to show the features and temporal unsteadiness of the shock-shock interactions. The temperature data were analyzed using one-dimensional semi-infinite as well as one- and two-dimensional finite-volume methods to determine the proper heat transfer analysis approach to minimize errors from lateral heat conduction due to the presence of strong surface temperature gradients induced by the shock interactions. The general trends in leading-edge heat transfer behavior were similar across the three shock-shock interactions for the test articles with varying leading-edge radius. The dimensional peak heat transfer coefficient augmentation increased with decreasing leading-edge radius. The dimensional peak heat transfer output from the two-dimensional code was about 20% higher than the value from a standard, semi-infinite one-dimensional method.
Invariant resolutions for several Fueter operators
NASA Astrophysics Data System (ADS)
Colombo, Fabrizio; Souček, Vladimir; Struppa, Daniele C.
2006-07-01
A proper generalization of complex function theory to higher dimensions is Clifford analysis, and an analogue of holomorphic functions of several complex variables was recently described as the space of solutions of several Dirac equations. The four-dimensional case has special features and is closely connected to functions of quaternionic variables. In this paper we present an approach to the Dolbeault sequence for several quaternionic variables based on symmetries and representation theory. In particular we prove that the resolution of the Cauchy-Fueter system obtained algebraically, via Gröbner bases techniques, is equivalent to the one obtained by R.J. Baston (J. Geom. Phys. 1992).
Electrophoretic and field-effect graphene for all-electrical DNA array technology.
Xu, Guangyu; Abbott, Jeffrey; Qin, Ling; Yeung, Kitty Y M; Song, Yi; Yoon, Hosang; Kong, Jing; Ham, Donhee
2014-09-05
Field-effect transistor biomolecular sensors based on low-dimensional nanomaterials boast sensitivity, label-free operation and chip-scale construction. Chemical vapour deposition graphene is especially well suited for multiplexed electronic DNA array applications, since its large two-dimensional morphology readily lends itself to top-down fabrication of transistor arrays. Nonetheless, graphene field-effect transistor DNA sensors have been studied mainly at single-device level. Here we create, from chemical vapour deposition graphene, field-effect transistor arrays with two features representing steps towards multiplexed DNA arrays. First, a robust array yield--seven out of eight transistors--is achieved with a 100-fM sensitivity, on par with optical DNA microarrays and at least 10 times higher than prior chemical vapour deposition graphene transistor DNA sensors. Second, each graphene acts as an electrophoretic electrode for site-specific probe DNA immobilization, and performs subsequent site-specific detection of target DNA as a field-effect transistor. The use of graphene as both electrode and transistor suggests a path towards all-electrical multiplexed graphene DNA arrays.
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
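The higher criticism thresholding step mentioned above can be sketched directly from feature-wise p-values. The snippet below (numpy/scipy) follows the standard Donoho-Jin style HC threshold applied to two-sample t-test p-values; it illustrates that ingredient only, not the authors' PCC-based sample-size procedure, and the synthetic data and alpha0 value are arbitrary.

```python
# Minimal sketch of higher criticism (HC) thresholding for feature screening
# (Donoho-Jin style); not the authors' full PCC-based sample-size procedure.
import numpy as np
from scipy import stats

def hc_threshold_features(X, y, alpha0=0.1):
    """Return indices of features kept by HC thresholding of two-sample t-test p-values."""
    p = np.array([stats.ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(X.shape[1])])
    d = p.size
    order = np.argsort(p)
    p_sorted = p[order]
    i = np.arange(1, d + 1)
    hc = np.sqrt(d) * (i / d - p_sorted) / np.sqrt(p_sorted * (1 - p_sorted) + 1e-12)
    search = i <= int(alpha0 * d)                 # restrict to the smallest alpha0*d p-values
    i_star = np.argmax(np.where(search, hc, -np.inf))
    return order[: i_star + 1]                    # features with p-values below the HC threshold

# Example on synthetic data: 200 noise features plus 10 mean-shifted ones.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 210)); y = np.repeat([0, 1], 40)
X[y == 1, :10] += 1.0
print("kept features:", hc_threshold_features(X, y))
```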
Competition in high dimensional spaces using a sparse approximation of neural fields.
Quinton, Jean-Charles; Girau, Bernard; Lefort, Mathieu
2011-01-01
The Continuum Neural Field Theory implements competition within topologically organized neural networks with lateral inhibitory connections. However, due to the polynomial complexity of matrix-based implementations, updating dense representations of the activity becomes computationally intractable when an adaptive resolution or an arbitrary number of input dimensions is required. This paper proposes an alternative to self-organizing maps with a sparse implementation based on Gaussian mixture models, trading redundancy for higher computational efficiency and alleviating constraints on the underlying substrate. This version reproduces the emergent attentional properties of the original equations, by directly applying them within a continuous approximation of a high dimensional neural field. The model is compatible with preprocessed sensory flows but can also be interfaced with artificial systems. This is particularly important for sensorimotor systems, where decisions and motor actions must be taken and updated in real-time. Preliminary tests are performed on a reactive color tracking application, using spatially distributed color features.
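For reference, the dense dynamics being approximated are the standard continuum (Amari-type) neural field equation, with the sparse implementation replacing the field u by a small sum of Gaussian components; the forms below are textbook ones, not equations quoted from the paper.

```latex
% Standard continuum neural field dynamics (Amari form) and the sparse
% Gaussian-mixture approximation of the activity; w is a lateral-inhibition kernel.
\tau\,\frac{\partial u(\mathbf{x},t)}{\partial t}
  = -\,u(\mathbf{x},t)
  + \int_{\Omega} w(\mathbf{x}-\mathbf{x}')\, f\!\big(u(\mathbf{x}',t)\big)\, d\mathbf{x}'
  + I(\mathbf{x},t),
\qquad
u(\mathbf{x},t) \;\approx\; \sum_{k=1}^{K} a_k(t)\,
  \exp\!\left(-\frac{\lVert\mathbf{x}-\boldsymbol{\mu}_k(t)\rVert^{2}}{2\,\sigma_k^{2}(t)}\right).
```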
Symmetry-enforced stability of interacting Weyl and Dirac semimetals
NASA Astrophysics Data System (ADS)
Carlström, Johan; Bergholtz, Emil J.
2018-04-01
The nodal and effectively relativistic dispersion featuring in a range of novel materials including two-dimensional graphene and three-dimensional Dirac and Weyl semimetals has attracted enormous interest during the past decade. Here, by studying the structure and symmetry of the diagrammatic expansion, we show that these nodal touching points are in fact perturbatively stable to all orders with respect to generic two-body interactions. For effective low-energy theories relevant for single and multilayer graphene, type-I and type-II Weyl and Dirac semimetals, as well as Weyl points with higher topological charge, this stability is shown to be a direct consequence of a spatial symmetry that anticommutes with the effective Hamiltonian while leaving the interaction invariant. A more refined argument is applied to the honeycomb lattice model of graphene showing that its Dirac points are also perturbatively stable to all orders. We also give examples of nodal Hamiltonians that acquire a gap from interactions as a consequence of symmetries different from those of Weyl and Dirac materials.
Dissipative N-point-vortex Models in the Plane
NASA Astrophysics Data System (ADS)
Shashikanth, Banavara N.
2010-02-01
A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.
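For concreteness, the planar point-vortex system referred to here has the standard Hamiltonian and SE(2)-related invariants written below (textbook forms, not quoted from the paper); the construction dissipates H while preserving the level sets of the linear and angular impulses.

```latex
% Standard planar N-vortex Hamiltonian and the SE(2) invariants whose level sets
% the dissipative vector field is built to preserve (standard forms).
H = -\frac{1}{4\pi}\sum_{i<j}\Gamma_i\,\Gamma_j\,\ln\lvert\mathbf{r}_i-\mathbf{r}_j\rvert ,
\qquad
P_x = \sum_i \Gamma_i\,x_i ,\quad
P_y = \sum_i \Gamma_i\,y_i ,\quad
I = \sum_i \Gamma_i\left(x_i^{2}+y_i^{2}\right).
```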
Experimental tests of linear and nonlinear three-dimensional equilibrium models in DIII-D
King, Josh D.; Strait, Edward J.; Lazerson, Samuel A.; ...
2015-07-01
DIII-D experiments using new detailed magnetic diagnostics show that linear, ideal magnetohydrodynamics (MHD) theory quantitatively describes the magnetic structure (as measured externally) of three-dimensional (3D) equilibria resulting from applied fields with toroidal mode number n = 1, while a nonlinear solution to ideal MHD force balance, using the VMEC code, requires the inclusion of n ≥ 1 to achieve similar agreement. Moreover, these tests are carried out near ITER baseline parameters, providing a validated basis on which to exploit 3D fields for plasma control development. We determine scans of the applied poloidal spectrum and edge safety factors which confirm that low-pressure, n = 1 non-axisymmetric tokamak equilibria are a single, dominant, stable eigenmode. But, at higher beta, near the ideal kink mode stability limit in the absence of a conducting wall, the qualitative features of the 3D structure are observed to vary in a way that is not captured by ideal MHD.
Development and Application of a Three-Dimensional Finite Element Vapor Intrusion Model
Pennell, Kelly G.; Bozkurt, Ozgur; Suuberg, Eric M.
2010-01-01
Details of a three-dimensional finite element model of soil vapor intrusion, including the overall modeling process and the stepwise approach, are provided. The model is a quantitative modeling tool that can help guide vapor intrusion characterization efforts. It solves the soil gas continuity equation coupled with the chemical transport equation, allowing for both advective and diffusive transport. Three-dimensional pressure, velocity, and chemical concentration fields are produced from the model. Results from simulations involving common site features, such as impervious surfaces, porous foundation sub-base material, and adjacent structures are summarized herein. The results suggest that site-specific features are important to consider when characterizing vapor intrusion risks. More importantly, the results suggest that soil gas or subslab gas samples taken without proper regard for particular site features may not be suitable for evaluating vapor intrusion risks; rather, careful attention needs to be given to the many factors that affect chemical transport into and around buildings. PMID:19418819
Using learning automata to determine proper subset size in high-dimensional spaces
NASA Astrophysics Data System (ADS)
Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz
2017-03-01
In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective functionality is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and efficiency. First, using an existing weighting function, the feature list is sorted and selected subsets of the list of different sizes are considered. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance in attaining the identified goal.
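The wrapper part of such a scheme can be caricatured with a simple linear reward-inaction automaton whose actions are candidate subset sizes of a pre-ranked feature list. The sketch below assumes scikit-learn; the ranking function, reward shaping, and reward threshold are illustrative choices, not the FSLA algorithm itself.

```python
# Toy sketch of a linear reward-inaction learning automaton choosing among candidate
# subset sizes of a pre-ranked feature list; reward shaping is illustrative, not FSLA.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=200, n_informative=15,
                           random_state=0)
ranking = np.argsort(f_classif(X, y)[0])[::-1]        # filter step: rank features once
sizes = [5, 10, 20, 40, 80]                           # candidate subset sizes (actions)
prob = np.full(len(sizes), 1.0 / len(sizes))          # automaton action probabilities
rng, lr = np.random.default_rng(0), 0.1

for _ in range(50):                                   # automaton interacts with the environment
    a = rng.choice(len(sizes), p=prob)
    cols = ranking[: sizes[a]]
    acc = cross_val_score(KNeighborsClassifier(), X[:, cols], y, cv=3).mean()
    reward = acc - 0.001 * sizes[a]                   # trade accuracy against subset size
    if reward > 0.7:                                  # reinforce the chosen action on reward
        prob = (1 - lr) * prob
        prob[a] += lr                                 # linear reward-inaction update
print("best subset size:", sizes[int(np.argmax(prob))])
```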
Venus - Three-Dimensional Perspective View of Alpha Region
1996-12-02
A portion of Alpha Regio is displayed in this three-dimensional perspective view of the surface of Venus from NASA Magellan spacecraft. In 1963, Alpha Regio was the first feature on Venus to be identified from Earth-based radar.
Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas
2017-12-01
In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM) is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
Feature weight estimation for gene selection: a local hyperlinear learning approach
2014-01-01
Background: Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes, buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results: We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust to the degradation of noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion: Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
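The classic Relief weight update that LHR builds on is easy to state in code. The numpy sketch below implements standard Relief (nearest hit/miss margins), not the local hyperlinear estimator of the paper; the synthetic data simply makes the first two features informative.

```python
# Minimal sketch of the classic Relief weight update that LHR builds on; this is the
# standard margin-based weighting, not the local hyperlinear estimator itself.
import numpy as np

def relief_weights(X, y, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                                 # exclude the sample itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest hit (same class)
        miss = np.argmin(np.where(diff, dist, np.inf))   # nearest miss (other class)
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
w = relief_weights(X, y)
print("top-5 features by weight:", np.argsort(w)[::-1][:5])   # should favour features 0 and 1
```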
Self-organizing neural networks--an alternative way of cluster analysis in clinical chemistry.
Reibnegger, G; Wachter, H
1996-04-15
Supervised learning schemes have been employed by several workers for training neural networks designed to solve clinical problems. We demonstrate that unsupervised techniques can also produce interesting and meaningful results. Using a data set on the chemical composition of milk from 22 different mammals, we demonstrate that self-organizing feature maps (Kohonen networks) as well as a modified version of the error backpropagation technique yield results mimicking conventional cluster analysis. Both techniques are able to project a potentially multi-dimensional input vector onto a two-dimensional space whereby neighborhood relationships remain conserved. Thus, these techniques can be used for reducing the dimensionality of complicated data sets and for enhancing the comprehensibility of features hidden in the data matrix.
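A minimal Kohonen self-organizing feature map of the kind used here fits in a few lines of numpy. The sketch below uses random stand-in data rather than the milk-composition data set, and the grid size, learning-rate and neighborhood schedules are illustrative.

```python
# Tiny Kohonen self-organizing feature map in plain NumPy: projects multi-dimensional
# input vectors onto a 2-D grid while preserving neighborhood relationships.
# Data and parameters are illustrative, not the milk-composition data set.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 5))                       # 22 samples, 5 "chemical" variables
X = (X - X.mean(0)) / X.std(0)

grid = 6                                           # 6x6 map
W = rng.normal(scale=0.1, size=(grid, grid, X.shape[1]))
gy, gx = np.mgrid[0:grid, 0:grid]

for t in range(2000):
    lr = 0.5 * np.exp(-t / 1000)                   # decaying learning rate
    sigma = 3.0 * np.exp(-t / 1000)                # decaying neighborhood radius
    x = X[rng.integers(len(X))]
    d = ((W - x) ** 2).sum(axis=2)
    by, bx = np.unravel_index(np.argmin(d), d.shape)        # best-matching unit
    h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)               # pull the neighborhood toward the sample

# Map each sample to its winning node: similar samples land on nearby nodes.
winners = [np.unravel_index(np.argmin(((W - x) ** 2).sum(axis=2)), (grid, grid)) for x in X]
print(winners)
```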
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: 1) extracted low level features cannot reveal the true semantic concepts; and 2) they usually involve high dimensional data which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between the low level visual features and high level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme which integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
EEG channels reduction using PCA to increase XGBoost's accuracy for stroke detection
NASA Astrophysics Data System (ADS)
Fitriah, N.; Wijaya, S. K.; Fanany, M. I.; Badri, C.; Rezal, M.
2017-07-01
In Indonesia, based on the results of the 2013 Basic Health Research survey, the prevalence of stroke had increased from 8.3‰ (2007) to 12.1‰ (2013). These days, some researchers are using electroencephalography (EEG) results as another option for detecting stroke besides the gold-standard CT scan image. A previous study on the data of stroke and healthy patients in the National Brain Center Hospital (RS PON) used the Brain Symmetry Index (BSI), Delta-Alpha Ratio (DAR), and Delta-Theta-Alpha-Beta Ratio (DTABR) as the features for classification by an Extreme Learning Machine (ELM). The study got 85% accuracy with sensitivity above 86% for acute ischemic stroke detection. Using EEG data means dealing with many data dimensions, which can reduce the accuracy of the classifier (the curse of dimensionality). Principal Component Analysis (PCA) can reduce dimensionality and computation cost without decreasing classification accuracy. XGBoost, as a scalable tree boosting classifier, can solve real-world scale problems (the Higgs Boson and Allstate datasets) using a minimal amount of resources. This paper reuses the same data from RS PON and the features from previous research, preprocessed with PCA and classified with XGBoost, to increase the accuracy with fewer electrodes. The specific reduced set of electrodes improved the accuracy of stroke detection. Our future work will examine algorithms other than PCA to obtain higher accuracy with fewer channels.
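The PCA-then-XGBoost pipeline itself is short to express. The sketch below assumes the scikit-learn and xgboost packages and uses synthetic features as a stand-in for the BSI/DAR/DTABR channel features; the component counts and tree parameters are illustrative.

```python
# Sketch of the PCA-then-XGBoost pipeline on synthetic stand-ins for the EEG channel
# features (BSI/DAR/DTABR); assumes the xgboost and scikit-learn packages.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

X, y = make_classification(n_samples=120, n_features=96, n_informative=12,
                           random_state=0)        # e.g. 32 channels x 3 ratio features

pipe = make_pipeline(
    PCA(n_components=10),                         # reduce the channel-feature dimensionality
    XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```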
Hasan, Che Zawiyah Che; Jailani, Rozita; Md Tahir, Nooritawati; Ilias, Suryani
2017-07-01
Minimal information is known about the three-dimensional (3D) ground reaction forces (GRF) on the gait patterns of individuals with autism spectrum disorders (ASD). The purpose of this study was to investigate whether the 3D GRF components differ significantly between children with ASD and peer controls. Fifteen children with ASD and 25 typically developing (TD) children participated in the study. Two force plates were used to measure the 3D GRF data during walking. Time-series parameterization techniques were employed to extract 17 discrete features from the 3D GRF waveforms. By using the independent t-test and Mann-Whitney U test, significant differences (p<0.05) between the ASD and TD groups were found for four GRF features. Children with ASD demonstrated higher maximum braking force, lower relative time to maximum braking force, and lower relative time to zero force during mid-stance. Children with ASD were also found to have a reduced second peak of vertical GRF in the terminal stance. These major findings suggest that children with ASD experience significant difficulties in supporting their body weight and endure gait instability during the stance phase. The findings of this research are useful to both clinicians and parents who wish to provide these children with appropriate treatments and rehabilitation programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fiáth, Richárd; Beregszászi, Patrícia; Horváth, Domonkos; Wittner, Lucia; Aarts, Arno A A; Ruther, Patrick; Neves, Hercules P; Bokor, Hajnalka; Acsády, László; Ulbert, István
2016-11-01
Recording simultaneous activity of a large number of neurons in distributed neuronal networks is crucial to understand higher order brain functions. We demonstrate the in vivo performance of a recently developed electrophysiological recording system comprising a two-dimensional, multi-shank, high-density silicon probe with integrated complementary metal-oxide semiconductor electronics. The system implements the concept of electronic depth control (EDC), which enables the electronic selection of a limited number of recording sites on each of the probe shafts. This innovative feature of the system permits simultaneous recording of local field potentials (LFP) and single- and multiple-unit activity (SUA and MUA, respectively) from multiple brain sites with high quality and without the actual physical movement of the probe. To evaluate the in vivo recording capabilities of the EDC probe, we recorded LFP, MUA, and SUA in acute experiments from cortical and thalamic brain areas of anesthetized rats and mice. The advantages of large-scale recording with the EDC probe are illustrated by investigating the spatiotemporal dynamics of pharmacologically induced thalamocortical slow-wave activity in rats and by the two-dimensional tonotopic mapping of the auditory thalamus. In mice, spatial distribution of thalamic responses to optogenetic stimulation of the neocortex was examined. Utilizing the benefits of the EDC system may result in a higher yield of useful data from a single experiment compared with traditional passive multielectrode arrays, and thus in the reduction of animals needed for a research study. Copyright © 2016 the American Physiological Society.
Binary classification of items of interest in a repeatable process
Abell, Jeffrey A.; Spicer, John Patrick; Wincek, Michael Anthony; Wang, Hui; Chakraborty, Debejyo
2014-06-24
A system includes host and learning machines in electrical communication with sensors positioned with respect to an item of interest, e.g., a weld, and memory. The host executes instructions from memory to predict a binary quality status of the item. The learning machine receives signals from the sensor(s), identifies candidate features, and extracts features from the candidates that are more predictive of the binary quality status relative to other candidate features. The learning machine maps the extracted features to a dimensional space that includes most of the items from a passing binary class and excludes all or most of the items from a failing binary class. The host also compares the received signals for a subsequent item of interest to the dimensional space to thereby predict, in real time, the binary quality status of the subsequent item of interest.
z -Weyl gravity in higher dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moon, Taeyoon; Oh, Phillial
We consider higher dimensional gravity in which the four dimensional spacetime and extra dimensions are not treated on an equal footing. The anisotropy is implemented in the ADM decomposition of the higher dimensional metric by requiring the foliation preserving diffeomorphism invariance adapted to the extra dimensions, thus keeping the general covariance only for the four dimensional spacetime. The conformally invariant gravity can be constructed with an extra (Weyl) scalar field and a real parameter z which describes the degree of anisotropy of the conformal transformation between the spacetime and extra dimensional metrics. In the zero mode effective 4D action, it reduces to a four-dimensional scalar-tensor theory coupled with a nonlinear sigma model described by the extra dimensional metrics. There are no restrictions on the value of z at the classical level and possible applications to the cosmological constant problem with a specific choice of z are discussed.
Mof-Tree: A Spatial Access Method To Manipulate Multiple Overlapping Features.
ERIC Educational Resources Information Center
Manolopoulos, Yannis; Nardelli, Enrico; Papadopoulos, Apostolos; Proietti, Guido
1997-01-01
Investigates the manipulation of large sets of two-dimensional data representing multiple overlapping features, and presents a new access method, the MOF-tree. Analyzes storage requirements and time with respect to window query operations involving multiple features. Examines both the pointer-based and pointerless MOF-tree representations.…
Youth with Psychopathy Features Are Not a Discrete Class: A Taxometric Analysis
ERIC Educational Resources Information Center
Murrie, Daniel C.; Marcus, David K.; Douglas, Kevin S.; Lee, Zina; Salekin, Randall T.; Vincent, Gina
2007-01-01
Background: Recently, researchers have sought to measure psychopathy-like features among youth in hopes of identifying children who may be progressing toward a particularly destructive form of adult pathology. However, it remains unclear whether psychopathy-like personality features among youth are best conceptualized as dimensional (distributed…
Confocal Imaging of porous media
NASA Astrophysics Data System (ADS)
Shah, S.; Crawshaw, D.; Boek, D.
2012-12-01
Carbonate rocks, which hold approximately 50% of the world's oil and gas reserves, have a very complicated and heterogeneous structure in comparison with sandstone reservoir rock. We present advances with different techniques to image, reconstruct, and characterize statistically the micro-geometry of carbonate pores. The main goal here is to develop a technique to obtain two dimensional and three dimensional images using Confocal Laser Scanning Microscopy (CLSM). CLSM is used in epi-fluorescent imaging mode, allowing for very high optical resolution of features well below 1 μm in size. Images of pore structures were captured using CLSM imaging, where the pore spaces in the carbonate samples were impregnated with a fluorescent, dyed epoxy-resin and scanned in the x-y plane by a laser probe. We discuss in detail the sample preparation required for confocal imaging to obtain sub-micron resolution images of heterogeneous carbonate rocks. We also discuss the technical and practical aspects of this imaging technique, including its advantages and limitations. We present several examples of its application, including studying pore geometry in carbonates and characterizing sub-resolution porosity in two dimensional images. We then describe approaches to extract statistical information about porosity using image processing and spatial correlation functions. So far we have obtained only limited depth information along the z-axis (~50 μm) for building three dimensional images of carbonate rocks, given the current capabilities and limitations of the CLSM technique. Hence, we plan a novel technique to obtain greater depth information and produce three dimensional images with the sub-micron resolution possible in the lateral and axial planes.
The three-dimensional genome organization of Drosophila melanogaster through data integration.
Li, Qingjiao; Tjong, Harianto; Li, Xiao; Gong, Ke; Zhou, Xianghong Jasmine; Chiolo, Irene; Alber, Frank
2017-07-31
Genome structures are dynamic and non-randomly organized in the nucleus of higher eukaryotes. To maximize the accuracy and coverage of three-dimensional genome structural models, it is important to integrate all available sources of experimental information about a genome's organization. It remains a major challenge to integrate such data from various complementary experimental methods. Here, we present an approach for data integration to determine a population of complete three-dimensional genome structures that are statistically consistent with data from both genome-wide chromosome conformation capture (Hi-C) and lamina-DamID experiments. Our structures resolve the genome at the resolution of topological domains, and reproduce simultaneously both sets of experimental data. Importantly, this data deconvolution framework allows for structural heterogeneity between cells, and hence accounts for the expected plasticity of genome structures. As a case study we choose Drosophila melanogaster embryonic cells, for which both data types are available. Our three-dimensional genome structures have strong predictive power for structural features not directly visible in the initial data sets, and reproduce experimental hallmarks of the D. melanogaster genome organization from independent and our own imaging experiments. Also they reveal a number of new insights about genome organization and its functional relevance, including the preferred locations of heterochromatic satellites of different chromosomes, and observations about homologous pairing that cannot be directly observed in the original Hi-C or lamina-DamID data. Our approach allows systematic integration of Hi-C and lamina-DamID data for complete three-dimensional genome structure calculation, while also explicitly considering genome structural variability.
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature-extractor-derived features yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.
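The local-search half of such a hybrid can be illustrated with a bare hill climber over a linear recombination matrix scored by a simple class-separation measure. The numpy sketch below uses a Fisher-style criterion and random perturbations; it shows only the hill-climbing idea, not the full genetic-algorithm hybrid or the cost function used in the paper.

```python
# Sketch of only the hill-climbing piece of such a feature extractor: perturb a linear
# recombination matrix and keep changes that improve a class-separation score (a simple
# Fisher criterion here), illustrative rather than the paper's GA/hill-climber hybrid.
import numpy as np

def fisher_score(Z, y):
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    s0, s1 = Z[y == 0].var(0).sum(), Z[y == 1].var(0).sum()
    return ((m0 - m1) ** 2).sum() / (s0 + s1 + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20)); y = (X[:, :3].sum(1) > 0).astype(int)

A = rng.normal(size=(20, 4))                   # linear recombination: 20 -> 4 features
best = fisher_score(X @ A, y)
for _ in range(3000):                          # local hill-climbing over A
    A_new = A + 0.05 * rng.normal(size=A.shape)
    s = fisher_score(X @ A_new, y)
    if s > best:
        A, best = A_new, s                     # accept only improving moves
print("final Fisher score:", round(best, 3))
```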
A three-dimensional autonomous nonlinear dynamical system modelling equatorial ocean flows
NASA Astrophysics Data System (ADS)
Ionescu-Kruse, Delia
2018-04-01
We investigate a nonlinear three-dimensional model for equatorial flows, finding exact solutions that capture the most relevant geophysical features: depth-dependent currents, poleward or equatorial surface drift and a vertical mixture of upward and downward motions.
Three-dimensional features on oscillating microbubbles streaming flows
NASA Astrophysics Data System (ADS)
Rossi, Massimiliano; Marin, Alvaro G.; Wang, Cheng; Hilgenfeldt, Sascha; Kähler, Christian J.
2013-11-01
Ultrasound-driven oscillating micro-bubbles have been used as active actuators in microfluidic devices to perform manifold tasks such as mixing, sorting and manipulation of microparticles. A common configuration consists of side bubbles, created by trapping air pockets in blind channels perpendicular to the main channel direction. This configuration results in bubbles with a semi-cylindrical shape that creates a streaming flow generally considered quasi two-dimensional. However, recent experiments performed with three-dimensional velocimetry methods have shown that microparticles can follow significant three-dimensional trajectories, especially in regions close to the bubble interface. Several possible causes will be discussed, such as boundary effects of the bottom/top wall, deformation of the bubble interface leading to more complex vibrational modes, or bubble-particle interactions. In the present investigation, precise measurements of particle trajectories close to the bubble interface will be performed by means of 3D Astigmatic Particle Tracking Velocimetry. The results will allow us to characterize quantitatively the three-dimensional features of the streaming flow and to estimate their implications for practical applications such as particle trapping, sorting or mixing.
Advanced Discontinuous Galerkin Algorithms and First Open-Field Line Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hammett, G. W.; Hakim, A.; Shi, E. L.
2016-10-01
New versions of Discontinuous Galerkin (DG) algorithms have interesting features that may help with challenging higher-dimensional kinetic problems. We are developing the gyrokinetic code Gkeyll based on DG. DG also has features that may help with the next generation of Exascale computers. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communications costs (which are a bottleneck at exascale). DG uses efficient Gaussian quadrature like finite elements, but keeps the calculation local for the kinetic solver, also reducing communication. Sparse grid methods might further reduce the cost significantly in higher dimensions. The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from similar Gaussian quadrature as used in popular δf gyrokinetic codes. Consistent basis functions avoid high-frequency numerical modes from electromagnetic terms. We will show our first results of 3x+2v simulations of open-field line/SOL turbulence in a simple helical geometry (like Helimak/TORPEX), with parameters from LAPD, TORPEX, and NSTX. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
Computer modelling of grain microstructure in three dimensions
NASA Astrophysics Data System (ADS)
Narayan, K. Lakshmi
We present a program that generates the two-dimensional micrographs of a three dimensional grain microstructure. The code utilizes a novel scanning, pixel mapping technique to secure statistical distributions of surface areas, grain sizes, aspect ratios, perimeters, number of nearest neighbors and volumes of the randomly nucleated particles. The program can be used for comparing the existing theories of grain growth, and interpretation of two-dimensional microstructure of three-dimensional samples. Special features have been included to minimize the computation time and resource requirements.
NASA Astrophysics Data System (ADS)
Jones, Terry Jay; Humphreys, Roberta M.; Helton, L. Andrew; Gui, Changfeng; Huang, Xiang
2007-06-01
We use imaging polarimetry taken with the HST Advanced Camera for Surveys High Resolution Camera to explore the three-dimensional structure of the circumstellar dust distribution around the red supergiant VY Canis Majoris. The polarization vectors of the nebulosity surrounding VY CMa show a strong centrosymmetric pattern in all directions except directly east and range from 10% to 80% in fractional polarization. In regions that are optically thin, and therefore likely to have only single scattering, we use the fractional polarization and photometric color to locate the physical position of the dust along the line of sight. Most of the individual arclike features and clumps seen in the intensity image are also features in the fractional polarization map. These features must be distinct geometric objects. If they were just local density enhancements, the fractional polarization would not change so abruptly at the edge of the feature. The location of these features in the ejecta of VY CMa using polarimetry provides a determination of their three-dimensional geometry independent of, but in close agreement with, the results from our study of their kinematics (Paper I). Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
Zhang, Yifan; Gao, Xunzhang; Peng, Xuan; Ye, Jiaqi; Li, Xiang
2018-05-16
High Resolution Range Profile (HRRP) recognition has attracted great interest in the field of Radar Automatic Target Recognition (RATR). However, traditional HRRP recognition methods fail to model high-dimensional sequential data efficiently and have poor noise robustness. To deal with these problems, a novel stochastic neural network model named Attention-based Recurrent Temporal Restricted Boltzmann Machine (ARTRBM) is proposed in this paper. RTRBM is utilized to extract discriminative features and the attention mechanism is adopted to select the major features. RTRBM is efficient at modeling high-dimensional HRRP sequences because it can extract the information of temporal and spatial correlation between adjacent HRRPs. The attention mechanism, used in sequential data recognition tasks including machine translation and relation classification, makes the model pay more attention to the major features for recognition. Therefore, the combination of RTRBM and the attention mechanism makes our model effective at extracting more internally related features and choosing the important parts of the extracted features. Additionally, the model performs well on noise-corrupted HRRP data. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that our proposed model outperforms other traditional methods, which indicates that ARTRBM extracts, selects, and utilizes the correlation information between adjacent HRRPs effectively and is suitable for high-dimensional or noise-corrupted data.
LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction
Huang, Li
2017-01-01
Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs' potential roles as diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs'/diseases' statistical feature profile and graph theoretical feature profile to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and an L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model's superior prediction accuracy; and the average AUC of 0.9181 ± 0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA. Respectively, 98%, 88%, 96%, 98% and 98% of the top 50 predictions were validated by experimental evidence. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885
Brown, Timothy A.; Barlow, David H.
2010-01-01
A wealth of evidence attests to the extensive current and lifetime diagnostic comorbidity of the DSM-IV anxiety and mood disorders. Research has shown that the considerable cross-sectional covariation of DSM-IV emotional disorders is accounted for by common higher-order dimensions such as neuroticism/behavioral inhibition (N/BI) and low positive affect/behavioral activation. Longitudinal studies have indicated that the temporal covariation of these disorders can be explained by changes in N/BI and in some cases, initial levels of N/BI are predictive of the temporal course of emotional disorders. Moreover, the marked phenotypal overlap of the DSM-IV anxiety and mood disorder constructs is a frequent source of diagnostic unreliability (e.g., temporal overlap in the shared features of generalized anxiety disorder and mood disorders, situation specificity of panic attacks in panic disorder and specific phobia). Although dimensional approaches have been considered as a method to address the drawbacks associated with the extant prototypical nosology (e.g., inadequate assessment of individual differences in disorder severity), these proposals do not reconcile key problems in current classification such as modest reliability and high comorbidity. The current paper considers an alternative approach that emphasizes empirically supported common dimensions of emotional disorders over disorder-specific criteria sets. The selection and assessment of these dimensions are discussed along with how these methods could be implemented to promote more reliable and valid diagnosis, prognosis, and treatment planning. For instance, the advantages of this classification system are discussed in context of current transdiagnostic treatment protocols that are efficaciously applied to a variety of disorders by targeting their shared features. PMID:19719339
NASA Astrophysics Data System (ADS)
Bányai, László; Mentes, Gyula; Újvári, Gábor; Kovács, Miklós; Czap, Zoltán; Gribovszki, Katalin; Papp, Gábor
2014-04-01
Five years of geodetic monitoring data at Dunaszekcső, Hungary, are processed to evaluate recurrent landsliding, which is a characteristic geomorphological process affecting the high banks of the Middle Danube valley in Hungary. The integrated geodetic observations provide accurate three dimensional coordinate time series, and these data are used to calculate the kinematic features of point movements and the rigid body behavior of point blocks. Additional datasets include borehole tiltmeter data and hydrological recordings of the Danube and soil water wells. These data, together with two dimensional finite element analyses, are utilized to gain a better understanding of the physical, soil mechanical background and stability features of the high bank. Here we indicate that the main trigger of movements is changing groundwater levels, whose effect is an order of magnitude higher than that of river water level changes. Varying displacement rates of the sliding blocks are interpreted as having been caused by basal pore water pressure changes originating from shear zone volume changes, floods of the River Danube through lateral seepage, and rain infiltration. Both data and modeling point to the complex nature of bank sliding at Dunaszekcső. Some features imply that the movements are rotational, some reveal slumping. By contrast, all available observational and modeling data point to the retrogressive development of the high bank at Dunaszekcső. Regarding mitigation, the detailed analysis of three basic parameters (the direction of displacement vectors, tilting, and the acceleration component of the kinematic function) is suggested because these parameters indicate the zone where the largest lateral displacements can be expected and point to the advent of the rapid landsliding phase that affects high banks along the River Danube.
Bartesaghi, Alberto; Sapiro, Guillermo; Subramaniam, Sriram
2006-01-01
Electron tomography allows for the determination of the three-dimensional structures of cells and tissues at resolutions significantly higher than that which is possible with optical microscopy. Electron tomograms contain, in principle, vast amounts of information on the locations and architectures of large numbers of subcellular assemblies and organelles. The development of reliable quantitative approaches for the analysis of features in tomograms is an important problem, and a challenging prospect due to the low signal-to-noise ratios that are inherent to biological electron microscopic images. This is, in part, a consequence of the tremendous complexity of biological specimens. We report on a new method for the automated segmentation of HIV particles and selected cellular compartments in electron tomograms recorded from fixed, plastic-embedded sections derived from HIV-infected human macrophages. Individual features in the tomogram are segmented using a novel robust algorithm that finds their boundaries as global minimal surfaces in a metric space defined by image features. The optimization is carried out in a transformed spherical domain with the center an interior point of the particle of interest, providing a proper setting for the fast and accurate minimization of the segmentation energy. This method provides tools for the semi-automated detection and statistical evaluation of HIV particles at different stages of assembly in the cells and presents opportunities for correlation with biochemical markers of HIV infection. The segmentation algorithm developed here forms the basis of the automated analysis of electron tomograms and will be especially useful given the rapid increases in the rate of data acquisition. It could also enable studies of much larger data sets, such as those which might be obtained from the tomographic analysis of HIV-infected cells from studies of large populations. PMID:16190467
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
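In this setting, projecting a measurement vector x by a candidate vector b turns each class-conditional density into a one-dimensional normal, and the quantity minimized over b is the Bayes probability of misclassification of those transformed densities. The expressions below are the standard forms for m classes with priors p_i, written here for orientation rather than quoted from the report.

```latex
% Projected class-conditional densities and the probability of misclassification
% minimized over the projection vector b (standard forms).
z = b^{\mathsf T} x \;\sim\; \mathcal{N}\!\left(b^{\mathsf T}\mu_i,\; b^{\mathsf T}\Sigma_i\, b\right)
  \quad \text{under class } i,
\qquad
g_i(z \mid b) = \frac{1}{\sqrt{2\pi\, b^{\mathsf T}\Sigma_i b}}
  \exp\!\left(-\frac{\left(z - b^{\mathsf T}\mu_i\right)^{2}}{2\, b^{\mathsf T}\Sigma_i b}\right),
\qquad
P_{\mathrm{mc}}(b) \;=\; 1 \;-\; \int_{-\infty}^{\infty}
  \max_{1\le i\le m}\; p_i\, g_i(z \mid b)\; dz .
```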
Wang, Guohua; Liu, Qiong
2015-01-01
Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians’ head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians’ size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only. PMID:26703611
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
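The core trick is to fit the mixture in the (N - 1)-dimensional subspace spanned by the N samples rather than compressing to a low-dimensional model; this change of basis can be sketched with an SVD. The snippet below uses scikit-learn's GaussianMixture as a generic EM routine on those exact subspace coordinates; it is an illustration of the idea, not the algorithm proposed in the paper.

```python
# Minimal sketch of the core idea: represent N high-dimensional samples in the
# (N-1)-dimensional subspace they span (no further compression) and run EM there.
# Uses scikit-learn's GaussianMixture as a generic EM, not the authors' algorithm.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
N, D = 60, 1000                                  # fewer samples than feature dimensions
X = np.vstack([rng.normal(0, 1, (N // 2, D)),
               rng.normal(3, 1, (N // 2, D))])

mu = X.mean(0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[: N - 1].T                     # exact coordinates in the sample-spanned subspace

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(Z)
print("cluster sizes:", np.bincount(gmm.predict(Z)))
```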
An unconventional depiction of viewpoint in rock art.
Pettigrew, Jack; Scott-Virtue, Lee
2015-01-01
Rock art in Africa sometimes takes advantage of three-dimensional features of the rock wall, such as fissures or protuberances, that can be incorporated into the artistic composition (Lewis-Williams, 2002). More commonly, rock artists choose uniform walls on which two-dimensional depictions may represent three-dimensional figures or objects. In this report we present such a two-dimensional depiction in rock art that we think reveals an intention by the artist to represent an unusual three-dimensional viewpoint, namely, with the two human figures facing into the rock wall, instead of the accustomed Western viewpoint facing out!
Behavior analysis of video object in complicated background
NASA Astrophysics Data System (ADS)
Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang
2016-10-01
This paper aims to achieve robust behavior recognition of video objects against complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process the high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature-based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, military and many other fields.
Xarray: multi-dimensional data analysis in Python
NASA Astrophysics Data System (ADS)
Hoyer, Stephan; Hamman, Joe; Maussion, Fabien
2017-04-01
xarray (http://xarray.pydata.org) is an open source project and Python package that provides a toolkit and data structures for N-dimensional labeled arrays, which are the bread and butter of modern geoscientific data analysis. Key features of the package include label-based indexing and arithmetic, interoperability with the core scientific Python packages (e.g., pandas, NumPy, Matplotlib, Cartopy), out-of-core computation on datasets that don't fit into memory, a wide range of input/output options, and advanced multi-dimensional data manipulation tools such as group-by and resampling. In this contribution we will present the key features of the library and demonstrate its great potential for a wide range of applications, from (big-)data processing on super computers to data exploration in front of a classroom.
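A brief, self-contained illustration of the label-based indexing, group-by and resampling features mentioned above; the synthetic temperature cube and variable names are invented for the example.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic 3-D "temperature" cube with labeled dimensions and coordinates.
times = pd.date_range("2016-01-01", periods=365, freq="D")
lats = np.linspace(-60, 60, 25)
lons = np.linspace(0, 355, 72)
temp = 15 + 8 * np.random.randn(len(times), len(lats), len(lons))

ds = xr.Dataset(
    {"t2m": (("time", "lat", "lon"), temp)},
    coords={"time": times, "lat": lats, "lon": lons},
)

# Label-based indexing: nearest grid point to a location, over a date range.
point = ds.t2m.sel(lat=52.5, lon=13.4, method="nearest").sel(time=slice("2016-06", "2016-08"))

# Group-by and resampling operate directly on the labeled time axis.
monthly_mean = ds.t2m.resample(time="MS").mean()
seasonal_zonal_mean = ds.t2m.groupby("time.season").mean(dim=("time", "lon"))

print(monthly_mean.sizes, seasonal_zonal_mean.sizes)
```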
Dynamical features and electric field strengths of double layers driven by currents [in auroras]
NASA Technical Reports Server (NTRS)
Singh, N.; Thiemann, H.; Schunk, R. W.
1985-01-01
In recent years, a number of papers have been concerned with 'ion-acoustic' double layers. In the present investigation, results from numerical simulations are presented to show that the shapes and forms of current-driven double layers evolve dynamically with the fluctuations in the current through the plasma. It is shown that double layers with a potential dip can form even without the excitation of ion-acoustic modes. Double layers in two-and-one-half-dimensional simulations are discussed, taking into account the simulation technique, the spatial and temporal features of the plasma, and the dynamical behavior of the parallel potential distribution. Attention is also given to double layers in one-dimensional simulations, and to the electric field strengths predicted by the two-and-one-half-dimensional simulations.
Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas
2015-06-30
We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and the average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations, for distinguishing 49 first-episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; mMLDA with a classification boundary calculated as the weighted mean of the discriminative scores of the groups had improved sensitivity but similar accuracy compared to the original MLDA; and reducing the number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as signal important for classification was removed. Our findings provide important information for schizophrenia research and may improve the accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhuo, Shuangmu; Yan, Jie; Kang, Yuzhan; Xu, Shuoyu; Peng, Qiwen; So, Peter T. C.; Yu, Hanry
2014-07-01
Various structural features on the liver surface reflect functional changes in the liver. The visualization of these surface features with molecular specificity is of particular relevance to understanding the physiology and diseases of the liver. Using multi-photon microscopy (MPM), we have developed a label-free, three-dimensional quantitative and sensitive method to visualize various structural features of liver surface in living rat. MPM could quantitatively image the microstructural features of liver surface with respect to the sinuosity of collagen fiber, the elastic fiber structure, the ratio between elastin and collagen, collagen content, and the metabolic state of the hepatocytes that are correlative with the pathophysiologically induced changes in the regions of interest. This study highlights the potential of this technique as a useful tool for pathophysiological studies and possible diagnosis of the liver diseases with further development.
Explosive hazard detection using MIMO forward-looking ground penetrating radar
NASA Astrophysics Data System (ADS)
Shaw, Darren; Ho, K. C.; Stone, Kevin; Keller, James M.; Popescu, Mihail; Anderson, Derek T.; Luke, Robert H.; Burns, Brian
2015-05-01
This paper proposes a machine learning algorithm for subsurface object detection with multiple-input-multiple-output (MIMO) forward-looking ground-penetrating radar (FLGPR). By detecting hazards with FLGPR, standoff distances of up to tens of meters can be achieved, but at the cost of degraded performance due to high false alarm rates. The proposed system utilizes an anomaly detection prescreener to identify potential object locations. At each alarm location, multiple one-dimensional (1D) spectral features, two-dimensional (2D) spectral features, and log-Gabor statistic features are extracted. The ability of these features to reduce the number of false alarms and increase the probability of detection is evaluated for both co-polarizations present in the Akela MIMO array. Classification is performed by a Support Vector Machine (SVM) with lane-based cross-validation for training and testing. Class imbalance and optimized SVM kernel parameters are considered during classifier training.
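The lane-based cross-validation described above can be approximated with grouped folds, where the lane identifier is the group key. The sketch below is an assumption-laden stand-in (synthetic features, hypothetical lane labels), not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical alarm-level data: rows are prescreener alarms, columns are
# concatenated 1D/2D spectral and log-Gabor statistic features.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))          # alarm feature vectors
y = rng.integers(0, 2, size=300)        # 1 = target, 0 = false alarm
lanes = rng.integers(0, 6, size=300)    # lane ID of each alarm

# Lane-based cross-validation: all alarms from a lane stay in the same fold,
# so training and testing never share a lane.
cv = GroupKFold(n_splits=6)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, groups=lanes, cv=cv, scoring="roc_auc")
print(scores.mean())
```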
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
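For the ranging step, the standard parallel-camera triangulation relation Z = f·B/d (depth from focal length, baseline and disparity) is the core computation once features have been registered; a minimal sketch with hypothetical numbers follows. The Queen Victoria Algorithm itself is not reproduced here.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Range to a matched feature for a parallel-axis stereo rig.

    x_left, x_right : horizontal pixel coordinates of the same feature
    focal_px        : focal length expressed in pixels
    baseline_m      : horizontal separation of the two cameras in metres
    """
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return focal_px * baseline_m / disparity   # Z = f * B / d

# Example: a feature seen at column 412 in the left image and 396 in the right,
# with a 700-pixel focal length and a 0.25 m baseline.
print(depth_from_disparity(412, 396, focal_px=700, baseline_m=0.25))  # ~10.9 m
```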
Effects of achievement contexts on the meaning structure of emotion words.
Gentsch, Kornelia; Loderer, Kristina; Soriano, Cristina; Fontaine, Johnny R J; Eid, Michael; Pekrun, Reinhard; Scherer, Klaus R
2018-03-01
Little is known about the impact of context on the meaning of emotion words. In the present study, we used a semantic profiling instrument (GRID) to investigate features representing five emotion components (appraisal, bodily reaction, expression, action tendencies, and feeling) of 11 emotion words in situational contexts involving success or failure. We compared these to the data from an earlier study in which participants evaluated the typicality of features out of context. Profile analyses identified features for which typicality changed as a function of context for all emotion words, except contentment, with appraisal features being most frequently affected. Those context effects occurred for both hypothesised basic and non-basic emotion words. Moreover, both data sets revealed a four-dimensional structure. The four dimensions were largely similar (valence, power, arousal, and novelty). The results suggest that context may not change the underlying dimensionality but affects facets of the meaning of emotion words.
Seismic Tremors and Three-Dimensional Magma Wagging
NASA Astrophysics Data System (ADS)
Liao, Y.; Bercovici, D.
2015-12-01
Seismic tremor is a feature shared by many silicic volcanoes and is a precursor of volcanic eruption. Many of the characteristics of tremors, including their frequency band from 0.5 Hz to 7 Hz, are common to volcanoes with very different geophysical and geochemical properties. The ubiquitous characteristics of tremor imply that it results from some generation mechanism that is common to all volcanoes, instead of being unique to each volcano. Here we present new analysis of the magma-wagging mechanism that has been proposed to generate tremor. The model is based on the suggestion of previous work (Jellinek & Bercovici 2011; Bercovici et al. 2013) that the magma column is surrounded by a compressible, bubble-rich foam annulus while rising inside the volcanic conduit, and that the lateral oscillation of the magma inside the annulus causes observable tremor. Unlike the previous two-dimensional wagging model, where the displacement of the magma column is restricted to one vertical plane, the three-dimensional model we employ allows the magma column to bend in different directions and to have angular motion as well. Our preliminary results show that, without damping from viscous deformation of the magma column, the system retains angular momentum and develops elliptical motion (i.e., the horizontal displacement traces an ellipse). In this "inviscid" limit, the magma column can also develop instabilities with higher frequencies than what is found in the original two-dimensional model. Lateral motion can also be out of phase at various depths in the magma column, leading to a coiled wagging motion. For the viscous-magma model, we predict a similar damping rate for the uncoiled magma column as in the two-dimensional model, and faster damping for the coiled magma column. The higher damping thus requires the existence of a forcing mechanism to sustain the oscillation, for example the gas-driven Bernoulli effect proposed by Bercovici et al. (2013). Finally, using our new 3-D model, the spectrum of displacement and the unsynchronized cross-correlation between displacements measured at different locations can be calculated and compared with more detailed seismic measurements on well-monitored volcanoes.
Quantized vortices in arbitrary dimensions and the normal-to-superfluid phase transition
NASA Astrophysics Data System (ADS)
Bora, Florin
The structure and energetics of superflow around quantized vortices, and the motion inherited by these vortices from this superflow, are explored in the general setting of a superfluid in arbitrary dimensions. The vortices may be idealized as objects of co-dimension two, such as one-dimensional loops and two-dimensional closed surfaces, respectively, in the cases of three- and four-dimensional superfluidity. By using the analogy between vortical superflow and Ampere-Maxwell magnetostatics, the equilibrium superflow containing any specified collection of vortices is constructed. The energy of the superflow is found to take on a simple form for vortices that are smooth and asymptotically large compared with the vortex core size. The motion of vortices is analyzed in general, as well as for the special cases of hyper-spherical and weakly distorted hyper-planar vortices. In all dimensions, vortex motion reflects vortex geometry. In dimension four and higher, this includes not only extrinsic but also intrinsic aspects of the vortex shape, which enter via the first and second fundamental forms of classical geometry. For hyper-spherical vortices, which generalize the vortex rings of three-dimensional superfluidity, the energy-momentum relation is determined. Simple scaling arguments recover the essential features of these results, up to numerical and logarithmic factors. Extending these results to systems containing multiple vortices is elementary due to the linearity of the theory; the energy for multiple vortices is thus a sum of self-energies and power-law interaction terms. The statistical mechanics of a system containing vortices is addressed via the grand canonical partition function. A renormalization-group analysis, in which the low-energy excitations are approximately integrated out, is used to compute certain critical coefficients. The exponents obtained via this approximate procedure are compared with values obtained previously by other means. For dimensions higher than three, the superfluid density is found to vanish as the critical temperature is approached from below.
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger that participated in the BioCreative 2 challenge and analyze what contributes to its good performance. Our tagger is based on the conditional random fields (CRF) model, the most prevailing method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and the second highest overall. Moreover, we obtained our results mostly by applying open source packages, making it easy to duplicate them. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors that contributes to the good performance of our tagger is the integration of an additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and that bi-directional parsing models will produce the same results. We show that, due to different feature settings, a CRF model can be asymmetric, and that the feature setting for our tagger in BioCreative 2 not only produces different results but also gives the backward parsing model a slight but consistent advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score based solely on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene mention tagger can be accessed at http://aiia.iis.sinica.edu.tw/biocreative2.htm.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Leader, Joseph K.; Liu, Hong; Zheng, Bin
2015-03-01
We recently investigated a new mammographic image feature based risk factor to predict near-term breast cancer risk after a woman has a negative mammographic screening. We hypothesized that, unlike the conventional epidemiology-based long-term (or lifetime) risk factors, the mammographic image feature based risk factor value will increase as the time lag between the negative and positive mammography screenings decreases. The purpose of this study is to test this hypothesis. From a large and diverse full-field digital mammography (FFDM) image database with 1278 cases, we collected all available sequential FFDM examinations for each case, including the "current" and the 1 to 3 most recent "prior" examinations. All "prior" examinations were interpreted as negative, and "current" ones were either malignant or recalled negative/benign. We computed 92 global mammographic texture and density based features, and included three clinical risk factors (woman's age, family history and subjective breast density BIRADS ratings). On this initial feature set, we applied a fast and accurate Sequential Forward Floating Selection (SFFS) feature selection algorithm to reduce feature dimensionality. The features computed on the two mammographic views were trained separately using two artificial neural network (ANN) classifiers. The classification scores of the two ANNs were then merged with a sequential ANN. The results show that the maximum adjusted odds ratios were 5.59, 7.98, and 15.77 for using the 3rd, 2nd, and 1st "prior" FFDM examinations, respectively, which demonstrates a stronger association between mammographic image feature change and an increasing risk of developing breast cancer in the near term after a negative screening.
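A hedged sketch of the Sequential Forward Floating Selection step is given below, using mlxtend's SequentialFeatureSelector with floating=True as a stand-in for the authors' SFFS implementation; the data, feature count and scoring estimator are illustrative assumptions.

```python
import numpy as np
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LogisticRegression

# Hypothetical case-level data: 92 texture/density features + 3 clinical factors.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 95))
y = rng.integers(0, 2, size=400)          # 1 = positive at the "current" screening

# Sequential Forward Floating Selection: greedily adds features and conditionally
# removes earlier choices whenever that improves the cross-validated score.
sffs = SFS(
    LogisticRegression(max_iter=500),     # light scorer for the sketch; the selected
    k_features=15,                        # subset would then feed the ANN classifiers
    forward=True,
    floating=True,
    scoring="roc_auc",
    cv=4,
    n_jobs=-1,
)
sffs = sffs.fit(X, y)
print(sorted(sffs.k_feature_idx_), round(sffs.k_score_, 3))
```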
USING TWO-DIMENSIONAL HYDRODYNAMIC MODELS AT SCALES OF ECOLOGICAL IMPORTANCE. (R825760)
Modeling of flow features that are important in assessing stream habitat conditions has been a long-standing interest of stream biologists. Recently, they have begun examining the usefulness of two-dimensional (2-D) hydrodynamic models in attaining this objective. Current modelin...
Multispectral embedding-based deep neural network for three-dimensional human pose recovery
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng
2018-01-01
Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses using the corresponding two-dimensional image features. Therefore, the pose recovery performance highly depends on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can explore not only the optimum output from multiple deep networks but also the complementary properties of them. Furthermore, the distribution of each hierarchy discriminative manifold is sufficiently smooth so that the training process of our MSEDNN can be effectively implemented only using few labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN can achieve the best recovery performance compared with the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Balakin, A. A.; Mironov, V. A.; Skobelev, S. A.
2017-01-01
The self-action of two-dimensional and three-dimensional Bessel wave packets in a system of coupled light guides is considered using the discrete nonlinear Schrödinger equation. The features of the self-action of such wave fields are related to their initial strong spatial inhomogeneity. The numerical simulation shows that for the field amplitude exceeding a critical value, the development of an instability typical of a medium with the cubic nonlinearity is observed. Various regimes are studied: the self-channeling of a wave beam in one light guide at powers not strongly exceeding a critical value, the formation of the "kaleidoscopic" picture of a wave packet during the propagation of higher-power radiation along a stratified medium, the formation of light bullets during competition between self-focusing and modulation instabilities in the case of three-dimensional wave packets, etc. In the problem of laser pulse shortening, the situation is considered when the wave-field stratification in the transverse direction dominates. This process is accompanied by the self-compression of laser pulses in well enough separated light guides. The efficiency of conversion of the initial Bessel field distribution to two flying parallel light bullets is about 50%.
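As a rough illustration of the underlying model (not the authors' code), the sketch below integrates the one-dimensional discrete nonlinear Schrödinger equation for a Bessel-shaped initial field in an array of coupled guides; the coupling constant, amplitude and step sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind

# 1D array of coupled light guides:
#   i dA_n/dz + C (A_{n+1} + A_{n-1}) + |A_n|^2 A_n = 0
N, C, dz, steps = 101, 1.0, 1e-3, 5000
n = np.arange(N) - N // 2
A = 2.5 * jv(0, 0.4 * np.abs(n)).astype(complex)   # Bessel-shaped initial field

def rhs(A):
    coup = np.zeros_like(A)
    coup[1:] += A[:-1]
    coup[:-1] += A[1:]
    return 1j * (C * coup + np.abs(A) ** 2 * A)

for _ in range(steps):                  # classical 4th-order Runge-Kutta in z
    k1 = rhs(A)
    k2 = rhs(A + 0.5 * dz * k1)
    k3 = rhs(A + 0.5 * dz * k2)
    k4 = rhs(A + dz * k3)
    A += (dz / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.sum(np.abs(A) ** 2))           # total power (conserved up to RK error)
```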
Diwadkar, V A; Carpenter, P A; Just, M A
2000-07-01
Functional MRI was used to determine how the constituents of the cortical network subserving dynamic spatial working memory respond to two types of increases in task complexity. Participants mentally maintained the most recent location of either one or three objects as the three objects moved discretely in either a two- or three-dimensional array. Cortical activation in the dorsolateral prefrontal (DLPFC) and the parietal cortex increased as a function of the number of object locations to be maintained and the dimensionality of the display. An analysis of the response characteristics of the individual voxels showed that a large proportion were activated only when both the variables imposed the higher level of demand. A smaller proportion were activated specifically in response to increases in task demand associated with each of the independent variables. A second experiment revealed the same effect of dimensionality in the parietal cortex when the movement of objects was signaled auditorily rather than visually, indicating that the additional representational demands induced by 3-D space are independent of input modality. The comodulation of activation in the prefrontal and parietal areas by the amount of computational demand suggests that the collaboration between areas is a basic feature underlying much of the functionality of spatial working memory. Copyright 2000 Academic Press.
Visual scan-path analysis with feature space transient fixation moments
NASA Astrophysics Data System (ADS)
Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong
2003-05-01
The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the definition of the feature space has been attempted through the concept of visual similarity and non-linear low dimensional embedding, which defines a mapping from the image space into a low dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain specific knowledge. Based on this, this paper introduces a new concept called Feature Space Transient Fixation Moments (TFM). The approach presented tackles the problem of feature space representation of visual search through the use of TFM. We demonstrate the practical values of this concept for characterizing the dynamics of eye movements in goal directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.
Classification Influence of Features on Given Emotions and Its Application in Feature Selection
NASA Astrophysics Data System (ADS)
Xing, Yin; Chen, Chuang; Liu, Li-Long
2018-04-01
In order to address the large amount of redundant data in high-dimensional speech emotion features, we analyze the extracted speech emotion features in depth and select the better ones. Firstly, a given emotion is classified by each feature individually. Secondly, the recognition rates are ranked in descending order. Then, the optimal threshold on the features is determined by the rate criterion. Finally, the better features are obtained. When applied to the Berlin and Chinese emotional data sets, the experimental results show that the feature selection method outperforms the other traditional methods.
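A minimal sketch of the described selection scheme, assuming a generic classifier and synthetic features: each feature is scored by its individual cross-validated recognition rate, the rates are ranked in descending order, and a threshold decides which features are kept.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_by_single_feature_rate(X, y, threshold=0.3, cv=5):
    """Keep features whose individual cross-validated recognition rate
    exceeds a threshold (a simple stand-in for the rate criterion above)."""
    rates = np.array([
        cross_val_score(SVC(kernel="rbf"), X[:, [j]], y, cv=cv).mean()
        for j in range(X.shape[1])
    ])
    order = np.argsort(rates)[::-1]          # rank in descending order
    keep = order[rates[order] >= threshold]  # apply the rate threshold
    return keep, rates

# Hypothetical high-dimensional speech-emotion features.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 4, size=200)             # four emotion classes
keep, rates = select_by_single_feature_rate(X, y)
print(len(keep), "features retained")
```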
Economic demography in fuzzy spatial dilemmas and power laws
NASA Astrophysics Data System (ADS)
Fort, H.; Pérez, N.
2005-03-01
Adaptive agents, playing the iterated Prisoner's Dilemma (IPD) in a two-dimensional spatial setting and governed by Pavlovian strategies ("higher success-higher chance to stay"), are used to approach the problem of cooperation between self-interested individuals from a novel angle: we investigate the effect of different possible measures of success (MS) used by players to assess their performance in the game. These MS involve quantities such as the player's utilities U, his cumulative score (or "capital") W, his neighborhood "welfare", etc. To handle an imprecise concept like "success" the agents use fuzzy logic. The degree of cooperation, the "economic demography" and the "efficiency" attained by the system depend dramatically on the MS. Specifically, patterns of "segregation" or "exploitation" are observed for some MS. On the other hand, power laws, which may be interpreted as signatures of self-organized criticality (SOC), constitute a common feature for all the MS.
Quantitative Inspection of Remanence of Broken Wire Rope Based on Compressed Sensing.
Zhang, Juwei; Tan, Xiaojiang
2016-08-25
Most traditional strong magnetic inspection equipment has disadvantages such as big excitation devices, high weight, low detection precision, and inconvenient operation. This paper presents the design of a giant magneto-resistance (GMR) sensor array collection system. The remanence signal is collected to acquire two-dimensional magnetic flux leakage (MFL) data on the surface of wire ropes. Through the use of compressed sensing wavelet filtering (CSWF), the image expression of wire ropes MFL on the surface was obtained. Then this was taken as the input of the designed back propagation (BP) neural network to extract three kinds of MFL image geometry features and seven invariant moments of defect images. Good results were obtained. The experimental results show that nondestructive inspection through the use of remanence has higher accuracy and reliability compared with traditional inspection devices, along with smaller volume, lighter weight and higher precision.
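For the moment-based descriptors, the seven Hu invariant moments can be computed directly with OpenCV; the sketch below uses a synthetic elliptical defect mask as a stand-in for the compressed-sensing-filtered MFL image.

```python
import cv2
import numpy as np

# Hypothetical binarised MFL defect image (the real input would come from the
# compressed-sensing wavelet-filtered GMR array data).
defect = np.zeros((64, 64), dtype=np.uint8)
cv2.ellipse(defect, (32, 32), (18, 7), 30, 0, 360, 255, -1)

# The seven Hu invariant moments are invariant to translation, scale and rotation,
# which makes them attractive as defect-shape descriptors.
hu = cv2.HuMoments(cv2.moments(defect)).flatten()

# Log-scaling is commonly applied because the raw moments span many orders of magnitude.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(hu_log)
```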
Optimal wavelength band clustering for multispectral iris recognition.
Gong, Yazhuo; Zhang, David; Shi, Pengfei; Yan, Jingqi
2012-07-01
This work explores the possibility of clustering spectral wavelengths based on the maximum dissimilarity of iris textures. The eventual goal is to determine how many bands of spectral wavelengths will be enough for iris multispectral fusion and to find these bands that will provide higher performance of iris multispectral recognition. A multispectral acquisition system was first designed for imaging the iris at narrow spectral bands in the range of 420 to 940 nm. Next, a set of 60 human iris images that correspond to the right and left eyes of 30 different subjects were acquired for an analysis. Finally, we determined that 3 clusters were enough to represent the 10 feature bands of spectral wavelengths using the agglomerative clustering based on two-dimensional principal component analysis. The experimental results suggest (1) the number, center, and composition of clusters of spectral wavelengths and (2) the higher performance of iris multispectral recognition based on a three wavelengths-bands fusion.
Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A
2014-01-01
Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these dimensional spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple output regressor based on support vector machines. We also include a stage of feature selection that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals that are most informative for discrimination in the arousal and valence space are the EEG, the Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
Three-Dimensional Cataract Crystalline Lens Imaging With Swept-Source Optical Coherence Tomography.
de Castro, Alberto; Benito, Antonio; Manzanera, Silvestre; Mompeán, Juan; Cañizares, Belén; Martínez, David; Marín, Jose María; Grulkowski, Ireneusz; Artal, Pablo
2018-02-01
To image, describe, and characterize different features visible in the crystalline lens of older adults with and without cataract when imaged three-dimensionally with a swept-source optical coherence tomography (SS-OCT) system. We used a new SS-OCT laboratory prototype designed to enhance the visualization of the crystalline lens and imaged the entire anterior segment of both eyes in two groups of participants: patients scheduled to undergo cataract surgery (n = 17, age range 36 to 91 years) and volunteers without visual complaints (n = 14, age range 20 to 81 years). Pre-cataract surgery patients were also clinically graded according to the Lens Opacification Classification System III. The three-dimensional location and shape of the visible opacities were compared with the clinical grading. Hypo- and hyperreflective features were visible in the lens of all pre-cataract surgery patients and in some of the older adults in the volunteer group. When the clinical examination revealed cortical or subcapsular cataracts, hyperreflective features were visible either in the cortex parallel to the surfaces of the lens or in the posterior pole. Other types of opacities, which appeared as hyporeflective localized features, were identified in the cortex of the lens. The OCT signal in the nucleus of the crystalline lens correlated with the nuclear cataract clinical grade. A dedicated OCT is a useful tool to study in vivo the subtle opacities in the cataractous crystalline lens, revealing their position and size three-dimensionally. The use of these images allows obtaining more detailed information on the age-related changes leading to cataract.
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Rosen, Mark; Madabhushi, Anant
2008-03-01
Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contributes to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. Quantitative evaluation on 18 1.5 T prostate MR data sets against corresponding histology obtained from the multi-site ACRIN trials shows a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method is successfully able to detect suspicious regions in the prostate.
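A simplified sketch of the consensus idea, under the assumption that averaging normalized pairwise distances across several weak embeddings is an acceptable stand-in for the authors' consensus-embedding scheme; data sizes and the final clustering step are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import LocallyLinearEmbedding, SpectralEmbedding

# Hypothetical per-voxel texture feature matrix (rows = spatial locations).
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 350))

# Build several weak low-dimensional views of the same high-dimensional data.
embeddings = [
    LocallyLinearEmbedding(n_components=5, n_neighbors=12).fit_transform(X),
    SpectralEmbedding(n_components=5, n_neighbors=12).fit_transform(X),
]

# Consensus step: average the pairwise-distance structure across the views so
# that object adjacencies that persist across embeddings are preserved.
consensus = np.mean(
    [squareform(pdist(E)) / pdist(E).max() for E in embeddings], axis=0
)

# Unsupervised clustering of the consensus space into candidate classes.
labels = fcluster(linkage(squareform(consensus), method="average"),
                  t=2, criterion="maxclust")
print(np.bincount(labels))
```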
Three-dimensional biofilm structure quantification.
Beyenal, Haluk; Donovan, Conrad; Lewandowski, Zbigniew; Harkin, Gary
2004-12-01
Quantitative parameters describing biofilm physical structure have been extracted from three-dimensional confocal laser scanning microscopy images and used to compare biofilm structures, monitor biofilm development, and quantify environmental factors affecting biofilm structure. Researchers have previously used biovolume, volume to surface ratio, roughness coefficient, and mean and maximum thicknesses to compare biofilm structures. The selection of these parameters is dependent on the availability of software to perform calculations. We believe it is necessary to develop more comprehensive parameters to describe heterogeneous biofilm morphology in three dimensions. This research presents parameters describing three-dimensional biofilm heterogeneity, size, and morphology of biomass calculated from confocal laser scanning microscopy images. This study extends previous work which extracted quantitative parameters regarding morphological features from two-dimensional biofilm images to three-dimensional biofilm images. We describe two types of parameters: (1) textural parameters showing microscale heterogeneity of biofilms and (2) volumetric parameters describing size and morphology of biomass. The three-dimensional features presented are average (ADD) and maximum diffusion distances (MDD), fractal dimension, average run lengths (in X, Y and Z directions), aspect ratio, textural entropy, energy and homogeneity. We discuss the meaning of each parameter and present the calculations in detail. The developed algorithms, including automatic thresholding, are implemented in software as MATLAB programs which will be available at site prior to publication of the paper.
Supporting Dynamic Quantization for High-Dimensional Data Analytics.
Guzun, Gheorghi; Canahuate, Guadalupe
2017-05-01
Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
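A rough sketch of equi-depth, query-dependent binning is shown below; since the paper's exact QED definition is not reproduced here, the query-closeness transform is an assumption used only to illustrate how bin boundaries can adapt to a query.

```python
import numpy as np

def equi_depth_bins(values, n_bins=8):
    """Equi-depth (quantile) bin edges: every bin receives roughly the same
    number of points, instead of covering the same value range."""
    qs = np.linspace(0, 1, n_bins + 1)
    return np.quantile(values, qs)

def quantize_around_query(column, query_value, n_bins=8):
    """Sketch of query-dependent quantization: bin boundaries are recomputed
    from the data ranked by closeness to the query, so quantization resolution
    is spent near the query rather than uniformly over the whole domain."""
    closeness = -np.abs(column - query_value)        # larger = closer to query
    edges = equi_depth_bins(closeness, n_bins)
    return np.digitize(closeness, edges[1:-1])       # bin index per row

rng = np.random.default_rng(5)
col = rng.lognormal(size=10_000)                     # skewed feature column
codes = quantize_around_query(col, query_value=2.0, n_bins=8)
print(np.bincount(codes))                            # roughly equal bin occupancy
```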
Discriminative clustering on manifold for adaptive transductive classification.
Zhang, Zhao; Jia, Lei; Zhang, Min; Li, Bing; Zhang, Li; Li, Fanzhang
2017-10-01
In this paper, we mainly propose a novel adaptive transductive label propagation approach by joint discriminative clustering on manifolds for representing and classifying high-dimensional data. Our framework seamlessly combines the unsupervised manifold learning, discriminative clustering and adaptive classification into a unified model. Also, our method incorporates the adaptive graph weight construction with label propagation. Specifically, our method is capable of propagating label information using adaptive weights over low-dimensional manifold features, which is different from most existing studies that usually predict the labels and construct the weights in the original Euclidean space. For transductive classification by our formulation, we first perform the joint discriminative K-means clustering and manifold learning to capture the low-dimensional nonlinear manifolds. Then, we construct the adaptive weights over the learnt manifold features, where the adaptive weights are calculated through performing the joint minimization of the reconstruction errors over features and soft labels so that the graph weights can be joint-optimal for data representation and classification. Using the adaptive weights, we can easily estimate the unknown labels of samples. After that, our method returns the updated weights for further updating the manifold features. Extensive simulations on image classification and segmentation show that our proposed algorithm can deliver the state-of-the-art performance on several public datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Online dimensionality reduction using competitive learning and Radial Basis Function network.
Tomenko, Vladimir
2011-06-01
The general purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing nonlinearly embedded manifolds and large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR is comprised of two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of specific loss function and synthesis of the training algorithm for Radial Basis Function network results in global preservation of data structures and online processing of new patterns. The RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), neural network for Sammon's projection (SAMANN) and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics. Copyright © 2011 Elsevier Ltd. All rights reserved.
Experimental witness of genuine high-dimensional entanglement
NASA Astrophysics Data System (ADS)
Guo, Yu; Hu, Xiao-Min; Liu, Bi-Heng; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can
2018-06-01
Growing interest has been devoted to exploring high-dimensional quantum systems, owing to their promising perspectives in certain quantum tasks. How to characterize a high-dimensional entanglement structure is one of the basic questions in taking full advantage of them. However, it is not easy to capture the key feature of high-dimensional entanglement, since the correlations derived from high-dimensional entangled states can possibly be simulated with copies of lower-dimensional systems. Here, we follow the work of Kraft et al. [Phys. Rev. Lett. 120, 060502 (2018), 10.1103/PhysRevLett.120.060502] and present the experimental realization of the creation and detection, by the normalized witness operation, of genuine high-dimensional entanglement, which cannot be decomposed into lower-dimensional Hilbert spaces and thus forms entanglement structures that exist only in high-dimensional systems. Our experiment leads to further exploration of high-dimensional quantum systems.
NASA Astrophysics Data System (ADS)
Lai, Chunren; Guo, Shengwen; Cheng, Lina; Wang, Wensheng; Wu, Kai
2017-02-01
It is very important to differentiate temporal lobe epilepsy (TLE) patients from healthy people and to localize the abnormal brain regions of TLE patients. Cortical features and changes can reveal the unique anatomical patterns of brain regions from structural MR images. In this study, structural MR images from 28 normal controls (NC), 18 left TLE (LTLE), and 21 right TLE (RTLE) patients were acquired, and four types of cortical feature, namely cortical thickness (CTh), cortical surface area (CSA), gray matter volume (GMV), and mean curvature (MCu), were explored for discriminative analysis. Three feature selection methods, independent-sample t-test filtering, the sparse-constrained dimensionality reduction model (SCDRM), and support vector machine-recursive feature elimination (SVM-RFE), were investigated to extract dominant regions with significant differences among the compared groups for classification using the SVM classifier. The results showed that SVM-RFE achieved the highest performance (most classifications with more than 92% accuracy), followed by the SCDRM and the t-test. In particular, the cortical surface area and gray matter volume exhibited prominent discriminative ability, and the performance of the SVM improved significantly when the four cortical features were combined. Additionally, the dominant regions with higher classification weights were mainly located in the temporal and frontal lobes, including the inferior temporal, entorhinal cortex, fusiform, parahippocampal cortex, middle frontal and frontal pole. It was demonstrated that the cortical features provide effective information for determining the abnormal anatomical pattern and that the proposed method has the potential to improve the clinical diagnosis of TLE.
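A minimal SVM-RFE sketch, with synthetic region-wise features standing in for the cortical measures; the feature counts and the number of retained features are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical region-wise cortical features (thickness, area, volume, curvature
# concatenated across parcellated regions); labels: 0 = NC, 1 = TLE.
rng = np.random.default_rng(6)
X = rng.normal(size=(67, 272))
y = rng.integers(0, 2, size=67)

# SVM-RFE: repeatedly fit a linear SVM and discard the features with the
# smallest weights until the requested number of features remains.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=20, step=0.1)
selector.fit(StandardScaler().fit_transform(X), y)

dominant = np.flatnonzero(selector.support_)   # indices of retained features
print(dominant)
```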
Danforth, Jeffrey S; Doerfler, Leonard A; Connor, Daniel F
2017-08-01
The goal was to examine whether anxiety modifies the risk for, or severity of, conduct problems in children with ADHD. Assessment included both categorical and dimensional measures of ADHD, anxiety, and conduct problems. Analyses compared conduct problems between children with ADHD features alone versus children with co-occurring ADHD and anxiety features. When assessed by dimensional rating scales, results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety are at risk for more intense conduct problems. When assessment included a Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnosis via the Schedule for Affective Disorders and Schizophrenia for School Age Children-Epidemiologic Version (K-SADS), results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety neither had more intense conduct problems nor were they more likely to be diagnosed with oppositional defiant disorder or conduct disorder. Different methodological measures of ADHD, anxiety, and conduct problem features influenced the outcome of the analyses.
Efficient local representations for three-dimensional palmprint recognition
NASA Astrophysics Data System (ADS)
Yang, Bing; Wang, Xiaohua; Yao, Jinliang; Yang, Xin; Zhu, Wenhua
2013-10-01
Palmprints have been broadly used for personal authentication because they are highly accurate and incur low cost. Most previous works have focused on two-dimensional (2-D) palmprint recognition in the past decade. Unfortunately, 2-D palmprint recognition systems lose the shape information when capturing palmprint images. Moreover, such 2-D palmprint images can be easily forged or affected by noise. Hence, three-dimensional (3-D) palmprint recognition has been regarded as a promising way to further improve the performance of palmprint recognition systems. We have developed a simple, but efficient method for 3-D palmprint recognition by using local features. We first utilize shape index representation to describe the geometry of local regions in 3-D palmprint data. Then, we extract local binary pattern and Gabor wavelet features from the shape index image. The two types of complementary features are finally fused at a score level for further improvements. The experimental results on the Hong Kong Polytechnic 3-D palmprint database, which contains 8000 samples from 400 palms, illustrate the effectiveness of the proposed method.
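The sketch below illustrates the two descriptors and their score-level fusion using scikit-image's local_binary_pattern and gabor filters on synthetic shape-index images; the histogram settings, frequencies and fusion weight are assumptions rather than the authors' parameters.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

def lbp_histogram(image, P=8, R=1.0):
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def gabor_energy(image, frequencies=(0.1, 0.2, 0.3)):
    feats = []
    for f in frequencies:
        real, imag = gabor(image, frequency=f)
        feats.append(np.mean(real ** 2 + imag ** 2))
    return np.asarray(feats)

def fused_score(shape_index_a, shape_index_b, w=0.5):
    """Score-level fusion of the two complementary descriptors: smaller = more similar."""
    d_lbp = np.linalg.norm(lbp_histogram(shape_index_a) - lbp_histogram(shape_index_b))
    d_gab = np.linalg.norm(gabor_energy(shape_index_a) - gabor_energy(shape_index_b))
    return w * d_lbp + (1 - w) * d_gab

# Hypothetical shape-index images computed from two 3-D palmprint samples.
rng = np.random.default_rng(7)
a = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
b = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
print(fused_score(a, b))
```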
Combustion monitoring of a water tube boiler using a discriminant radial basis network.
Sujatha, K; Pappa, N
2011-01-01
This research work combines Fisher's linear discriminant (FLD) analysis and a radial basis network (RBN) for monitoring the combustion conditions of a coal fired boiler so as to allow control of the air/fuel ratio. For this, two-dimensional flame images are required, which were captured with a CCD camera; the features of the images (average intensity, area, brightness, orientation, etc. of the flame) are extracted after preprocessing the images. The FLD is applied to reduce the n-dimensional feature size to a two-dimensional feature size for faster learning of the RBN. Also, three classes of images corresponding to different burning conditions of the flames have been extracted from continuous video processing. The corresponding temperatures and the emissions of carbon monoxide (CO) and other flue gases have been obtained through measurement. Further, the training and testing of the Fisher's linear discriminant radial basis network (FLDRBN) with the collected data have been carried out, and the performance of the algorithms is presented. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
A virtual display system for conveying three-dimensional acoustic information
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Wightman, Frederic L.; Foster, Scott H.
1988-01-01
The development of a three-dimensional auditory display system is discussed. Theories of human sound localization and techniques for synthesizing various features of auditory spatial perceptions are examined. Psychophysical data validating the system are presented. The human factors applications of the system are considered.
Reproducibility and Prognosis of Quantitative Features Extracted from CT Images
Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J
2014-01-01
We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
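Lin's concordance correlation coefficient used for the test-retest analysis can be computed directly from its definition; a short sketch with synthetic test-retest values follows (the dynamic-range statistic is not reproduced here).

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between test and retest values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical test-retest values of one image feature over 32 scans.
rng = np.random.default_rng(8)
test = rng.normal(size=32)
retest = test + 0.05 * rng.normal(size=32)   # small repositioning/segmentation noise
print(concordance_cc(test, retest))          # close to 1 for a reproducible feature
```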
NASA Astrophysics Data System (ADS)
Xu, Z.; Guan, K.; Peng, B.; Casler, N. P.; Wang, S. W.
2017-12-01
Landscapes have complex three-dimensional features that are difficult to extract using conventional methods. Small-footprint LiDAR provides an ideal way to capture these features. Existing approaches, however, have been limited to raster- or metric-based (two-dimensional) feature extraction from the upper or bottom layer, and thus are not suitable for resolving the morphological and intensity features that can be important for fine-scale land cover mapping. Therefore, this research combines airborne LiDAR and multi-temporal Landsat imagery to classify land cover types of Williamson County, Illinois, which has diverse and mixed landscape features. Specifically, we applied a 3D convolutional neural network (CNN) method to extract features from LiDAR point clouds by (1) creating occupancy and intensity grids at 1-meter resolution and then (2) normalizing and feeding the data into a 3D CNN feature extractor for many epochs of learning. The learned features (e.g., morphological features, intensity features, etc.) were combined with multi-temporal spectral data to enhance the performance of land cover classification based on a Support Vector Machine classifier. We used photo interpretation to generate training and testing data. The classification results show that our approach outperforms traditional methods using LiDAR-derived feature maps and promises to serve as an effective methodology for creating high-quality land cover maps through the fusion of complementary types of remote sensing data.
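A small sketch of the rasterization step, assuming simple histogram-based occupancy and mean-intensity grids; the grid resolution and point-cloud contents are invented for illustration and do not reproduce the authors' preprocessing.

```python
import numpy as np

def lidar_grids(points, intensity, cell=1.0, z_bins=32):
    """Rasterise a LiDAR point cloud into 3-D occupancy and mean-intensity grids
    (1 m horizontal cells), the kind of volumes fed to a 3-D CNN feature extractor."""
    edges = [
        np.arange(points[:, 0].min(), points[:, 0].max() + cell, cell),
        np.arange(points[:, 1].min(), points[:, 1].max() + cell, cell),
        np.linspace(points[:, 2].min(), points[:, 2].max(), z_bins + 1),
    ]
    occupancy, _ = np.histogramdd(points, bins=edges)
    intensity_sum, _ = np.histogramdd(points, bins=edges, weights=intensity)
    mean_intensity = np.divide(intensity_sum, occupancy,
                               out=np.zeros_like(intensity_sum), where=occupancy > 0)
    return occupancy, mean_intensity

# Hypothetical point cloud: x, y, z in metres plus return intensity.
rng = np.random.default_rng(9)
pts = rng.uniform([0, 0, 0], [100, 100, 20], size=(50_000, 3))
inten = rng.uniform(0, 1, size=50_000)
occ, inten_grid = lidar_grids(pts, inten)
print(occ.shape, inten_grid.shape)
```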
PCA based feature reduction to improve the accuracy of decision tree c4.5 classification
NASA Astrophysics Data System (ADS)
Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.
2018-03-01
Attribute splitting is a major step in Decision Tree C4.5 classification. However, this step does not have a significant impact on the construction of the decision tree in terms of removing irrelevant features. A major problem in the decision tree classification process, called over-fitting, results from noisy data and irrelevant features; in turn, over-fitting causes misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models and is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework is used to simplify high dimensional data to low dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction to perform non-correlated feature selection and the Decision Tree C4.5 algorithm for the classification. From the experiments conducted using available data sets from the UCI Cervical cancer data set repository, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework robustly enhances classification accuracy, achieving a 90.70% accuracy rate.
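A compact sketch of the proposed pipeline using scikit-learn, with the entropy criterion of CART standing in for C4.5's information-gain splitting and a synthetic data set standing in for the UCI cervical cancer data.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an 858-instance, 36-attribute data set with redundant attributes.
X, y = make_classification(n_samples=858, n_features=36, n_informative=8,
                           n_redundant=20, random_state=0)

# PCA removes correlated/irrelevant directions before the tree is grown;
# entropy-based splitting approximates the information-gain criterion of C4.5.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),                       # keep 95% of the variance
    DecisionTreeClassifier(criterion="entropy", random_state=0),
)
print(cross_val_score(model, X, y, cv=10).mean())
```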
Caggiano, Alessandra
2018-03-09
Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim to monitor the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.
[Discussion on knowledge structural system of modern acupuncture professionals].
Wang, Qin-Yu; Li, Su-He
2012-02-01
To explore the knowledge structure that modern acupuncture professionals should have, the current situation of personnel training for modern acupuncture professionals was comprehensively analyzed from multiple perspectives: the course offerings of higher education, the patterns by which famous physicians develop, and the disciplinary characteristics of acupuncture and moxibustion. Suggestions were made to address the identified shortcomings. A reasonable knowledge structure for modern acupuncture professionals includes the establishment of sound Chinese medicine thinking, mastery of complete Chinese medicine therapy, and the ability to follow the dynamic development of the discipline. Reform of course design is imperative to promote the formation of such a knowledge structure among modern acupuncture professionals.
Persistence-Driven Durotaxis: Generic, Directed Motility in Rigidity Gradients
NASA Astrophysics Data System (ADS)
Novikova, Elizaveta A.; Raab, Matthew; Discher, Dennis E.; Storm, Cornelis
2017-02-01
Cells move differently on substrates with different rigidities: the persistence time of their motion is higher on stiffer substrates. We show that this behavior—in and of itself—results in a net flux of cells directed up a soft-to-stiff gradient. Using simple random walk models with varying persistence and stochastic simulations, we characterize the propensity to move in terms of the durotactic index also measured in experiments. A one-dimensional model captures the essential features and highlights the competition between diffusive spreading and linear, wavelike propagation. Persistence-driven durokinesis is generic and may be of use in the design of instructive environments for cells and other motile, mechanosensitive objects.
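The following sketch illustrates the basic mechanism with a one-dimensional persistent random walk whose persistence time increases across a soft-to-stiff gradient; the functional form of the persistence profile and the simple displacement-based index are illustrative assumptions, not the paper's exact model or durotactic index definition.

```python
import numpy as np

rng = np.random.default_rng(1)

def persistence_time(x, tau_soft=1.0, tau_stiff=5.0, x0=0.0, width=50.0):
    """Persistence time increasing smoothly across a soft-to-stiff gradient
    (functional form assumed for illustration)."""
    return tau_soft + (tau_stiff - tau_soft) / (1.0 + np.exp(-(x - x0) / width))

def simulate(n_walkers=2000, n_steps=5000, dt=0.1, speed=1.0):
    x = np.zeros(n_walkers)
    direction = rng.choice([-1.0, 1.0], size=n_walkers)
    for _ in range(n_steps):
        # Tumble (reverse direction) with probability dt / tau(x)
        flip = rng.random(n_walkers) < dt / persistence_time(x)
        direction[flip] *= -1.0
        x += speed * direction * dt
    return x

x_final = simulate()
# One simple readout: the net fraction of walkers displaced toward the stiff side.
durotactic_index = (np.sum(x_final > 0) - np.sum(x_final < 0)) / len(x_final)
print("durotactic index:", durotactic_index)
```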
Hernandez, Carlos M; Arisha, Mohammed J; Ahmad, Amier; Oates, Ethan; Nanda, Navin C; Nanda, Anil; Wasan, Anita; Caleti, Beda E; Bernal, Cinthia L P; Gallardo, Sergio M
2017-07-01
Loeffler endocarditis is a complication of hypereosinophilic syndrome resulting from eosinophilic infiltration of heart tissue. We report a case of Loeffler endocarditis in which three-dimensional transthoracic and transesophageal echocardiography provided information beyond what was found by two-dimensional transthoracic echocardiography alone. Our case illustrates the usefulness of combined two- and three-dimensional echocardiography in the assessment of Loeffler endocarditis. In addition, a summary of the features of hypereosinophilic syndrome and Loeffler endocarditis is provided in tabular form. © 2017, Wiley Periodicals, Inc.
Torres, Fernanda Ferrari Esteves; Bosso-Martelo, Roberta; Espir, Camila Galletti; Cirelli, Joni Augusto; Guerreiro-Tanomaru, Juliane Maria; Tanomaru-Filho, Mario
2017-01-01
To evaluate solubility, dimensional stability, filling ability and volumetric change of root-end filling materials using conventional tests and new Micro-CT-based methods. The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility and at 30 days, showed higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days. At 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). ZOE presented a higher dimensional change, and BIO had greater solubility after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA after 30 days. Micro-CT can provide important data on the physicochemical properties of materials complementing conventional tests.
Generalized probability theories: what determines the structure of quantum theory?
NASA Astrophysics Data System (ADS)
Janotta, Peter; Hinrichsen, Haye
2014-08-01
The framework of generalized probabilistic theories is a powerful tool for studying the foundations of quantum physics. It provides the basis for a variety of recent findings that significantly improve our understanding of the rich physical structure of quantum theory. This review paper tries to present the framework and recent results to a broader readership in an accessible manner. To achieve this, we follow a constructive approach. Starting from a few basic physically motivated assumptions we show how a given set of observations can be manifested in an operational theory. Furthermore, we characterize consistency conditions limiting the range of possible extensions. In this framework classical and quantum theory appear as special cases, and the aim is to understand what distinguishes quantum mechanics as the fundamental theory realized in nature. It turns out that non-classical features of single systems can equivalently result from higher-dimensional classical theories that have been restricted. Entanglement and non-locality, however, are shown to be genuine non-classical features.
Spatiotemporal chaos in mixed linear-nonlinear two-dimensional coupled logistic map lattice
NASA Astrophysics Data System (ADS)
Zhang, Ying-Qian; He, Yi; Wang, Xing-Yuan
2018-01-01
We investigate a new spatiotemporal dynamics that mixes degrees of nonlinear chaotic maps in the spatial coupling connections of a two-dimensional coupled map lattice (2DCML). The coupling methods include both linear neighborhood coupling and nonlinear chaotic-map coupling of lattices, and the original 2DCML system is only a special case of the proposed system. Criteria such as the Kolmogorov-Sinai entropy density and universality, bifurcation diagrams, space-amplitude plots and snapshot pattern diagrams are used to investigate the chaotic behaviors of the proposed system. Furthermore, we also investigate the parameter ranges over which the proposed system retains these features, in comparison with the 2DCML and MLNCML systems. Theoretical analysis and computer simulation indicate that the proposed system has a higher percentage of lattices in chaotic behavior for most parameters, fewer periodic windows in bifurcation diagrams and a larger range of parameters for chaotic behavior, making it more suitable for cryptography.
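As a rough illustration of a two-dimensional coupled logistic map lattice whose neighborhood coupling mixes a linear term with a chaotic-map term, consider the sketch below; the mixing form, parameters, and boundary conditions are assumptions and do not reproduce the specific model proposed in the paper.

```python
import numpy as np

def logistic(x, mu=3.99):
    return mu * x * (1.0 - x)

def step(lattice, eps=0.3, eta=0.5, mu=3.99):
    """One iteration of a 2D coupled logistic map lattice whose neighbourhood
    coupling mixes a linear term and a chaotic-map term (illustrative form)."""
    f = logistic(lattice, mu)
    # Periodic-boundary four-neighbour averages of the raw states and of f(states)
    nb_lin = sum(np.roll(lattice, s, axis=a) for a in (0, 1) for s in (-1, 1)) / 4.0
    nb_chaos = sum(np.roll(f, s, axis=a) for a in (0, 1) for s in (-1, 1)) / 4.0
    coupling = eta * nb_chaos + (1.0 - eta) * nb_lin   # eta mixes the two coupling types
    return (1.0 - eps) * f + eps * coupling

rng = np.random.default_rng(0)
lat = rng.random((64, 64))
for _ in range(500):
    lat = step(lat)
print("snapshot mean/std:", lat.mean(), lat.std())
```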
NASA Astrophysics Data System (ADS)
Watanabe, Tatsuhito; Katsura, Seiichiro
A person operating a mobile robot in a remote environment receives realistic visual feedback about the condition of the road on which the robot is moving. Categorizing the road condition is necessary to evaluate whether driving is safe and comfortable. For this purpose, the mobile robot should be capable of recognizing and classifying the condition of the road surfaces. This paper proposes a method for recognizing the type of road surface on the basis of the friction between the mobile robot and the road surface. This friction is estimated by a disturbance observer, and a support vector machine is used to classify the surfaces. The support vector machine identifies the type of road surface using a feature vector composed of the arithmetic average and variance derived from the torque values. Further, these feature vectors are mapped onto a higher-dimensional space using a kernel function. The validity of the proposed method is confirmed by experimental results.
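A minimal sketch of the classification stage is shown below, assuming synthetic torque windows: the mean and variance of each window form the feature vector, and an RBF-kernel SVM performs the implicit mapping to a higher-dimensional space. The surface labels and signal statistics are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def torque_features(torque_window):
    """Feature vector from an estimated-disturbance (torque) window:
    arithmetic mean and variance, as described in the abstract."""
    return np.array([np.mean(torque_window), np.var(torque_window)])

# Hypothetical training data: windows of estimated friction torque
# recorded while driving on three labelled surfaces.
rng = np.random.default_rng(0)
windows = {
    "asphalt": rng.normal(0.5, 0.05, size=(40, 200)),
    "carpet":  rng.normal(0.9, 0.15, size=(40, 200)),
    "tile":    rng.normal(0.3, 0.02, size=(40, 200)),
}
X = np.vstack([[torque_features(w) for w in ws] for ws in windows.values()])
y = np.repeat(list(windows.keys()), 40)

# The RBF kernel implicitly maps the 2D features to a higher-dimensional space.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict(torque_features(rng.normal(0.9, 0.15, 200)).reshape(1, -1)))
```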
Rizzello, Carlo Giuseppe; Curiel, José Antonio; Nionelli, Luana; Vincentini, Olimpia; Di Cagno, Raffaella; Silano, Marco; Gobbetti, Marco; Coda, Rossana
2014-02-01
This study was aimed at combining the highest degradation of gluten during wheat flour fermentation with good structural and sensory features of the related bread. As estimated by R5-ELISA, the degree of degradation of immune-reactive gluten was ca. 28%. Two-dimensional electrophoresis and RP-FPLC analyses showed marked variations of the protein fractions compared to the untreated flour. The comparison was also extended to the in vitro effect of the peptic/tryptic digests on K562 and T84 cells. The flour with the intermediate content of gluten (ICG) was used for bread making, and compared to whole gluten (WG) bread. The chemical, structural and sensory features of the ICG bread approached those of the bread made with WG flour. The protein digestibility of the ICG bread was higher than that of the bread made from WG flour. Also the nutritional quality, as estimated by different indexes, was the highest for the ICG bread. Copyright © 2013 Elsevier Ltd. All rights reserved.
Basal and thermal control mechanisms of the Ragnhild glaciers, East Antarctica
NASA Astrophysics Data System (ADS)
Pattyn, Frank; de Brabander, Sang; Huyghe, Ann
The Ragnhild glaciers are three enhanced-flow features situated between the Sør Rondane and Yamato Mountains in eastern Dronning Maud Land, Antarctica. We investigate the glaciological mechanisms controlling their existence and behavior, using a three-dimensional numerical thermomechanical ice-sheet model including higher-order stress gradients. This model is further extended with a steady-state model of subglacial water flow, based on the hydraulic potential gradient. Both static and dynamic simulations are capable of reproducing the enhanced ice-flow features. Although basal topography is responsible for the existence of the flow pattern, thermomechanical effects and basal sliding seem to locally soften and lubricate the ice in the main trunks. Lateral drag is a contributing factor in balancing the driving stress, as shear margins can be traced over a distance of hundreds of kilometers along west Ragnhild glacier. Different basal sliding scenarios show that central Ragnhild glacier stagnates as west Ragnhild glacier accelerates and progressively drains the whole catchment area by ice and water piracy.
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become important for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (local binary patterns) and SWLD (simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The approach is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
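The sketch below illustrates two of the ingredients, assuming scikit-image's LBP implementation: a uniform LBP histogram is computed for a (stand-in) fused face image, and two histograms are compared with a symmetric Kullback-Leibler distance. The SWLD encoding and the spatio-spectral fusion step are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP code histogram of a (fused) face image."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist + 1e-10          # avoid zeros in the KL computation

def symmetric_kl(p, q):
    """Symmetric Kullback-Leibler distance between two histograms."""
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

rng = np.random.default_rng(0)
probe = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in fused face image
gallery = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
d = symmetric_kl(lbp_histogram(probe), lbp_histogram(gallery))
print("symmetric KL distance:", d)
```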
Generalized Gödel universes in higher dimensions and pure Lovelock gravity
NASA Astrophysics Data System (ADS)
Dadhich, Naresh; Molina, Alfred; Pons, Josep M.
2017-10-01
The Gödel universe is a homogeneous rotating dust with negative Λ which is a direct product of a three-dimensional pure rotation metric with a line. We generalize it to higher dimensions for Einstein and pure Lovelock gravity with only one Nth-order term. For the higher-dimensional generalization, we have to include more rotations in the metric, and hence we begin with the corresponding pure rotation odd (d = 2n + 1)-dimensional metric involving n rotations, which can then be extended by a direct product with a line or a space of constant curvature to yield a higher-dimensional Gödel universe. The inclusion of n rotations and of constant-curvature spaces is a new line of generalization, considered here for the first time.
Dynamics of cosmic strings with higher-dimensional windings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamauchi, Daisuke; Lake, Matthew J.; Thailand Center of Excellence in Physics, Ministry of Education,Bangkok 10400
2015-06-11
We consider F-strings with arbitrary configurations in the Minkowski directions of a higher-dimensional spacetime, which also wrap and spin around S^1 subcycles of constant radius in an arbitrary internal manifold, and determine the relation between the higher-dimensional and the effective four-dimensional quantities that govern the string dynamics. We show that, for any such configuration, the motion of the windings in the compact space may render the string effectively tensionless from a four-dimensional perspective, so that it remains static with respect to the large dimensions. Such a critical configuration occurs when (locally) exactly half the square of the string length lies in the large dimensions and half lies in the compact space. The critical solution is then seen to arise as a special case, in which the wavelength of the windings is equal to their circumference. As examples, long straight strings and circular loops are considered in detail, and the solutions to the equations of motion that satisfy the tensionless condition are presented. These solutions are then generalized to planar loops and arbitrary three-dimensional configurations. Under the process of dimensional reduction, in which higher-dimensional motion is equivalent to an effective worldsheet current (giving rise to a conserved charge), this phenomenon may be seen as the analogue of the tensionless condition which arises for superconducting and chiral-current carrying cosmic strings.
Hasegawa, Tomoka; Yamamoto, Tomomaya; Hongo, Hiromi; Qiu, Zixuan; Abe, Miki; Kanesaki, Takuma; Tanaka, Kawori; Endo, Takashi; de Freitas, Paulo Henrique Luiz; Li, Minqi; Amizuka, Norio
2018-04-01
The aim of this study is to demonstrate the application of focused ion beam-scanning electron microscopy (FIB-SEM) for revealing the three-dimensional features of osteocytic cytoplasmic processes in metaphyseal (immature) and diaphyseal (mature) trabeculae. Tibiae of eight-week-old male mice were fixed with aldehyde solution, and treated with block staining prior to FIB-SEM observation. While two-dimensional backscattered SEM images showed osteocytes' cytoplasmic processes in a fragmented fashion, three-dimensional reconstructions of FIB-SEM images demonstrated that osteocytes in primary metaphyseal trabeculae extended their cytoplasmic processes randomly, thus maintaining contact with neighboring osteocytes and osteoblasts. In contrast, diaphyseal osteocytes extended thin cytoplasmic processes from their cell bodies, which ran perpendicular to the bone surface. In addition, these osteocytes featured thick processes that branched into thinner, transverse cytoplasmic processes; at some point, however, these transverse processes bend at a right angle to run perpendicular to the bone surface. Osteoblasts also possessed thicker cytoplasmic processes that branched off as thinner processes, which then connected with cytoplasmic processes of neighboring osteocytes. Thus, FIB-SEM is a useful technology for visualizing the three-dimensional structures of osteocytes and their cytoplasmic processes.
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. Because the semantic information is clearly visualized, ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images have shortcomings: they depend strongly on light conditions, and the classification results lack three-dimensional semantic information. LiDAR, on the other hand, has become a main technology for acquiring high-accuracy point cloud data. Its advantages are a high data acquisition rate, independence from light conditions, and the direct production of three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, combining the advantages of both LiDAR and multispectral images yields point cloud data with three-dimensional coordinates and multispectral information, providing an integrated solution for point cloud classification. Therefore, this research acquires visible-light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are applied for classification.
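A toy version of the final thresholding step might look as follows; the NDVI-style index, height normalization, class names, and threshold values are all illustrative assumptions rather than the paper's calibrated procedure.

```python
import numpy as np

def classify_points(xyz, rgb, nir, height_thresh=2.0, ndvi_thresh=0.3):
    """Simple rule-based labelling of a multispectral point cloud using height
    above the lowest point and an NDVI-style index (thresholds are assumed)."""
    ndvi = (nir - rgb[:, 0]) / (nir + rgb[:, 0] + 1e-6)   # red channel as reference
    height = xyz[:, 2] - xyz[:, 2].min()                  # crude height normalisation

    labels = np.full(len(xyz), "ground", dtype=object)
    labels[(height >= height_thresh) & (ndvi >= ndvi_thresh)] = "vegetation"
    labels[(height >= height_thresh) & (ndvi < ndvi_thresh)] = "building"
    labels[(height < height_thresh) & (ndvi >= ndvi_thresh)] = "low vegetation"
    return labels

rng = np.random.default_rng(0)
xyz = rng.random((1000, 3)) * [50, 50, 10]
rgb = rng.random((1000, 3))
nir = rng.random(1000)
print(np.unique(classify_points(xyz, rgb, nir), return_counts=True))
```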
Paula, Stefan; Tabet, Michael R; Keenan, Susan M; Welsh, William J; Ball, W James
2003-01-17
Successful immunotherapy of cocaine addiction and overdoses requires cocaine-binding antibodies with specific properties, such as high affinity and selectivity for cocaine. We have determined the affinities of two cocaine-binding murine monoclonal antibodies (mAb: clones 3P1A6 and MM0240PA) for cocaine and its metabolites by [3H]-radioligand binding assays. mAb 3P1A6 (K(d) = 0.22 nM) displayed a 50-fold higher affinity for cocaine than mAb MM0240PA (K(d) = 11 nM) and also had a greater specificity for cocaine. For the systematic exploration of both antibodies' binding specificities, we used a set of approximately 35 cocaine analogues as structural probes by determining their relative binding affinities (RBAs) using an enzyme-linked immunosorbent competition assay. Three-dimensional quantitative structure-activity relationship (3D-QSAR) models on the basis of comparative molecular field analysis (CoMFA) techniques correlated the binding data with structural features of the ligands. The analysis indicated that despite the mAbs' differing specificities for cocaine, the relative contributions of the steric (approximately 80%) and electrostatic (approximately 20%) field interactions to ligand-binding were similar. Generated three-dimensional CoMFA contour plots then located the specific regions about cocaine where the ligand/receptor interactions occurred. While the overall binding patterns of the two mAbs had many features in common, distinct differences were observed about the phenyl ring and the methylester group of cocaine. Furthermore, using previously published data, a 3D-QSAR model was developed for cocaine binding to the dopamine reuptake transporter (DAT) that was compared to the mAb models. Although the relative steric and electrostatic field contributions were similar to those of the mAbs, the DAT cocaine-binding site showed a preference for negatively charged ligands. Besides establishing molecular level insight into the interactions that govern cocaine binding specificity by biopolymers, the three-dimensional images obtained reflect the properties of the mAbs binding pockets and provide the initial information needed for the possible design of novel antibodies with properties optimized for immunotherapy. Copyright 2003 Elsevier Science Ltd.
Online signature recognition using principal component analysis and artificial neural network
NASA Astrophysics Data System (ADS)
Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan
2016-12-01
In this paper, we propose an algorithm for on-line signature recognition using the fingertip point traced in the air, obtained from the depth image acquired by Kinect. We extract 10 statistical features from each of the X, Y, and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, retaining 99.02% of the total variance. We implement the proposed algorithm and test it on actual on-line signatures. In the experiments, we verify that the proposed method successfully classifies 15 different on-line signatures. The experimental results show a recognition rate of 98.47% when using only the 10-component feature vectors.
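A hedged sketch of the recognition pipeline is given below, using synthetic data with the stated dimensions: 30 statistical features are reduced to 10 principal components and classified by a small multilayer perceptron standing in for the paper's artificial neural network.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 30 statistical features per signature sample
# (10 per axis for X, Y, Z), 15 signature classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 30))
y = rng.integers(0, 15, size=300)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                       # 30 -> 10 principal components
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```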
NASA Astrophysics Data System (ADS)
Davis, Benjamin L.; Berrier, Joel C.; Shields, Douglas W.; Kennefick, Julia; Kennefick, Daniel; Seigar, Marc S.; Lacy, Claud H. S.; Puerari, Ivânio
2012-04-01
A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow comparison of spiral galaxy pitch angle to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques.
A two-dimensional kinematic dynamo model of the ionospheric magnetic field at Venus
NASA Technical Reports Server (NTRS)
Cravens, T. E.; Wu, D.; Shinagawa, H.
1990-01-01
The results of a high-resolution, two-dimensional, time-dependent, kinematic dynamo model of the ionospheric magnetic field of Venus are presented. Various one-dimensional models are considered and the two-dimensional model is then detailed. In this model, the two-dimensional magnetic induction equation, the magnetic diffusion-convection equation, is numerically solved using specified plasma velocities. Origins of the vertical velocity profile and of the horizontal velocities are discussed. It is argued that the basic features of the vertical magnetic field profile remain unaltered by horizontal flow effects and also that horizontal plasma flow can strongly affect the magnetic field for altitudes above 300 km.
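As a simplified illustration of the kind of equation being solved, the sketch below integrates a one-dimensional diffusion-convection (induction) equation with an explicit finite-difference scheme; the diffusivity, velocity profile, grid, and boundary conditions are placeholders, not the model's actual Venus ionosphere inputs.

```python
import numpy as np

# Illustrative 1D analogue of the induction (diffusion-convection) equation
#   dB/dt = d/dz ( D dB/dz ) - d/dz ( w B )
# solved with a simple explicit finite-difference scheme.

nz, dz, dt = 200, 2.0e3, 0.05              # grid points, 2 km spacing, time step (s)
z = np.arange(nz) * dz
D = 1.0e7 * np.ones(nz)                    # magnetic diffusivity (m^2/s), assumed constant
w = -50.0 * np.ones(nz)                    # downward plasma velocity (m/s), assumed constant
B = np.exp(-((z - 2.0e5) / 3.0e4) ** 2)    # initial field bump (arbitrary units)

for _ in range(20000):
    dBdz = np.gradient(B, dz)
    diffusion = np.gradient(D * dBdz, dz)
    convection = np.gradient(w * B, dz)
    B += dt * (diffusion - convection)
    B[0], B[-1] = B[1], 0.0                # crude boundary conditions

print("peak field location (km):", z[np.argmax(B)] / 1e3)
```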
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by using principal component analysis. Then a feature descriptor of each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the descriptor similarity of key points in the source and target point clouds. Correspondences are optimized by using a random sample consensus algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
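The final SVD step is standard and can be sketched as follows (a Kabsch-style estimation of the rigid transform from optimized correspondences); the synthetic correspondences are only for checking the recovery.

```python
import numpy as np

def rigid_transform(source, target):
    """Least-squares rigid transform (R, t) aligning corresponding points,
    via SVD of the cross-covariance matrix."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
tgt = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_transform(src, tgt)
print(np.allclose(R_est, R_true, atol=1e-6), t_est)
```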
Li, Han; Liu, Yashu; Gong, Pinghua; Zhang, Changshui; Ye, Jieping
2014-01-01
Identifying patients with Mild Cognitive Impairment (MCI) who are likely to convert to dementia has recently attracted increasing attention in Alzheimer's disease (AD) research. An accurate prediction of conversion from MCI to AD can aid clinicians to initiate treatments at early stage and monitor their effectiveness. However, existing prediction systems based on the original biosignatures are not satisfactory. In this paper, we propose to fit the prediction models using pairwise biosignature interactions, thus capturing higher-order relationship among biosignatures. Specifically, we employ hierarchical constraints and sparsity regularization to prune the high-dimensional input features. Based on the significant biosignatures and underlying interactions identified, we build classifiers to predict the conversion probability based on the selected features. We further analyze the underlying interaction effects of different biosignatures based on the so-called stable expectation scores. We have used 293 MCI subjects from Alzheimer's Disease Neuroimaging Initiative (ADNI) database that have MRI measurements at the baseline to evaluate the effectiveness of the proposed method. Our proposed method achieves better classification performance than state-of-the-art methods. Moreover, we discover several significant interactions predictive of MCI-to-AD conversion. These results shed light on improving the prediction performance using interaction features. PMID:24416143
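A much-simplified sketch of the idea, pairwise interaction features with a sparsity-regularized classifier, is shown below; the hierarchical constraints of the proposed method are omitted, and the feature count and labels are synthetic assumptions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical baseline biosignatures for MCI subjects (e.g. MRI-derived
# measures); labels: 1 = converted to AD, 0 = stable.
rng = np.random.default_rng(0)
X = rng.normal(size=(293, 20))
y = rng.integers(0, 2, size=293)

model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),  # pairwise interactions
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),              # sparsity-regularized fit
)
model.fit(X, y)
coef = model.named_steps["logisticregression"].coef_.ravel()
print("non-zero main-effect/interaction terms:", np.sum(coef != 0))
```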
Reichhardt, Charles; Reichhardt, Cynthia Jane
2015-12-28
In this work, we numerically study the behavior of two-dimensional skyrmions in the presence of a quasi-one-dimensional sinusoidal substrate under the influence of externally applied dc and ac drives. In the overdamped limit, when both dc and ac drives are aligned in the longitudinal direction parallel to the direction of the substrate modulation, the velocity-force curves exhibit classic Shapiro step features when the frequency of the ac drive matches the washboard frequency that is dynamically generated by the motion of the skyrmions over the substrate, similar to previous observations in superconducting vortex systems. In the case of skyrmions, the additional contribution to the skyrmion motion from a nondissipative Magnus force shifts the location of the locking steps to higher dc drives, and we find that the skyrmions move at an angle with respect to the direction of the dc drive. For a longitudinal dc drive and a perpendicular or transverse ac drive, the overdamped system exhibits no Shapiro steps; however, when a finite Magnus force is present, we find pronounced transverse Shapiro steps along with complex two-dimensional periodic orbits of the skyrmions in the phase-locked regimes. Both the longitudinal and transverse ac drives produce locking steps whose widths oscillate with increasing ac drive amplitude. We examine the role of collective skyrmion interactions and find that additional fractional locking steps occur for both longitudinal and transverse ac drives. Finally, at higher skyrmion densities, the system undergoes a series of dynamical order-disorder transitions, with the skyrmions forming a moving solid on the phase locking steps and a fluctuating dynamical liquid in regimes between the steps.
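The single-particle version of such dynamics can be sketched as follows: the damping/Magnus matrix is inverted at each step to obtain the skyrmion velocity under dc, ac, and substrate forces. All parameter values are illustrative, and collective interactions are omitted.

```python
import numpy as np

def skyrmion_velocity(alpha_d=1.0, alpha_m=2.0, F_dc=0.4, F_ac=0.2,
                      omega=0.5, A=0.5, a=1.0, dt=0.01, n_steps=200000):
    """Single skyrmion on a quasi-1D sinusoidal substrate with longitudinal
    dc + ac drives (parameters illustrative only).
    Equation of motion:  alpha_d v + alpha_m (z_hat x v) = F(x, t)."""
    det = alpha_d**2 + alpha_m**2
    x = y = 0.0
    vx_sum = 0.0
    for n in range(n_steps):
        t = n * dt
        Fx = F_dc + F_ac * np.sin(omega * t) - A * np.sin(2 * np.pi * x / a)
        Fy = 0.0
        # Invert the 2x2 damping/Magnus matrix to get the instantaneous velocity.
        vx = (alpha_d * Fx + alpha_m * Fy) / det
        vy = (-alpha_m * Fx + alpha_d * Fy) / det
        x += vx * dt
        y += vy * dt
        vx_sum += vx
    return vx_sum / n_steps   # time-averaged longitudinal velocity

# Sweeping F_dc and plotting <v_x> would reveal the (shifted) Shapiro steps.
print(skyrmion_velocity())
```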
Increased insolation threshold for runaway greenhouse processes on Earth-like planets
NASA Astrophysics Data System (ADS)
Leconte, Jérémy; Forget, Francois; Charnay, Benjamin; Wordsworth, Robin; Pottier, Alizée
2013-12-01
The increase in solar luminosity over geological timescales should warm the Earth's climate, increasing water evaporation, which will in turn enhance the atmospheric greenhouse effect. Above a certain critical insolation, this destabilizing greenhouse feedback can 'run away' until the oceans have completely evaporated. Through increases in stratospheric humidity, warming may also cause evaporative loss of the oceans to space before the runaway greenhouse state occurs. The critical insolation thresholds for these processes, however, remain uncertain because they have so far been evaluated using one-dimensional models that cannot account for the dynamical and cloud feedback effects that are key stabilizing features of the Earth's climate. Here we use a three-dimensional global climate model to show that the insolation threshold for the runaway greenhouse state to occur is about 375 W m⁻², which is significantly higher than previously thought. Our model is specifically developed to quantify the climate response of Earth-like planets to increased insolation in hot and extremely moist atmospheres. In contrast with previous studies, we find that clouds have a destabilizing feedback effect on the long-term warming. However, subsident, unsaturated regions created by the Hadley circulation have a stabilizing effect that is strong enough to shift the runaway greenhouse limit to higher values of insolation than are inferred from one-dimensional models. Furthermore, because of wavelength-dependent radiative effects, the stratosphere remains sufficiently cold and dry to hamper the escape of atmospheric water, even at large fluxes. This has strong implications for the possibility of liquid water existing on Venus early in its history, and extends the size of the habitable zone around other stars.
Three-dimensional Numerical Simulations of Rayleigh-Taylor Unstable Flames in Type Ia Supernovae
NASA Astrophysics Data System (ADS)
Zingale, M.; Woosley, S. E.; Rendleman, C. A.; Day, M. S.; Bell, J. B.
2005-10-01
Flame instabilities play a dominant role in accelerating the burning front to a large fraction of the speed of sound in a Type Ia supernova. We present a three-dimensional numerical simulation of a Rayleigh-Taylor unstable carbon flame, following its evolution through the transition to turbulence. A low-Mach number hydrodynamics method is used, freeing us from the harsh time step restrictions imposed by sound waves. We fully resolve the thermal structure of the flame and its reaction zone, eliminating the need for a flame model. A single density is considered, 1.5×10⁷ g cm⁻³, and half-carbon, half-oxygen fuel: conditions under which the flame propagated in the flamelet regime in our related two-dimensional study. We compare to a corresponding two-dimensional simulation and show that while fire polishing keeps the small features suppressed in two dimensions, turbulence wrinkles the flame on far smaller scales in the three-dimensional case, suggesting that the transition to the distributed burning regime occurs at higher densities in three dimensions. Detailed turbulence diagnostics are provided. We show that the turbulence follows a Kolmogorov spectrum and is highly anisotropic on the large scales, with a much larger integral scale in the direction of gravity. Furthermore, we demonstrate that it becomes more isotropic as it cascades down to small scales. On the basis of the turbulent statistics and the flame properties of our simulation, we compute the Gibson scale. We show the progress of the turbulent flame through a classic combustion regime diagram, indicating that the flame just enters the distributed burning regime near the end of our simulation.
Thermographic Phosphor Measurements of Shock-Shock Interactions on a Swept Cylinder
NASA Technical Reports Server (NTRS)
Jones, Michelle L.; Berry, Scott A.
2013-01-01
The effects of fin leading-edge radius and sweep angle on peak heating rates due to shock-shock interactions were investigated in the NASA Langley Research Center 20-inch Mach 6 Air Tunnel. The fin model leading edges, which represent cylindrical leading edges or struts on hypersonic vehicles, were varied from 0.25 inches to 0.75 inches in radius. A 9° wedge generated a planar oblique shock at 16.7° to the flow that intersected the fin bow shock, producing a shock-shock interaction that impinged on the fin leading edge. The fin angle of attack was varied from 0° (normal to the free-stream) to 15° and 25° swept forward. Global temperature data was obtained from the surface of the fused silica fins using phosphor thermography. Metal oil flow models with the same geometries as the fused silica models were used to visualize the streamline patterns for each angle of attack. High-speed zoom-schlieren videos were recorded to show the features and temporal unsteadiness of the shock-shock interactions. The temperature data were analyzed using one-dimensional semi-infinite as well as one- and two-dimensional finite-volume methods to determine the proper heat transfer analysis approach to minimize errors from lateral heat conduction due to the presence of strong surface temperature gradients induced by the shock interactions. The general trends in the leading-edge heat transfer behavior were similar for the three shock-shock interactions, respectively, between the test articles with varying leading-edge radius. The dimensional peak heat transfer coefficient augmentation increased with decreasing leading-edge radius. The dimensional peak heat transfer output from the two-dimensional code was about 20% higher than the value from a standard, semi-infinite one-dimensional method.
Schure, Mark R; Davis, Joe M
2017-11-10
Orthogonality metrics (OMs) for three and higher dimensional separations are proposed as extensions of previously developed OMs, which were used to evaluate the zone utilization of two-dimensional (2D) separations. These OMs include correlation coefficients, dimensionality, information theory metrics and convex-hull metrics. In a number of these cases, lower dimensional subspace metrics exist and can be readily calculated. The metrics are used to interpret previously generated experimental data. The experimental datasets are derived from Gilar's peptide data, now modified to be three dimensional (3D), and a comprehensive 3D chromatogram from Moore and Jorgenson. The Moore and Jorgenson chromatogram, which has 25 identifiable 3D volume elements or peaks, displayed good orthogonality values over all dimensions. However, OMs based on discretization of the 3D space changed substantially with changes in binning parameters. This example highlights the importance in higher dimensions of having an abundant number of retention times as data points, especially for methods that use discretization. The Gilar data, which in a previous study produced 21 2D datasets by the pairing of 7 one-dimensional separations, was reinterpreted to produce 35 3D datasets. These datasets show a number of interesting properties, one of which is that geometric and harmonic means of lower dimensional subspace (i.e., 2D) OMs correlate well with the higher dimensional (i.e., 3D) OMs. The space utilization of the Gilar 3D datasets was ranked using OMs, with the retention times of the datasets having the largest and smallest OMs presented as graphs. A discussion concerning the orthogonality of higher dimensional techniques is given with emphasis on molecular diversity in chromatographic separations. In the information theory work, an inconsistency is found in previous studies of orthogonality using the 2D metric often identified as %O. A new choice of metric is proposed, extended to higher dimensions, characterized by mixes of ordered and random retention times, and applied to the experimental datasets. In 2D, the new metric always equals or exceeds the original one. However, results from both the original and new methods are given. Copyright © 2017 Elsevier B.V. All rights reserved.
Continuous-Flow Electrophoresis of DNA and Proteins in a Two-Dimensional Capillary-Well Sieve.
Duan, Lian; Cao, Zhen; Yobas, Levent
2017-09-19
Continuous-flow electrophoresis of macromolecules is demonstrated using an integrated capillary-well sieve arranged into a two-dimensional anisotropic array on silicon. The periodic array features thousands of entropic barriers, each resulting from an abrupt interface between a 2 μm deep well (channel) and a 70 nm capillary. These entropic barriers owing to two-dimensional confinement within the capillaries are vastly steep in relation to those arising from slits featuring one-dimensional confinement. Thus, the sieving mechanisms can sustain relatively large electric field strengths over a relatively small array area. The sieve rapidly sorts anionic macromolecules, including DNA chains and proteins in native or denatured states, into distinct trajectories according to size or charge under electric field vectors orthogonally applied. The baseline separation is achieved in less than 1 min within a horizontal migration length of ∼1.5 mm. The capillaries are self-enclosed conduits in cylindrical profile featuring a uniform diameter and realized through an approach that avoids advanced patterning techniques. The approach exploits a thermal reflow of a layer of doped glass for shape transformation into cylindrical capillaries and for controllably shrinking the capillary diameter. Lastly, atomic layer deposition of alumina is introduced for the first time to fine-tune the capillary diameter as well as to neutralize the surface charge, thereby suppressing undesired electroosmotic flows.
Tensor-driven extraction of developmental features from varying paediatric EEG datasets.
Kinney-Lang, Eli; Spyrou, Loukianos; Ebied, Ahmed; Chin, Richard Fm; Escudero, Javier
2018-05-21
Constant changes in developing children's brains can pose a challenge for EEG-dependent technologies. Advancing signal processing methods to identify developmental differences in paediatric populations could help improve the function and usability of such technologies. Taking advantage of the multi-dimensional structure of EEG data through tensor analysis may offer a framework for extracting relevant developmental features of paediatric datasets. A proof of concept is demonstrated through identifying latent developmental features in resting-state EEG. Approach. Three paediatric datasets (n = 50, 17, 44) were analyzed using a two-step constrained parallel factor (PARAFAC) tensor decomposition. Subject age was used as a proxy measure of development. Classification used support vector machines (SVM) to test whether the PARAFAC-identified features could predict subject age. The results were cross-validated within each dataset. Classification analysis was complemented by visualization of the high-dimensional feature structures using t-distributed Stochastic Neighbour Embedding (t-SNE) maps. Main Results. Development-related features were successfully identified for the developmental conditions of each dataset. SVM classification showed the identified features could predict subject age at a level significantly above chance for both healthy and impaired populations. t-SNE maps revealed that suitable tensor factorization was key to extracting the developmental features. Significance. The described methods are a promising tool for identifying latent developmental features occurring throughout childhood EEG. © 2018 IOP Publishing Ltd.
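A stripped-down sketch of the pipeline is given below, assuming the tensorly library's generic CP/PARAFAC routine (not the authors' two-step constrained decomposition) and a synthetic subjects × channels × frequency tensor; the subject-mode factor loadings are used as features for an SVM.

```python
import numpy as np
from tensorly.decomposition import parafac
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical resting-state EEG tensor: subjects x channels x frequency bins.
rng = np.random.default_rng(0)
tensor = rng.normal(size=(50, 19, 30))
ages = rng.integers(4, 13, size=50)          # subject ages in years (proxy for development)
age_group = (ages >= 8).astype(int)          # simple two-class developmental split

# Rank-3 CP (PARAFAC) decomposition; the subject-mode factors serve as features.
weights, factors = parafac(tensor, rank=3, n_iter_max=200, random_state=0)
subject_features = factors[0]                # shape (50, 3)

clf = SVC(kernel="rbf", gamma="scale")
print(cross_val_score(clf, subject_features, age_group, cv=5).mean())
```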
An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.
Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco
2017-04-01
In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
NASA Astrophysics Data System (ADS)
Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.
2009-02-01
Features calculated from different dimensions of images capture quantitative information of the lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions and of the impact of combining different dimensions. The aim of this study is to determine the importance of combining features calculated in different dimensions. We have performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five different combinations of features. Our results showed that 3D image features generate the best result compared with other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
NASA Astrophysics Data System (ADS)
McReynolds, Sean
Five-dimensional N = 2 Yang-Mills-Einstein supergravity and its couplings to hyper and tensor multiplets are considered on an orbifold spacetime of the form M4 x S1/Gamma, where Gamma is a discrete group. As is well known in such cases, supersymmetry is broken to N = 1 on the orbifold fixed planes, and chiral 4D theories can be obtained from bulk hypermultiplets (or from the coupling of fixed-plane supported fields). Five-dimensional gauge symmetries are broken by boundary conditions for the fields, which are equivalent to some set of Gamma-parity assignments in the orbifold theory, allowing for arbitrary rank reduction. Furthermore, Wilson lines looping from one boundary to the other can break bulk gauge groups, or give rise to vacuum expectation values for scalars on the boundaries, which can result in spontaneous breaking of boundary gauge groups. The broken gauge symmetries do not survive as global symmetries of the low energy theories below the compactification scale due to 4D minimal couplings to gauge fields. Axionic fields are a generic feature, just as in any compactification of M-theory (or string theory for that matter), and we exhibit the form of this field and its role as the QCD axion, capable of resolving the strong CP problem. The main motivation for the orbifold theories here is taken to be orbifold GUTs, wherein a unified gauge group is sought in higher dimensions while allowing the orbifold reduction to handle problems such as rapid proton decay, exotic matter, mass hierarchies, etc. To that end, we discuss the allowable minimal SU(5), SO(10) and E6 GUT theories with all fields living in five dimensions. It is argued that, within the class of homogeneous quaternionic scalar manifolds characterizing the hypermultiplet couplings in 5D, supergravity admits a restricted set of theories that yield minimal phenomenological field content. In addition, non-compact gaugings are a novel feature of supergravity theories, and in particular we consider the example of an SU(5,1) YMESGT in which all of the fields of the theory are connected by local (susy and gauge) transformations that are symmetries of the Lagrangian. Such non-compact gaugings allow a novel type of gauge-Higgs unification in higher dimensions. The possibility of boundary-localized fields is considered only via anomaly arguments. (Abstract shortened by UMI.)
Han, Sungmin; Chu, Jun-Uk; Park, Jong Woong; Youn, Inchan
2018-05-15
Proprioceptive afferent activities recorded by a multichannel microelectrode have been used to decode limb movements to provide sensory feedback signals for closed-loop control in a functional electrical stimulation (FES) system. However, analyzing the high dimensionality of neural activity is one of the major challenges in real-time applications. This paper proposes a linear feature projection method for the real-time decoding of ankle and knee joint angles. Single-unit activity was extracted as a feature vector from proprioceptive afferent signals that were recorded from the L7 dorsal root ganglion during passive movements of ankle and knee joints. The dimensionality of this feature vector was then reduced using a linear feature projection composed of projection pursuit and negentropy maximization (PP/NEM). Finally, a time-delayed Kalman filter was used to estimate the ankle and knee joint angles. The PP/NEM approach had a better decoding performance than did other feature projection methods, and all processes were completed within the real-time constraints. These results suggested that the proposed method could be a useful decoding method to provide real-time feedback signals in closed-loop FES systems.
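A minimal linear Kalman filter for mapping projected neural features to joint angles might look like the sketch below; the state and observation models are invented placeholders, and the time-delay handling of the actual decoder is reduced to treating the observation as an already delayed feature vector.

```python
import numpy as np

class KalmanDecoder:
    """Minimal linear Kalman filter for decoding joint angles from projected
    neural features (a simplified stand-in for the time-delayed Kalman filter
    described above)."""

    def __init__(self, A, W, H, Q):
        self.A, self.W, self.H, self.Q = A, W, H, Q      # state/observation models

    def initialize(self, x0, P0):
        self.x, self.P = x0, P0

    def update(self, z):
        # Predict
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        # Correct with the current (delayed-feature) observation z
        S = self.H @ P_pred @ self.H.T + self.Q
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x

# Toy usage: 2 states (ankle and knee angle), 6-dimensional projected features.
rng = np.random.default_rng(0)
A = 0.99 * np.eye(2); W = 0.01 * np.eye(2)
H = rng.normal(size=(6, 2)); Q = 0.1 * np.eye(6)
kf = KalmanDecoder(A, W, H, Q)
kf.initialize(np.zeros(2), np.eye(2))
for _ in range(100):
    z = H @ np.array([0.5, -0.3]) + 0.1 * rng.normal(size=6)
    angles = kf.update(z)
print("decoded angles:", angles)
```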
Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien
2013-01-01
Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806
A novel framework for feature extraction in multi-sensor action potential sorting.
Wu, Shun-Chi; Swindlehurst, A Lee; Nenadic, Zoran
2015-09-30
Extracellular recordings of multi-unit neural activity have become indispensable in neuroscience research. The analysis of the recordings begins with the detection of the action potentials (APs), followed by a classification step where each AP is associated with a given neural source. A feature extraction step is required prior to classification in order to reduce the dimensionality of the data and the impact of noise, allowing source clustering algorithms to work more efficiently. In this paper, we propose a novel framework for multi-sensor AP feature extraction based on the so-called Matched Subspace Detector (MSD), which is shown to be a natural generalization of standard single-sensor algorithms. Clustering using both simulated data and real AP recordings taken in the locust antennal lobe demonstrates that the proposed approach yields features that are discriminatory and lead to promising results. Unlike existing methods, the proposed algorithm finds joint spatio-temporal feature vectors that match the dominant subspace observed in the two-dimensional data without the need for a forward propagation model or AP templates. The proposed MSD approach provides more discriminatory features for unsupervised AP sorting applications. Copyright © 2015 Elsevier B.V. All rights reserved.
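The core detection statistic can be illustrated as follows, under the assumption that a signal subspace basis is available (here taken from the left singular vectors of a few synthetic templates); this is a simplified single-snapshot illustration rather than the proposed multi-sensor spatio-temporal framework.

```python
import numpy as np

def msd_statistic(x, H):
    """Matched-subspace-style detection statistic: energy of the snapshot x
    inside the signal subspace spanned by the columns of H, relative to the
    residual energy outside it."""
    P = H @ np.linalg.pinv(H)                # orthogonal projector onto range(H)
    x_sig = P @ x
    x_res = x - x_sig
    return (x_sig @ x_sig) / (x_res @ x_res)

# Toy example: a 32-sample window and a 3-dimensional template subspace
# obtained, e.g., from the dominant left singular vectors of aligned spikes.
rng = np.random.default_rng(0)
templates = rng.normal(size=(32, 10))
H, _, _ = np.linalg.svd(templates, full_matrices=False)
H = H[:, :3]

spike_like = H @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=32)
noise_only = 0.05 * rng.normal(size=32)
print(msd_statistic(spike_like, H), msd_statistic(noise_only, H))
```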
Using Gaussian windows to explore a multivariate data set
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1991-01-01
In an earlier paper, I recounted an exploratory analysis, using Gaussian windows, of a data set derived from the Infrared Astronomical Satellite. Here, my goals are to develop strategies for finding structural features in a data set in a many-dimensional space, and to find ways to describe the shape of such a data set. After a brief review of Gaussian windows, I describe the current implementation of the method. I give some ways of describing features that we might find in the data, such as clusters and saddle points, and also extended structures such as a 'bar', which is an essentially one-dimensional concentration of data points. I then define a distance function, which I use to determine which data points are 'associated' with a feature. Data points not associated with any feature are called 'outliers'. I then explore the data set, giving the strategies that I used and quantitative descriptions of the features that I found, including clusters, bars, and a saddle point. I tried to use strategies and procedures that could, in principle, be used in any number of dimensions.
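A minimal sketch of the windowing idea follows: each point is weighted by a multivariate Gaussian centered at a chosen location, and the weighted mean and covariance summarize the local shape of the data. The window placement and parameters here are arbitrary choices for illustration.

```python
import numpy as np

def gaussian_window_stats(data, center, cov):
    """Weight each data point by a multivariate Gaussian 'window' and return
    the total weight plus the weighted mean and covariance of the points seen
    through the window."""
    diff = data - center
    inv = np.linalg.inv(cov)
    w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff))
    w_sum = w.sum()
    mean = (w[:, None] * data).sum(axis=0) / w_sum
    centered = data - mean
    weighted_cov = (w[:, None, None] *
                    np.einsum("ij,ik->ijk", centered, centered)).sum(axis=0) / w_sum
    return w_sum, mean, weighted_cov

# Toy 5-dimensional data with a dense cluster near the origin.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.5, size=(200, 5)), rng.normal(3, 1.0, size=(200, 5))])
w_sum, mean, cov = gaussian_window_stats(data, center=np.zeros(5), cov=np.eye(5))
print("weight in window:", w_sum, "\nlocal mean:", mean)
```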
Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji
2017-01-01
We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392
Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao
2017-09-01
Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact their mapping relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image, based on multiview feature embedding (MFE) and locality-sensitive autoencoders (LSAEs). First, we describe a manifold-regularized sparse low-rank approximation for MFE, through which the input image is characterized by a fused feature descriptor. Then, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of the proposed method.
Five-Dimensional Gauged Supergravity with Higher Derivatives
NASA Astrophysics Data System (ADS)
Hanaki, Kentaro
This thesis summarizes the recent developments on the study of five-dimensional gauged supergravity with higher derivative terms, emphasizing in particular the application to understanding the hydrodynamic properties of gauge theory plasma via the AdS/CFT correspondence. We first review how the ungauged and gauged five-dimensional supergravity actions with higher derivative terms can be constructed using the off-shell superconformal formalism. Then we relate the gauged supergravity to four-dimensional gauge theory using the AdS/CFT correspondence and extract the physical quantities associated with gauge theory plasma from the dual classical supergravity computations. We put a particular emphasis on the discussion of the conjectured lower bound for the shear viscosity over entropy density ratio proposed by Kovtun, Son and Starinets, and discuss how higher derivative terms in supergravity and the introduction of chemical potential for the R-charge affect this bound.
The importance of spatial ability and mental models in learning anatomy
NASA Astrophysics Data System (ADS)
Chatterjee, Allison K.
As a foundational course in medical education, gross anatomy serves to orient medical and veterinary students to the complex three-dimensional nature of the structures within the body. Understanding such spatial relationships is both fundamental and crucial for achievement in gross anatomy courses, and is essential for success as a practicing professional. Many things contribute to learning spatial relationships; this project focuses on a few key elements: (1) the types of multimedia resources, particularly computer-aided instructional (CAI) resources, that medical students used to study and learn; (2) the influence of spatial ability on medical and veterinary students' gross anatomy grades and their mental models; and (3) how medical and veterinary students think about anatomy and describe the features of their mental models to represent what they know about anatomical structures. The use of computer-aided instruction (CAI) by gross anatomy students at Indiana University School of Medicine (IUSM) was assessed through a questionnaire distributed to the regional centers of the IUSM. Students reported using internet browsing, PowerPoint presentation software, and email on a daily basis to study gross anatomy. This study reveals that first-year medical students at the IUSM make limited use of CAI to study gross anatomy. Such studies emphasize the importance of examining students' use of CAI to study gross anatomy prior to the development and integration of electronic media into the curriculum, and they may be important in future decisions regarding the development of alternative learning resources. In order to determine how students think about anatomical relationships and describe the features of their mental models, personal interviews were conducted with selected students based on their ROT scores. Five typologies of the characteristics of students' mental models were identified and described: spatial thinking, kinesthetic approach, identification of anatomical structures, problem solving strategies, and study methods. Students with different levels of spatial ability visualize and think about anatomy in qualitatively different ways, which is reflected by the features of their mental models. Low spatial ability students thought about and used two-dimensional images from the textbook. They possessed basic two-dimensional models of anatomical structures; they placed emphasis on diagrams and drawings in their studies; and they re-read anatomical problems many times before answering. High spatial ability students thought fully in three dimensions and imagined rotation and movement of the structures; they made use of many types of images and text as they studied and solved problems. They possessed elaborate three-dimensional models of anatomical structures which they were able to manipulate to solve problems; and they integrated diagrams, drawings, and written text in their studies. Middle spatial ability students were a mix of both low and high spatial ability students. They imagined two-dimensional images popping out of the flat paper to become more three-dimensional, but still relied on drawings and diagrams. Additionally, high spatial ability students used a higher proportion of anatomical terminology than low spatial ability or middle spatial ability students. This provides additional support to the premise that high spatial students' mental models are a complex mixture of imagistic representations and propositional representations that incorporate correct anatomical terminology.
Low spatial ability students focused on the function of structures and ways to group information primarily for the purpose of recall. This supports the theory that low spatial students' mental models will be characterized more by imagistic representations that are general in nature. (Abstract shortened by UMI.)
Integrated three-dimensional shape and reflection properties measurement system.
Krzesłowski, Jakub; Sitnik, Robert; Maczkowski, Grzegorz
2011-02-01
Creating accurate three-dimensional (3D) digitalized models of cultural heritage objects requires that information about surface geometry be integrated with measurements of other material properties like color and reflectance. Up until now, these measurements have been performed in laboratories using manually integrated (subjective) data analyses. We describe an out-of-laboratory bidirectional reflectance distribution function (BRDF) and 3D shape measurement system that implements shape and BRDF measurement in a single setup with BRDF uncertainty evaluation. The setup aligns spatial data with the angular reflectance distribution, yielding a better estimation of the surface's reflective properties by integrating these two modality measurements into one setup using a single detector. This approach provides a better picture of an object's intrinsic material features, which in turn produces a higher-quality digitalized model reconstruction. Furthermore, this system simplifies the data processing by combining structured light projection and photometric stereo. The results of our method of data analysis describe the diffusive and specular attributes corresponding to every measured geometric point and can be used to render intricate 3D models in an arbitrarily illuminated scene.
NASA Technical Reports Server (NTRS)
Delcourt, D. C.; Horwitz, J. L.; Swinney, K. R.
1988-01-01
The influence of the interplanetary magnetic field (IMF) orientation on the transport of low-energy ions injected from the ionosphere is investigated using three-dimensional particle codes. It is shown that, unlike the auroral zone outflow, the ions originating from the polar cap region exhibit drastically different drift paths during southward and northward IMF. During southward IMF orientation, a 'two-cell' convection pattern prevails in the ionosphere, and three-dimensional simulations of ion trajectories indicate a preferential trapping of the light ions H(+) in the central plasma sheet, due to the wide azimuthal dispersion of the heavy ions, O(+). In contrast, for northward IMF orientation, the 'four-cell' potential distribution predicted in the ionosphere imposes a temporary ion drift toward higher L shells in the central polar cap. In this case, while the light ions can escape into the magnetotail, the heavy ions can remain trapped, featuring more intense acceleration (from a few electron volts up to the keV range) followed by precipitation at high invariant latitudes, as a consequence of their further travel into the tail.
Homogeneous, anisotropic three-manifolds of topologically massive gravity
NASA Astrophysics Data System (ADS)
Nutku, Y.; Baekler, P.
1989-10-01
We present a new class of exact solutions of Deser, Jackiw, and Templeton's theory (DJT) of topologically massive gravity which consists of homogeneous, anisotropic manifolds. In these solutions the coframe is given by the left-invariant 1-forms of 3-dimensional Lie algebras up to constant scale factors. These factors are fixed in terms of the DJT coupling constant μ which is the constant of proportionality between the Einstein and Cotton tensors in 3 dimensions. Differences between the scale factors result in anisotropy which is a common feature of topologically massive 3-manifolds. We have found that only Bianchi Types VI, VIII, and IX lead to nontrivial solutions. Among these, a Bianchi Type IX, squashed 3-sphere solution of the Euclideanized DJT theory has finite action. Bianchi Type VIII, IX solutions can variously be embedded in the de Sitter/anti-de Sitter space. That is, some DJT 3-manifolds that we shall present here can be regarded as the basic constituent of anti-de Sitter space which is the ground state solution in higher-dimensional generalizations of Einstein's general relativity.
Realization of a scenario with two relaxation rates in the Hubbard Falicov-Kimball model
NASA Astrophysics Data System (ADS)
Barman, H.; Laad, M. S.; Hassan, S. R.
2018-02-01
A single transport relaxation rate governs the decay of both longitudinal and Hall currents in Landau Fermi liquids (FL). Breakdown of this fundamental feature, first observed in two-dimensional cuprates and subsequently in other three-dimensional correlated systems close to the Mott metal-insulator transition, played a pivotal role in the emergence of a non-FL (NFL) paradigm in higher dimensions D (>1). Motivated by this, we explore the emergence of this "two relaxation rates" scenario in the Hubbard Falicov-Kimball model (HFKM) using the dynamical mean-field theory (DMFT). Specializing to D = 3, we find, beyond a critical Falicov-Kimball (FK) interaction, that two distinct relaxation rates governing the distinct temperature (T) dependence of the longitudinal and Hall currents naturally emerge in the NFL metal. Our results show good accord with experiments on V2-yO3 near the metal-to-insulator transition (MIT). We rationalize this surprising finding by an analytical analysis of the structure of charge and spin Hamiltonians in the underlying impurity problem, specifically through a bosonization method applied to the Wolff model and connecting it to the x-ray edge problem.
Three-Dimensional Modeling of Quasi-Homologous Solar Jets
NASA Technical Reports Server (NTRS)
Pariat, E.; Antiochos, S. K.; DeVore, C. R.
2010-01-01
Recent solar observations (e.g., obtained with Hinode and STEREO) have revealed that coronal jets are a more frequent phenomenon than previously believed. This higher frequency results, in part, from the fact that jets exhibit a homologous behavior: successive jets recur at the same location with similar morphological features. We present the results of three-dimensional (3D) numerical simulations of our model for coronal jets. This study demonstrates the ability of the model to generate recurrent 3D untwisting quasi-homologous jets when a stress is constantly applied at the photospheric boundary. The homology results from the property of the 3D null-point system to relax to a state topologically similar to its initial configuration. In addition, we find two distinct regimes of reconnection in the simulations: an impulsive 3D mode involving a helical rotating current sheet that generates the jet, and a quasi-steady mode that occurs in a 2D-like current sheet located along the fan between the sheared spines. We argue that these different regimes can explain the observed link between jets and plumes.
Karbasi, Saeed; Khorasani, Saied Nouri; Ebrahimi, Somayeh; Khalili, Shahla; Fekrat, Farnoosh; Sadeghi, Davoud
2016-01-01
Background: Poly (hydroxy butyrate) (PHB) is a biodegradable and biocompatible polymer with good mechanical properties. This polymer could be a promising material for scaffolds if some of its features are improved. Materials and Methods: In the present work, new PHB/chitosan blend scaffolds were prepared as a three-dimensional substrate in cartilage tissue engineering. Chitosan in different weight percentages was added to PHB and dissolved in trifluoroacetic acid. The statistical Taguchi method was employed in the design of experiments. Results: The Fourier-transform infrared spectroscopy test revealed that the crystallization of PHB in these blends is suppressed as the amount of chitosan increases. Scanning electron microscopy images showed a thin and rough top layer with a nodular structure, supported with a porous sub-layer in the surface of the scaffolds. The in vitro degradation rate of the scaffolds was higher than that of pure PHB scaffolds. The maximum degradation rate was seen for the scaffold with 90% wt. NaCl and 40% wt. chitosan. Conclusions: The obtained results suggest that these newly developed PHB/chitosan blend scaffolds may serve as a three-dimensional substrate in cartilage tissue engineering. PMID:28028517
Lew, Matthew D.; Thompson, Michael A.; Badieirostami, Majid; Moerner, W. E.
2010-01-01
The point spread function (PSF) of a widefield fluorescence microscope is not suitable for three-dimensional super-resolution imaging. We characterize the localization precision of a unique method for 3D superresolution imaging featuring a double-helix point spread function (DH-PSF). The DH-PSF is designed to have two lobes that rotate about their midpoint in any transverse plane as a function of the axial position of the emitter. In effect, the PSF appears as a double helix in three dimensions. By comparing the Cramer-Rao bound of the DH-PSF with the standard PSF as a function of the axial position, we show that the DH-PSF has a higher and more uniform localization precision than the standard PSF throughout a 2 μm depth of field. Comparisons between the DH-PSF and other methods for 3D super-resolution are briefly discussed. We also illustrate the applicability of the DH-PSF for imaging weak emitters in biological systems by tracking the movement of quantum dots in glycerol and in live cells. PMID:20563317
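To make the localization step concrete, the sketch below (not from the paper) estimates an emitter's axial position from the orientation of the line joining the two DH-PSF lobes, assuming a purely linear angle-to-depth calibration. The lobe coordinates, the slope `deg_per_micron`, and `angle_at_focus` are hypothetical stand-ins for a real calibration curve measured with fiducial beads on a piezo stage.

```python
import numpy as np

def dh_psf_axial_position(lobe1, lobe2, deg_per_micron=30.0, angle_at_focus=0.0):
    """Estimate the axial (z) position from the rotation angle of the two DH-PSF lobes.
    `deg_per_micron` and `angle_at_focus` stand in for an experimentally measured
    calibration; the linear relation only holds within the calibrated depth range."""
    dy = lobe2[1] - lobe1[1]
    dx = lobe2[0] - lobe1[0]
    angle = np.degrees(np.arctan2(dy, dx))            # lobe-pair orientation
    return (angle - angle_at_focus) / deg_per_micron  # microns

# Lateral (x, y) position is simply the midpoint of the two fitted lobes.
lobe1, lobe2 = np.array([10.2, 14.8]), np.array([13.1, 16.0])
print(dh_psf_axial_position(lobe1, lobe2), (lobe1 + lobe2) / 2)
```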
NASA Astrophysics Data System (ADS)
Sohail, Sara H.; Dahlberg, Peter D.; Allodi, Marco A.; Massey, Sara C.; Ting, Po-Chieh; Martin, Elizabeth C.; Hunter, C. Neil; Engel, Gregory S.
2017-10-01
In photosynthetic organisms, the pigment-protein complexes that comprise the light-harvesting antenna exhibit complex electronic structures and ultrafast dynamics due to the coupling among the chromophores. Here, we present absorptive two-dimensional (2D) electronic spectra from living cultures of the purple bacterium, Rhodobacter sphaeroides, acquired using gradient assisted photon echo spectroscopy. Diagonal slices through the 2D lineshape of the LH1 stimulated emission/ground state bleach feature reveal a resolvable higher energy population within the B875 manifold. The waiting time evolution of diagonal, horizontal, and vertical slices through the 2D lineshape shows a sub-100 fs intra-complex relaxation as this higher energy population red shifts. The absorption (855 nm) of this higher lying sub-population of B875 before it has red shifted optimizes spectral overlap between the LH1 B875 band and the B850 band of LH2. Access to an energetically broad distribution of excitonic states within B875 offers a mechanism for efficient energy transfer from LH2 to LH1 during photosynthesis while limiting back transfer. Two-dimensional lineshapes reveal a rapid decay in the ground-state bleach/stimulated emission of B875. This signal, identified as a decrease in the dipole strength of a strong transition in LH1 on the red side of the B875 band, is assigned to the rapid localization of an initially delocalized exciton state, a dephasing process that frustrates back transfer from LH1 to LH2.
Higher dimensional Taub-NUT spaces and applications
NASA Astrophysics Data System (ADS)
Stelea, Cristian Ionut
In the first part of this thesis we discuss classes of new exact NUT-charged solutions in four dimensions and higher, while in the remainder of the thesis we make a study of their properties and their possible applications. Specifically, in four dimensions we construct new families of axisymmetric vacuum solutions using a solution-generating technique based on the hidden SL(2,R) symmetry of the effective action. In particular, using the Schwarzschild solution as a seed we obtain the Zipoy-Voorhees generalisation of the Taub-NUT solution and of the Eguchi-Hanson soliton. Using the C-metric as a seed, we obtain and study the accelerating versions of all the above solutions. In higher dimensions we present new classes of NUT-charged spaces, generalising the previously known even-dimensional solutions to odd and even dimensions, as well as to spaces with multiple NUT-parameters. We also find the most general form of the odd-dimensional Eguchi-Hanson solitons. We use such solutions to investigate the thermodynamic properties of NUT-charged spaces in (A)dS backgrounds. These have been shown to yield counter-examples to some of the conjectures advanced in the still elusive dS/CFT paradigm (such as the maximal mass conjecture and Bousso's entropic N-bound). One important application of NUT-charged spaces is to construct higher dimensional generalisations of Kaluza-Klein magnetic monopoles, generalising the known 5-dimensional Kaluza-Klein soliton. Another interesting application involves a study of time-dependent higher-dimensional bubbles-of-nothing generated from NUT-charged solutions. We use them to test the AdS/CFT conjecture as well as to generate, by using stringy Hopf-dualities, new interesting time-dependent solutions in string theory. Finally, we construct and study new NUT-charged solutions in higher-dimensional Einstein-Maxwell theories, generalising the known Reissner-Nordstrom solutions.
Amis, Gregory P; Carpenter, Gail A
2010-03-01
Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.eu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.
Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X
2010-05-01
Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
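A minimal sketch of the contrast drawn above, restricted to the linear, single-elimination case: RFE drops the feature with the smallest squared weight, while a margin-based criterion drops the feature whose removal (with the remaining weights held fixed) leaves the largest geometric margin. The toy data, scikit-learn classifier, and single elimination step are illustrative assumptions; the paper's MFE additionally covers nonlinear kernels, intercept/norm re-optimization, and margin slackness.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Hypothetical high-dimensional data standing in for gene-microarray features.
X, y = make_classification(n_samples=100, n_features=30, n_informative=5, random_state=0)
y_pm = 2 * y - 1                                     # labels in {-1, +1}

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]

# RFE criterion: eliminate the feature with the smallest squared weight.
rfe_candidate = int(np.argmin(w ** 2))

# Margin-based criterion: eliminate the feature whose removal, with the remaining
# weights held fixed, leaves the largest geometric margin on the training set.
def margin_without(j):
    keep = np.arange(X.shape[1]) != j
    scores = y_pm * (X[:, keep] @ w[keep] + b)
    return scores.min() / np.linalg.norm(w[keep])

mfe_candidate = int(np.argmax([margin_without(j) for j in range(X.shape[1])]))
print("RFE would drop feature", rfe_candidate, "| margin-based choice:", mfe_candidate)
```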
NASA Astrophysics Data System (ADS)
Dashti-Naserabadi, H.; Najafi, M. N.
2017-10-01
We present extensive numerical simulations of the Bak-Tang-Wiesenfeld (BTW) sandpile model on the hypercubic lattice in the upper critical dimension D_u = 4. After re-extracting the critical exponents of avalanches, we concentrate on the three- and two-dimensional (2D) cross sections, seeking the induced criticality that is reflected in the geometrical and local exponents. Various features of finite-size scaling (FSS) theory have been tested and confirmed for all dimensions. The hyperscaling relations between the exponents of the distribution functions and the fractal dimensions are shown to be valid for all dimensions. We found that the exponent of the distribution function of avalanche mass is the same for the d-dimensional cross sections and the d-dimensional BTW model for d = 2 and 3. The geometrical quantities, however, have completely different behaviors with respect to the same-dimensional BTW model. By analyzing the FSS theory for the geometrical exponents of the two-dimensional cross sections, we propose that the 2D induced models have degrees of similarity with the Gaussian free field (GFF). Although some local exponents are slightly different, this similarity is excellent for the fractal dimensions. The most important one showing this feature is the fractal dimension of loops d_f, which is found to be 1.50 ± 0.02 ≈ 3/2 = d_f^GFF.
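For readers unfamiliar with the model, the sketch below is a minimal two-dimensional BTW sandpile with open boundaries, recording avalanche sizes under slow driving. It is only the basic dynamics; the paper works in the upper critical dimension D_u = 4 and analyzes lower-dimensional cross sections, which this toy does not reproduce. Lattice size and the number of added grains are arbitrary.

```python
import numpy as np

def btw_avalanche(z, threshold=4):
    """Relax a 2D BTW sandpile after one grain addition; return the avalanche size
    (number of topplings). Grains falling off the open boundary are lost."""
    size = 0
    while True:
        unstable = np.argwhere(z >= threshold)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            z[i, j] -= 4
            size += 1
            if i > 0: z[i - 1, j] += 1
            if i < z.shape[0] - 1: z[i + 1, j] += 1
            if j > 0: z[i, j - 1] += 1
            if j < z.shape[1] - 1: z[i, j + 1] += 1

rng = np.random.default_rng(1)
L = 32
z = rng.integers(0, 4, size=(L, L))
sizes = []
for _ in range(2000):                     # drive slowly: one grain at a time
    i, j = rng.integers(0, L, size=2)
    z[i, j] += 1
    sizes.append(btw_avalanche(z))
print("mean nonzero avalanche size:", np.mean([s for s in sizes if s > 0]))
```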
NASA Astrophysics Data System (ADS)
Lestari, A. W.; Rustam, Z.
2017-07-01
In the last decade, breast cancer has become the focus of world attention, as this disease is one of the leading causes of death for women. Therefore, correct precautions and treatment are necessary. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify the high dimensional data of breast cancer using Fuzzy Possibilistic C-means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in classification of breast cancer data. In order to improve the accuracy of the two methods, candidate features are evaluated using feature selection based on the Laplacian Score. The results compare the accuracy and running time of FPCM and NKFPCM with and without feature selection.
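As a minimal illustration of the feature-selection step mentioned above, the sketch below computes Laplacian Scores (He, Cai, and Niyogi style) on hypothetical data: features with lower scores better preserve the local neighborhood structure and would be retained. The k-NN graph construction, heat-kernel bandwidth, and toy data are assumptions; the clustering stages (FPCM/NKFPCM) are not reproduced.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_scores(X, k=5, t=1.0):
    """Laplacian Score per feature (lower = better at preserving local structure)."""
    # k-NN affinity graph with heat-kernel weights, symmetrized.
    W = kneighbors_graph(X, k, mode="distance").toarray()
    W[W > 0] = np.exp(-W[W > 0] ** 2 / t)
    W = np.maximum(W, W.T)
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # graph Laplacian
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ d) / d.sum()           # remove the degree-weighted mean
        scores.append((f @ L @ f) / (f @ (d * f)))
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))              # hypothetical feature matrix
ranking = np.argsort(laplacian_scores(X))   # most relevant features first
print(ranking[:5])
```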
NASA Astrophysics Data System (ADS)
Brookshire, B. N., Jr.; Mattox, B. A.; Parish, A. E.; Burks, A. G.
2016-02-01
Utilizing recently advanced ultrahigh-resolution 3-dimensional (UHR3D) seismic tools we have imaged the seafloor geomorphology and associated subsurface aspects of seep related expulsion features along the continental slope of the northern Gulf of Mexico with unprecedented clarity and continuity. Over an area of approximately 400 km2, over 50 discrete features were identified and three general seafloor geomorphologies indicative of seep activity including mounds, depressions and bathymetrically complex features were quantitatively characterized. Moreover, areas of high seafloor reflectivity indicative of mineralization and areas of coherent seismic amplitude anomalies in the near-seafloor water column indicative of active gas expulsion were identified. In association with these features, shallow source gas accumulations and migration pathways based on salt related stratigraphic uplift and faulting were imaged. Shallow, bottom simulating reflectors (BSRs) interpreted to be free gas trapped under near seafloor gas hydrate accumulations were very clearly imaged.
NASA Astrophysics Data System (ADS)
Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.
2016-03-01
A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules including feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for the classification of patients to responders/non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on Hilbert-Schmidt independence criterion (HSIC) was used for the design of feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken from "pre-" and "mid-treatment" with "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure used a kernel function to nonlinearly transform input vectors into a higher dimensional feature space and computed the population means in the new space, where enhanced group separability was ideally obtained. The results of the classification using the developed CAT system indicated an improvement of performance compared to a CAT system with basic features using histogram of intensity.
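The quantity underlying both the dictionary-learning objective and the dissimilarity measure described above is HSIC. The sketch below computes a biased empirical HSIC estimate with Gaussian kernels under one common normalization; the kernel bandwidths and the toy "pre-" and "mid-treatment" feature matrices are placeholders, and the full SDL and classification pipeline is not reproduced.

```python
import numpy as np

def rbf_kernel(A, sigma):
    d2 = np.sum(A ** 2, axis=1)[:, None] + np.sum(A ** 2, axis=1)[None, :] - 2 * A @ A.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical Hilbert-Schmidt independence criterion between X and Y."""
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma_x), rbf_kernel(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
pre = rng.normal(size=(40, 10))                       # hypothetical "pre-treatment" features
mid = pre + 0.3 * rng.normal(size=(40, 10))           # hypothetical "mid-treatment" features
print(hsic(pre, mid), hsic(pre, rng.normal(size=(40, 10))))  # dependent vs independent
```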
Asseln, Malte; Hänisch, Christoph; Schick, Fabian; Radermacher, Klaus
2018-05-14
Morphological differences between female and male knees have been reported in the literature, which led to the development of so-called gender-specific implants. However, detailed morphological descriptions covering the entire joint are rare and little is known regarding whether gender differences are real sexual dimorphisms or can be explained by overall differences in size. We comprehensively analysed knee morphology using 33 features of the femur and 21 features of the tibia to quantify knee shape. The landmark recognition and feature extraction based on three-dimensional surface data were fully automatically applied to 412 pathological (248 female and 164 male) knees undergoing total knee arthroplasty. Subsequently, an exploratory statistical analysis was performed and linear correlation analysis was used to investigate normalization factors and gender-specific differences. Statistically significant differences between genders were observed. These were pronounced for distance measurements and negligible for angular (relative) measurements. Female knees were significantly narrower at the same depth compared to male knees. The correlation analysis showed that linear correlations were higher for distance measurements defined in the same direction. After normalizing the distance features according to overall dimensions in the direction of their definition, gender-specific differences disappeared or were smaller than the related confidence intervals. Implants should not be linearly scaled according to one dimension. Instead, features in medial/lateral and anterior/posterior directions should be normalized separately (non-isotropic scaling). However, large inter-individual variations of the features remain after normalization, suggesting that patient-specific design solutions are required for an improved implant design, regardless of gender. Copyright © 2018 Elsevier B.V. All rights reserved.
Multiview alignment hashing for efficient image search.
Liu, Li; Yu, Mengyang; Shao, Ling
2015-03-01
Hashing is a popular and efficient method for nearest neighbor search in large-scale data spaces by embedding high-dimensional feature descriptors into a similarity preserving Hamming space with a low dimension. For most hashing methods, the performance of retrieval heavily depends on the choice of the high-dimensional feature descriptor. Furthermore, a single type of feature cannot be descriptive enough for different images when it is used for hashing. Thus, how to combine multiple representations for learning effective hashing functions is an imminent task. In this paper, we present a novel unsupervised multiview alignment hashing approach based on regularized kernel nonnegative matrix factorization, which can find a compact representation uncovering the hidden semantics and simultaneously respecting the joint probability distribution of data. In particular, we aim to seek a matrix factorization to effectively fuse the multiple information sources meanwhile discarding the feature redundancy. Since the raised problem is regarded as nonconvex and discrete, our objective function is then optimized via an alternate way with relaxation and converges to a locally optimal solution. After finding the low-dimensional representation, the hashing functions are finally obtained through multivariable logistic regression. The proposed method is systematically evaluated on three data sets: 1) Caltech-256; 2) CIFAR-10; and 3) CIFAR-20, and the results show that our method significantly outperforms the state-of-the-art multiview hashing techniques.
Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.
In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing on intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.
Why is the World four-dimensional? Hermann Weyl’s 1955 argument and the topology of causation
NASA Astrophysics Data System (ADS)
De Bianchi, Silvia
2017-08-01
This paper approaches the question of space dimensionality by discussing a neglected argument proposed by Hermann Weyl in 1955. In Why is the World Four-Dimensional? (1955), Weyl offered a different argument from the one generally attributed to him and presented in Raum-Zeit-Materie. In the first sections of the paper, this new argument and its features are spelled out, and in the last section, I shall develop some useful remarks on the concept of topology of causation that can still inform our reflection on the dimensionality of the world.
Role of dimensionality in Axelrod's model for the dissemination of culture
NASA Astrophysics Data System (ADS)
Klemm, Konstantin; Eguíluz, Víctor M.; Toral, Raúl; Miguel, Maxi San
2003-09-01
We analyze a model of social interaction in one- and two-dimensional lattices for a moderate number of features. We introduce an order parameter as a function of the overlap between neighboring sites. In a one-dimensional chain, we observe that the dynamics is consistent with a second-order transition, where the order parameter changes continuously and the average domain size diverges at the transition point. However, in a two-dimensional lattice the order parameter is discontinuous at the transition point, characteristic of a first-order transition between an ordered and a disordered state.
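For reference, a minimal one-dimensional version of the Axelrod dynamics described above can be written in a few lines: a randomly chosen site interacts with a random neighbor with probability equal to their cultural overlap and, if they interact, copies one feature on which they disagree. The chain length, number of features F, number of traits q, and step count below are arbitrary, and the order parameter and scaling analysis of the paper are not reproduced.

```python
import numpy as np

def axelrod_1d(n_sites=200, F=5, q=10, steps=200000, seed=0):
    """One-dimensional Axelrod model on a periodic chain; returns the fraction of
    neighboring pairs that end up sharing all F cultural features."""
    rng = np.random.default_rng(seed)
    culture = rng.integers(0, q, size=(n_sites, F))
    for _ in range(steps):
        i = rng.integers(n_sites)
        j = (i + rng.choice([-1, 1])) % n_sites       # random neighbor on the chain
        same = culture[i] == culture[j]
        overlap = same.mean()
        if 0 < overlap < 1 and rng.random() < overlap:
            k = rng.choice(np.flatnonzero(~same))     # a feature they disagree on
            culture[i, k] = culture[j, k]
    return np.mean((culture == np.roll(culture, -1, axis=0)).all(axis=1))

print(axelrod_1d())
```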
30 CFR 550.214 - What geological and geophysical (G&G) information must accompany the EP?
Code of Federal Regulations, 2012 CFR
2012-07-01
... already submitted it to the Regional Supervisor. (f) Shallow hazards assessment. For each proposed well, an assessment of any seafloor and subsurface geological and manmade features and conditions that may...-bearing reservoir showing the locations of proposed wells. (c) Two-dimensional (2-D) or three-dimensional...
30 CFR 550.214 - What geological and geophysical (G&G) information must accompany the EP?
Code of Federal Regulations, 2014 CFR
2014-07-01
... already submitted it to the Regional Supervisor. (f) Shallow hazards assessment. For each proposed well, an assessment of any seafloor and subsurface geological and manmade features and conditions that may...-bearing reservoir showing the locations of proposed wells. (c) Two-dimensional (2-D) or three-dimensional...
30 CFR 550.214 - What geological and geophysical (G&G) information must accompany the EP?
Code of Federal Regulations, 2013 CFR
2013-07-01
... already submitted it to the Regional Supervisor. (f) Shallow hazards assessment. For each proposed well, an assessment of any seafloor and subsurface geological and manmade features and conditions that may...-bearing reservoir showing the locations of proposed wells. (c) Two-dimensional (2-D) or three-dimensional...
Querying Patterns in High-Dimensional Heterogenous Datasets
ERIC Educational Resources Information Center
Singh, Vishwakarma
2012-01-01
The recent technological advancements have led to the availability of a plethora of heterogenous datasets, e.g., images tagged with geo-location and descriptive keywords. An object in these datasets is described by a set of high-dimensional feature vectors. For example, a keyword-tagged image is represented by a color-histogram and a…
Dimensionally Stable Ether-Containing Polyimide Copolymers
NASA Technical Reports Server (NTRS)
Fay, Catharine C. (Inventor); St.Clair, Anne K. (Inventor)
1999-01-01
Novel polyimide copolymers containing ether linkages were prepared by the reaction of an equimolar amount of dianhydride and a combination of diamines. The polyimide copolymers described herein possess the unique features of low moisture uptake, dimensional stability, good mechanical properties, and moderate glass transition temperatures. These materials have potential application as encapsulants and interlayer dielectrics.
Social Presence and Motivation in a Three-Dimensional Virtual World: An Explanatory Study
ERIC Educational Resources Information Center
Yilmaz, Rabia M.; Topu, F. Burcu; Goktas, Yuksel; Coban, Murat
2013-01-01
Three-dimensional (3-D) virtual worlds differ from other learning environments in their similarity to real life, providing opportunities for more effective communication and interaction. With these features, 3-D virtual worlds possess considerable potential to enhance learning opportunities. For effective learning, the users' motivation levels and…
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
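A minimal sketch of the dimensionality-reduction half of this approach, assuming a scikit-learn KernelPCA with an RBF kernel: a prior ensemble of flattened random-field realizations is projected onto a low-dimensional feature space, and approximate pre-images map feature-space samples back to fields for the forward model. The ensemble, kernel parameters, and field size are placeholders; the Langevin MCMC, adjoint gradients, and the elasticity model are beyond this sketch.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical prior ensemble of spatial random-field realizations (flattened).
rng = np.random.default_rng(0)
prior_fields = rng.normal(size=(500, 4096))      # 500 samples of a 64x64 field

# Identify a low-dimensional feature space capturing the nonlinear correlations.
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True).fit(prior_fields)

def to_feature_space(field):
    return kpca.transform(field.reshape(1, -1))[0]

def to_parameter_space(z):
    # Approximate pre-image: the field handed to the expensive forward model.
    return kpca.inverse_transform(z.reshape(1, -1))[0]

z0 = to_feature_space(prior_fields[0])
print(z0.shape, to_parameter_space(z0).shape)    # (20,) and (4096,)
```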
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
NASA Astrophysics Data System (ADS)
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly-undetectable stegosystems for real digital media. The main design principle is to minimize a suitably-defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
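HUGO itself minimizes a feature-space distortion with an efficient coding stage, which is beyond a short sketch, but the LSB-matching baseline it is compared against can be written in a few lines: wherever a cover pixel's least significant bit disagrees with the message bit, the pixel is randomly incremented or decremented by one. The 8-bit grayscale cover, random message, and border handling below are illustrative assumptions.

```python
import numpy as np

def lsb_matching_embed(cover, message_bits, seed=0):
    """±1 (LSB matching) embedding into the first len(message_bits) pixels."""
    rng = np.random.default_rng(seed)
    stego = cover.astype(np.int16).ravel().copy()
    for i, bit in enumerate(message_bits):
        if (stego[i] & 1) != bit:            # adding or subtracting 1 flips the LSB
            if stego[i] == 0:
                stego[i] += 1
            elif stego[i] == 255:
                stego[i] -= 1
            else:
                stego[i] += rng.choice([-1, 1])
    return stego.reshape(cover.shape).astype(np.uint8)

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = np.random.default_rng(2).integers(0, 2, size=1000)
stego = lsb_matching_embed(cover, bits)
print(np.array_equal(stego.ravel()[:1000] & 1, bits))   # message is recoverable
```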
Membership-degree preserving discriminant analysis with applications to face recognition.
Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun
2013-01-01
In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA) based on the Fisher criterion and fuzzy set theory for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and class centers, and then the membership degree is incorporated into the definition of the between-class scatter and the within-class scatter. The feature extraction criterion via maximizing the ratio of the between-class scatter to the within-class scatter is applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
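The membership-degree step can be illustrated with a Keller-style fuzzy k-NN initialization: each training sample keeps at least 0.51 membership in its own class, and the remainder is spread according to the class composition of its k nearest neighbors. The toy data, k, and the 0.51/0.49 weighting below are illustrative assumptions, and the scatter-matrix construction and projection of MPDA are not reproduced.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fknn_memberships(X, y, k=5):
    """Membership degree of each training sample to each class (fuzzy k-NN style)."""
    classes = np.unique(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    idx = idx[:, 1:]                          # drop the sample itself
    U = np.zeros((len(X), len(classes)))
    for i in range(len(X)):
        counts = np.array([(y[idx[i]] == c).sum() for c in classes]) / k
        U[i] = 0.49 * counts                  # share from the neighborhood composition
        U[i, np.searchsorted(classes, y[i])] += 0.51   # own-class floor
    return U

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(2, 1, (30, 10))])  # toy features
y = np.repeat([0, 1], 30)
print(fknn_memberships(X, y)[:3])
```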
Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA
NASA Astrophysics Data System (ADS)
He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong
2018-04-01
This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, based on images from video sequences, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a Multi-Layer Perceptron classifier is used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition (99.41% for the eyes part, 98.16% for the nose part and 97.25% for the whole face).
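The two-dimensional PCA half of such a hybrid extractor operates directly on image matrices rather than on flattened vectors; the sketch below shows this step on hypothetical cropped eye patches (the 2DLDA stage and the MLP classifier are omitted, and the patch size and number of components are placeholders).

```python
import numpy as np

def two_d_pca(images, n_components=5):
    """2DPCA: eigenvectors of the image covariance G = mean((A - Abar)^T (A - Abar));
    each image matrix is projected column-wise onto the leading eigenvectors."""
    A_bar = images.mean(axis=0)
    G = np.mean([(A - A_bar).T @ (A - A_bar) for A in images], axis=0)
    eigvals, eigvecs = np.linalg.eigh(G)          # ascending eigenvalue order
    V = eigvecs[:, ::-1][:, :n_components]        # leading eigenvectors
    return np.stack([A @ V for A in images]), V

rng = np.random.default_rng(0)
eye_patches = rng.normal(size=(40, 24, 32))       # hypothetical 24x32 eye images
features, V = two_d_pca(eye_patches, n_components=5)
print(features.shape)                             # (40, 24, 5)
```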
Hidden Order and Dimensional Crossover of the Charge Density Waves in TiSe2
Chen, P.; Chan, Y. -H.; Fang, X. -Y.; ...
2016-11-29
Charge density wave (CDW) formation, a key physics issue for materials, arises from interactions among electrons and phonons that can also lead to superconductivity and other competing or entangled phases. The prototypical system TiSe2, with a particularly simple (2 × 2 × 2) transition and no Kohn anomalies caused by electron-phonon coupling, is a fascinating but unsolved case after decades of research. Our angle-resolved photoemission measurements of the band structure as a function of temperature, aided by first-principles calculations, reveal a hitherto undetected but crucial feature: a (2 × 2) electronic order in each layer sets in at ~232 K before the widely recognized three-dimensional structural order at ~205 K. The dimensional crossover, likely a generic feature of such layered materials, involves renormalization of different band gaps in two stages.
Engineering Weyl Superfluid in Ultracold Fermionic Gases by One-Dimensional Optical Superlattices
NASA Astrophysics Data System (ADS)
Huang, Beibing
2018-01-01
In this paper, we theoretically demonstrate that, by using one-dimensional superlattices to couple two-dimensional time-reversal-breaking gapped topological superfluid models, an anomalous Weyl superfluid (WS) can be obtained. This new phase features unique Fermi arc states (FAS) on its surfaces. In the conventional WS, FAS exist only for a part of the line connecting the projections of Weyl points and extending to the border and/or center of the surface Brillouin zone. But for the anomalous WS, FAS exist for the whole line. As a proof of principle, we demonstrate self-consistently, at the mean-field level, the realization of the anomalous WS in a model with a dichromatic superlattice. In addition, inversion symmetry and band inversion in this model are analyzed to provide unique signatures for identifying the anomalous WS experimentally by momentum-resolved radio-frequency spectroscopy.
Gentry, Amanda Elswick; Jackson-Cook, Colleen K; Lyon, Debra E; Archer, Kellie J
2015-01-01
The pathological description of the stage of a tumor is an important clinical designation and is considered, like many other forms of biomedical data, an ordinal outcome. Currently, statistical methods for predicting an ordinal outcome using clinical, demographic, and high-dimensional correlated features are lacking. In this paper, we propose a method that fits an ordinal response model to predict an ordinal outcome for high-dimensional covariate spaces. Our method penalizes some covariates (high-throughput genomic features) without penalizing others (such as demographic and/or clinical covariates). We demonstrate the application of our method to predict the stage of breast cancer. In our model, breast cancer subtype is a nonpenalized predictor, and CpG site methylation values from the Illumina Human Methylation 450K assay are penalized predictors. The method has been made available in the ordinalgmifs package in the R programming environment.
Diagnostic analysis of two-dimensional monthly average ozone balance with Chapman chemistry
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Jackman, Charles H.; Kaye, Jack A.
1986-01-01
Chapman chemistry has been used in a two-dimensional model to simulate ozone balance phenomenology. The similarity between regions of ozone production and loss calculated using Chapman chemistry and those computed using LIMS and SAMS data with a photochemical equilibrium model indicate that such simplified chemistry is useful in studying gross features in stratospheric ozone balance. Net ozone production or loss rates are brought about by departures from the photochemical equilibrium (PCE) condition. If transport drives ozone above its PCE condition, then photochemical loss dominates production. If transport drives ozone below its PCE condition, then photochemical production dominates loss. Gross features of ozone loss/production (L/P) inferred for the real atmosphere from data are also simulated using only eddy diffusion. This indicates that one must be careful in assigning a transport scheme for a two-dimensional model that mimics only behavior of the observed ozone L/P.
Pan, Rui; Wang, Hansheng; Li, Runze
2016-01-01
This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109
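A simplified screening statistic in the spirit of the procedure described above can be computed feature by feature: for every pair of classes, take the standardized mean difference, keep the maximum over all pairs, and retain the top-ranked features. The exact statistic and the theoretical screening-consistency conditions of the paper are not reproduced; the toy data, pooled-variance standardization, and cutoff below are assumptions.

```python
import numpy as np

def pairwise_screening(X, y, top_k=50):
    """Rank features by the largest standardized pairwise mean difference
    across all class pairs; return the indices of the retained features."""
    classes = np.unique(y)
    stat = np.zeros(X.shape[1])
    for a in range(len(classes)):
        for b in range(a + 1, len(classes)):
            Xa, Xb = X[y == classes[a]], X[y == classes[b]]
            pooled_sd = np.sqrt(0.5 * (Xa.var(axis=0) + Xb.var(axis=0))) + 1e-12
            stat = np.maximum(stat, np.abs(Xa.mean(axis=0) - Xb.mean(axis=0)) / pooled_sd)
    return np.argsort(stat)[::-1][:top_k]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2000))              # ultrahigh-dimensional toy predictors
y = rng.integers(0, 10, size=300)             # relatively many classes
X[:, 7] += y                                  # make feature 7 informative
print(7 in pairwise_screening(X, y, top_k=20))
```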
A new procedure for dynamic adaption of three-dimensional unstructured grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
Global Aeroheating Measurements of Shock-Shock Interactions on a Swept Cylinder
NASA Technical Reports Server (NTRS)
Mason, Michelle L.; Berry, Scott A.
2015-01-01
The effects of fin leading-edge radius and sweep angle on peak heating rates due to shock-shock interactions were investigated in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel. The cylindrical leading-edge fin models, with radii varied from 0.25 to 0.75 inches, represent wings or struts on hypersonic vehicles. A 9° wedge generated a planar oblique shock at 16.7° to the flow that intersected the fin bow shock, producing a shock-shock interaction that impinged on the fin leading edge. The fin sweep angle was varied from 0° (normal to the free stream) to 15° and 25° swept forward. These cases were chosen to explore three characterized shock-shock interaction types. Global temperature data were obtained from the surface of the fused silica fins using phosphor thermography. Metal oil flow models with the same geometries as the fused silica models were used to visualize the streamline patterns for each angle of attack. High-speed zoom-schlieren videos were recorded to show the features and any temporal unsteadiness of the shock-shock interactions. The temperature data were analyzed using a one-dimensional semi-infinite method, as well as one- and two-dimensional finite-volume methods. These results were compared to determine the proper heat transfer analysis approach to minimize errors from lateral heat conduction due to the presence of strong surface temperature gradients induced by the shock interactions. The general trends in the leading-edge heat transfer behavior were similar for each explored shock-shock interaction type regardless of the leading-edge radius. However, the dimensional peak heat transfer coefficient augmentation increased with decreasing leading-edge radius. The dimensional peak heat transfer output from the two-dimensional code was about 20% higher than the value from a standard, semi-infinite one-dimensional method.
Four-dimensional wavelet compression of arbitrarily sized echocardiographic data.
Zeng, Li; Jansen, Christian P; Marsch, Stephan; Unser, Michael; Hunziker, Patrick R
2002-09-01
Wavelet-based methods have become most popular for the compression of two-dimensional medical images and sequences. The standard implementations consider data sizes that are powers of two. There is also a large body of literature treating issues such as the choice of the "optimal" wavelets and the performance comparison of competing algorithms. With the advent of telemedicine, there is a strong incentive to extend these techniques to higher dimensional data such as dynamic three-dimensional (3-D) echocardiography [four-dimensional (4-D) datasets]. One of the practical difficulties is that the size of this data is often not a multiple of a power of two, which can lead to increased computational complexity and impaired compression power. Our contribution in this paper is to present a genuine 4-D extension of the well-known zerotree algorithm for arbitrarily sized data. The key component of our method is a one-dimensional wavelet algorithm that can handle arbitrarily sized input signals. The method uses a pair of symmetric/antisymmetric wavelets (10/6) together with some appropriate midpoint symmetry boundary conditions that reduce border artifacts. The zerotree structure is also adapted so that it can accommodate noneven data splitting. We have applied our method to the compression of real 3-D dynamic sequences from clinical cardiac ultrasound examinations. Our new algorithm compares very favorably with other more ad hoc adaptations (image extension and tiling) of the standard powers-of-two methods, in terms of both compression performance and computational cost. It is vastly superior to slice-by-slice wavelet encoding. This was seen not only in numerical image quality parameters but also in expert ratings, where significant improvement using the new approach could be documented. Our validation experiments show that one can safely compress 4-D data sets at ratios of 128:1 without compromising the diagnostic value of the images. We also display some more extreme compression results at ratios of 2000:1 where some key diagnostically relevant features are preserved.
TORRES, Fernanda Ferrari Esteves; BOSSO-MARTELO, Roberta; ESPIR, Camila Galletti; CIRELLI, Joni Augusto; GUERREIRO-TANOMARU, Juliane Maria; TANOMARU-FILHO, Mario
2017-01-01
Abstract Objective To evaluate solubility, dimensional stability, filling ability and volumetric change of root-end filling materials using conventional tests and new Micro-CT-based methods. Results The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility and at 30 days, showed higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days. At 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). Conclusions ZOE presented a higher dimensional change, and BIO had greater solubility after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA after 30 days. Micro-CT can provide important data on the physicochemical properties of materials complementing conventional tests. PMID:28877275
Topology for Dominance for Network of Multi-Agent System
NASA Astrophysics Data System (ADS)
Szeto, K. Y.
2007-05-01
The resource allocation problem in evolving two-dimensional point patterns is investigated for the existence of good strategies for the construction of an initial configuration that leads to fast dominance of the pattern by one single species, which can be interpreted as market dominance by a company in the context of multi-agent systems in econophysics. For the hexagonal lattice, certain special topological arrangements of the resource in two dimensions, such as rings, lines and clusters, have a higher probability of dominance compared to a random pattern. For more complex networks, a systematic way to search for a stable and dominant strategy of resource allocation in the changing environment is found by means of a genetic algorithm. Five typical features can be summarized by means of the distribution function for the local neighborhood of friends and enemies as well as the local clustering coefficients: (1) The winner has more triangles than the loser has. (2) The winner likes to form clusters, as the winner tends to connect with other winners rather than with losers, while the loser tends to connect with winners rather than losers. (3) The distribution function of friends as well as enemies for the winner is broader than the corresponding distribution function for the loser. (4) The connectivity at which the peak of the distribution of friends for the winner occurs is larger than that of the loser, while the peak values of friends for winners are lower. (5) The connectivity at which the peak of the distribution of enemies for the winner occurs is smaller than that of the loser, while the peak values of enemies for winners are lower. These five features appear to be general, at least in the context of two-dimensional hexagonal lattices of various sizes, hierarchical lattices, Voronoi diagrams, as well as high-dimensional random networks. These general local topological properties of networks are relevant to strategists aiming at dominance in evolving patterns when the interaction between the agents is local.
Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J
2013-09-01
Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher order terms enables the reconstruction of the true average propagator whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP), and its variants for diffusion in one and two dimensions: the return-to-the-plane probability (RTPP) and the return-to-the-axis probability (RTAP), respectively. These zero net displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators. Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue. Copyright © 2013 Elsevier Inc. All rights reserved.
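The lowest-order (Gaussian) term of the expansion is the familiar diffusion tensor, which can be fitted by linear least squares; the sketch below covers only that DTI part, not the full MAP-MRI basis, and uses synthetic signals for illustration.

```python
import numpy as np

def fit_diffusion_tensor(signals, s0, bvals, bvecs):
    """Linear least-squares fit of ln(S/S0) = -b g^T D g for a symmetric tensor D."""
    g = bvecs
    A = -bvals[:, None] * np.column_stack([
        g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
    dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.lstsq(
        A, np.log(signals / s0), rcond=None)[0]
    return np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])

# Synthetic example: a prolate tensor probed along 30 random directions
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(30, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(30, 1000.0)                                  # s/mm^2
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])                   # mm^2/s
signals = np.exp(-bvals * np.einsum("ij,jk,ik->i", bvecs, D_true, bvecs))
D_fit = fit_diffusion_tensor(signals, 1.0, bvals, bvecs)

evals = np.linalg.eigvalsh(D_fit)
md = evals.mean()                                            # mean diffusivity
fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
```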
Dey, Susmita; Sarkar, Ripon; Chatterjee, Kabita; Datta, Pallab; Barui, Ananya; Maity, Santi P
2017-04-01
Habitual smokers are known to be at higher risk for developing oral cancer, which is increasing at an alarming rate globally. Conventionally, oral cancer is associated with high mortality rates, although recent reports show improved survival outcomes with early diagnosis of the disease. An effective prediction system that can identify the probability of cancer development amongst habitual smokers is thus expected to benefit a sizable population. The present work describes a non-invasive, integrated method for early detection of cellular abnormalities based on the analysis of different cyto-morphological features of exfoliative oral epithelial cells. Differential interference contrast (DIC) microscopy provides a potential optical tool, as this mode yields a pseudo three-dimensional (3-D) image with detailed morphological and textural features obtained from noninvasive, label-free epithelial cells. For segmentation of DIC images, a gradient vector flow snake model active contour process has been adopted. To evaluate cellular abnormalities amongst habitual smokers, the selected morphological and textural features of epithelial cells are compared with those of the non-smoker (-ve control) group and clinically diagnosed pre-cancer patients (+ve control group) using a support vector machine (SVM) classifier. The accuracy of the developed SVM-based classification has been found to be 86%, with 80% sensitivity and 89% specificity, in classifying the features from the volunteers having a smoking habit. Copyright © 2017 Elsevier Ltd. All rights reserved.
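A minimal sketch of the classification step, assuming the morphological and textural descriptors have already been extracted per cell, is shown below with scikit-learn; the feature columns and labels are synthetic stand-ins, not the authors' data or exact SVM settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

# Rows = cells, columns = morphological/textural descriptors from segmented DIC
# images (synthetic stand-ins); y: 0 = non-smoker control, 1 = smoker group.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 12))
y = rng.integers(0, 2, size=120)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_validate(clf, X, y, cv=5,
                        scoring=("accuracy", "recall", "precision"))
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
# recall corresponds to sensitivity; specificity can be derived from a
# confusion matrix if needed.
```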
Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2014-11-01
For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin Xiangguo; Chen Shu; Guan Xiwen
2011-07-15
We investigate quantum criticality and universal scaling of strongly attractive Fermi gases confined in a one-dimensional harmonic trap. We demonstrate from the power-law scaling of the thermodynamic properties that current experiments on this system are capable of measuring universal features at quantum criticality, such as universal scaling and Tomonaga-Luttinger liquid physics. The results also provide insights on recent measurements of key features of the phase diagram of a spin-imbalanced atomic Fermi gas [Y. Liao et al., Nature (London) 467, 567 (2010)] and point to further study of quantum critical phenomena in ultracold atomic Fermi gases.
NASA Astrophysics Data System (ADS)
Lytle, Justin Conrad
This dissertation details my study of three-dimensionally ordered macroporous (3DOM) materials, which were prepared using polymer latex colloidal crystal templates. These solids are composed of close-packed and three-dimensionally interconnected spherical macropores surrounded by nanoscale solid wall skeletons. This unique architecture offers relatively large surface areas that are accessible by interconnected macropores, making these materials important for innovative catalysis, sensing, and separations applications. In addition, the three-dimensionally alternating dielectric structure can establish photonic stop bands that control the flow of light analogously to the restraint of electronic conduction by electronic bandgaps. Many potential applications would benefit from reducing device feature sizes from the bulk into the nanoscale regime. However, some compositions are more easily prepared as nanostructured materials than others. Therefore, it would be immensely important to develop synthetic methods of transforming solids that are more easily formed with nanoarchitectural features into compositions that are not. Pseudomorphic transformation reactions may be one solution to this problem, since they are capable of altering chemical composition while maintaining shape and structural morphology. Several compositions of inverse opal and nanostructured preforms were investigated in this work to study the effects of vapor-phase and solution-phase conversion reactions on materials with feature sizes ranging from a few nm to tens of μm. 3DOM SiO2 and WO3, nanostructured Ni, and colloidal silica sphere preforms were studied to investigate the effects of preform chemistries, feature sizes and shapes, processing temperatures, and reagent ratios on overall pseudomorphic structural retention. Power storage and fuel cell devices based on nanostructured electrodes are a major example of how reducing device component feature sizes can greatly benefit applications. Bulk electrode geometries have diffusion-limited kinetics and relatively low energy and power densities. Nanostructured electrodes offer extremely short ion diffusion pathlengths and relatively numerous reaction sites. 3DOM SnO2 thin films, 3DOM Li4Ti5O12 powders, and 3DOM carbon monoliths have been fabricated and characterized in this work as Li-ion anode materials, with 3DOM carbon exhibiting an enormous rate capability beyond similarly prepared, but non-templated, bulk carbon. Furthermore, a novel battery design that is three-dimensionally interpenetrated on the nanoscale was prepared and evaluated in this research.
Pair creation of higher dimensional black holes on a de Sitter background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, Oscar J.C.; Lemos, Jose P.S.; CENTRA, Departamento de Fisica, F.C.T., Universidade do Algarve, Campus de Gambelas, 8005-139 Faro
We study in detail the quantum process in which a pair of black holes is created in a higher D-dimensional de Sitter (dS) background. The energy to materialize and accelerate the pair comes from the positive cosmological constant. The instantons that describe the process are obtained from the Tangherlini black hole solutions. Our pair creation rates reduce to the pair creation rate for Reissner-Nordström-dS solutions when D=4. Pair creation of black holes in the dS background becomes less suppressed when the dimension of the spacetime increases. The dS space is the only background in which we can discuss analytically the pair creation process of higher dimensional black holes, since the C-metric and the Ernst solutions, which describe, respectively, a pair accelerated by a string and by an electromagnetic field, are not known yet in a higher dimensional spacetime.
Cubic Interactions of Massless Bosonic Fields in Three Dimensions
NASA Astrophysics Data System (ADS)
Mkrtchyan, Karapet
2018-06-01
In this Letter, we take the first step towards construction of nontrivial Lagrangian theories of higher-spin gravity in a metriclike formulation in three dimensions. The crucial feature of a metriclike formulation is that it is known how to incorporate matter interactions into the description. We derive a complete classification of cubic interactions for arbitrary triples s1, s2, s3 of massless fields, which are the building blocks of any interacting theory with massless higher spins. We find that there is, at most, one vertex for any given triple of spins in 3D (with one exception, s1=s2=s3=1, which allows for two vertices). Remarkably, there are no vertices for spin values that do not respect strict triangle inequalities and contain at least two spins greater than one. This translates into selection rules for three-point functions of higher-spin conserved currents in two-dimensional conformal field theory. Furthermore, universal coupling to gravity for any spin is derived. Last, we argue that this classification persists in arbitrary Einstein backgrounds.
Recurrent flow analysis in spatiotemporally chaotic 2-dimensional Kolmogorov flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Dan, E-mail: dan.lucas@ucd.ie; Kerswell, Rich R., E-mail: r.r.kerswell@bris.ac.uk
2015-04-15
Motivated by recent success in the dynamical systems approach to transitional flow, we study the efficiency and effectiveness of extracting simple invariant sets (recurrent flows) directly from chaotic/turbulent flows and the potential of these sets for providing predictions of certain statistics of the flow. Two-dimensional Kolmogorov flow (the 2D Navier-Stokes equations with a sinusoidal body force) is studied both over a square [0, 2π]² torus and a rectangular torus extended in the forcing direction. In the former case, an order of magnitude more recurrent flows are found than previously [G. J. Chandler and R. R. Kerswell, “Invariant recurrent solutions embedded in a turbulent two-dimensional Kolmogorov flow,” J. Fluid Mech. 722, 554–595 (2013)] and shown to give improved predictions for the dissipation and energy pdfs of the chaos via periodic orbit theory. Analysis of the recurrent flows shows that the energy is largely trapped in the smallest wavenumbers through a combination of the inverse cascade process and a feature of the advective nonlinearity in 2D. Over the extended torus at low forcing amplitudes, some extracted states mimic the statistics of the spatially localised chaos present surprisingly well, recalling the findings of Kawahara and Kida [“Periodic motion embedded in plane Couette turbulence: Regeneration cycle and burst,” J. Fluid Mech. 449, 291 (2001)] in low-Reynolds-number plane Couette flow. At higher forcing amplitudes, however, success is limited, highlighting the increased dimensionality of the chaos and the need for larger data sets. Algorithmic developments to improve the extraction procedure are discussed.
Gender identity and sexual orientation in women with borderline personality disorder.
Singh, Devita; McMain, Shelley; Zucker, Kenneth J
2011-02-01
In the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, text revision (DSM-IV-TR) (and earlier editions), a disturbance in "identity" is one of the defining features of borderline personality disorder (BPD). Gender identity, a person's sense of self as a male or a female, constitutes an important aspect of identity formation, but this construct has rarely been examined in patients with BPD. In the present study, the presence of gender identity disorder or confusion was examined in women diagnosed with BPD. We used a validated dimensional measure of gender dysphoria. Recalled gender identity and gender role behavior from childhood was also assessed with a validated dimensional measure, and current sexual orientation was assessed by two self-report measures. A consecutive series of 100 clinic-referred women (mean age, 34 years) with BPD participated in the study. The women were diagnosed with BPD using the International Personality Disorder Exam-BPD Section. None of the women with BPD met the criterion for caseness on the dimensional measure of gender dysphoria. Women who self-reported either a bisexual or a homosexual sexual orientation had a significantly higher score on the dimensional measure of gender dysphoria than the women who self-reported a heterosexual sexual orientation, and they also recalled significantly more cross-gender behavior during childhood. Results were compared with a previous study on a diagnostically heterogeneous group of women with other clinical problems. The importance of psychosexual assessment in the clinical evaluation of patients with BPD is discussed. © 2010 International Society for Sexual Medicine.
NASA Astrophysics Data System (ADS)
Brandl, Miriam B.; Beck, Dominik; Pham, Tuan D.
2011-06-01
The high dimensionality of image-based datasets can be a drawback for classification accuracy. In this study, we propose the application of fuzzy c-means clustering, cluster validity indices and the notion of a joint-feature-clustering matrix to find redundancies of image features. The introduced matrix indicates how frequently features are grouped in a mutual cluster. The resulting information can be used to find data-derived feature prototypes with a common biological meaning, reduce data storage as well as computation times and improve the classification accuracy.
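A compact sketch of the idea, under the assumption that each feature is represented by its profile across samples and clustered repeatedly with fuzzy c-means, is given below; the fuzzifier m, cluster count and run count are illustrative choices, and the co-occurrence counting is one plausible reading of the joint-feature-clustering matrix.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means; returns the membership matrix U (n_points x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

def joint_feature_clustering_matrix(F, n_clusters=5, n_runs=20):
    """F: (n_features, n_samples), one row per feature.  Counts how often two
    features end up in the same (hard-assigned) cluster over repeated runs."""
    J = np.zeros((len(F), len(F)))
    for r in range(n_runs):
        labels = fuzzy_cmeans(F, n_clusters, seed=r).argmax(axis=1)
        J += labels[:, None] == labels[None, :]
    return J / n_runs

# Feature pairs with entries near 1 are candidates for redundancy removal.
F = np.random.default_rng(0).normal(size=(40, 200))   # 40 features x 200 samples
J = joint_feature_clustering_matrix(F)
```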
A Glimpse in the Third Dimension for Electrical Resistivity Profiles
NASA Astrophysics Data System (ADS)
Robbins, A. R.; Plattner, A.
2017-12-01
We present an electrode layout strategy designed to enhance the popular two-dimensional electrical resistivity profile. Offsetting electrodes from the traditional linear layout and using 3-D inversion software allows for mapping the three-dimensional electrical resistivity close to the profile plane. We established a series of synthetic tests using simulated data generated from chosen resistivity distributions with a three-dimensional target feature. All inversions and simulations were conducted using freely-available ERT software, BERT and E4D. Synthetic results demonstrate the effectiveness of the offset electrode approach, whereas the linear layout failed to resolve the three-dimensional character of our subsurface feature. A field survey using trench backfill as a known resistivity contrast confirmed our synthetic tests. As we show, 3-D inversions of linear layouts for starting models without previously known structure are futile ventures because they generate symmetric resistivity solutions with respect to the profile plane. This is a consequence of the layout's inherent symmetrical sensitivity patterns. An offset electrode layout is not subject to the same limitation, as the collective measurements do not share a common sensitivity symmetry. For practitioners, this approach presents a low-cost improvement of a traditional geophysical method which is simple to use yet may provide critical information about the three dimensional structure of the subsurface close to the profile.
Shankar, Hariharan; Reddy, Sapna
2012-07-01
Ultrasound imaging has gained acceptance in pain management interventions. Features of myofascial pain syndrome have been explored using ultrasound imaging and elastography. There is a paucity of reports showing the benefit clinically. This report provides three-dimensional features of taut bands and highlights the advantages of using two-dimensional ultrasound imaging to improve targeting of taut bands in deeper locations. A fifty-eight-year-old man with pain and decreased range of motion of the right shoulder was referred for further management of pain above the scapula after having failed conservative management for myofascial pain syndrome. Three-dimensional ultrasound images provided evidence of aberrancy in the architecture of the muscle fascicles around the taut bands compared to the adjacent normal muscle tissue during serial sectioning of the accrued image. On two-dimensional ultrasound imaging over the palpated taut band, areas of hyperechogenicity were visualized in the trapezius and supraspinatus muscles. Subsequently, the patient received ultrasound-guided real-time lidocaine injections to the trigger points with successful resolution of symptoms. This is a successful demonstration of the utility of ultrasound imaging of taut bands in the management of myofascial pain syndrome. The utility of this imaging modality in myofascial pain syndrome requires further clinical validation. Wiley Periodicals, Inc.
Higher-order nonclassicalities of finite dimensional coherent states: A comparative study
NASA Astrophysics Data System (ADS)
Alam, Nasir; Verma, Amit; Pathak, Anirban
2018-07-01
Conventional coherent states (CSs) are defined in various ways. For example, a CS is defined as an infinite Poissonian expansion in Fock states, as a displaced vacuum state, or as an eigenket of the annihilation operator. In the infinite dimensional Hilbert space, these definitions are equivalent. However, these definitions are not equivalent for finite dimensional systems. In this work, we present a comparative description of the lower- and higher-order nonclassical properties of the finite dimensional CSs which are also referred to as qudit CSs (QCSs). For the comparison, nonclassical properties of two types of QCSs are used: (i) nonlinear QCS produced by applying a truncated displacement operator on the vacuum and (ii) linear QCS produced by the Poissonian expansion in Fock states of the CS truncated at the (d - 1)-photon Fock state. The comparison is performed using a set of nonclassicality witnesses (e.g., higher order antibunching, higher order sub-Poissonian statistics, higher order squeezing, Agarwal-Tara parameter, Klyshko's criterion) and a set of quantitative measures of nonclassicality (e.g., negativity potential, concurrence potential and anticlassicality). The higher order nonclassicality witnesses have been found to reveal the existence of higher order nonclassical properties of QCSs for the first time.
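As a small numerical illustration of how truncation induces nonclassicality, the snippet below builds the 'linear' QCS (Poissonian expansion truncated at the (d - 1)-photon state) and evaluates the Mandel Q parameter, a lowest-order sub-Poissonian indicator rather than the higher-order witnesses used in the paper.

```python
import numpy as np
from math import factorial

def linear_qcs(alpha, d):
    """'Linear' qudit coherent state: Poissonian expansion truncated at the
    (d - 1)-photon Fock state and renormalised.  Returns the Fock amplitudes."""
    n = np.arange(d)
    c = alpha ** n / np.sqrt([factorial(int(k)) for k in n])
    return c / np.linalg.norm(c)

def mandel_q(amplitudes):
    """Mandel Q parameter; Q < 0 signals sub-Poissonian (nonclassical) statistics."""
    p = np.abs(amplitudes) ** 2
    n = np.arange(len(p))
    mean, mean2 = np.sum(n * p), np.sum(n ** 2 * p)
    return (mean2 - mean ** 2 - mean) / mean

for d in (2, 4, 8, 16):
    print(d, round(mandel_q(linear_qcs(1.0, d)), 4))   # approaches 0 as d grows
```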
A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations
Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia
2015-01-01
Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
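A rough sketch of the pipeline is given below: character n-gram features per term, a non-negative projection, a PLSR mapping from source to target space, and cosine-similarity ranking of candidates. NMF is used here as a simple non-negative stand-in for the paper's PVP (which the authors report outperforms NMF), and the term lists are toy examples.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import cosine_similarity

# Toy parallel term lists (hypothetical); real lexicons would be far larger.
en_terms = ["hepatitis", "nephritis", "dermatitis", "cardiology", "neurology"]
fr_terms = ["hepatite", "nephrite", "dermatite", "cardiologie", "neurologie"]

vec_en = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
vec_fr = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X_en, X_fr = vec_en.fit_transform(en_terms), vec_fr.fit_transform(fr_terms)

# NMF as a rough non-negative projection (stand-in for the paper's PVP).
k = 4
nmf_en = NMF(n_components=k, init="nndsvda", max_iter=500).fit(X_en)
nmf_fr = NMF(n_components=k, init="nndsvda", max_iter=500).fit(X_fr)
Z_en, Z_fr = nmf_en.transform(X_en), nmf_fr.transform(X_fr)

# Learn the source -> target mapping with partial least squares regression.
pls = PLSRegression(n_components=3).fit(Z_en, Z_fr)

# Rank target terms for one source term by cosine similarity in target space.
z_query = pls.predict(nmf_en.transform(vec_en.transform(["nephritis"])))
ranking = cosine_similarity(z_query, Z_fr)[0].argsort()[::-1]
print([fr_terms[i] for i in ranking[:3]])
```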
Computation of viscous incompressible flows
NASA Technical Reports Server (NTRS)
Kwak, Dochan
1989-01-01
Incompressible Navier-Stokes solution methods and their applications to three-dimensional flows are discussed. A brief review of existing methods is given followed by a detailed description of recent progress on development of three-dimensional generalized flow solvers. Emphasis is placed on primitive variable formulations which are most promising and flexible for general three-dimensional computations of viscous incompressible flows. Both steady- and unsteady-solution algorithms and their salient features are discussed. Finally, examples of real world applications of these flow solvers are given.
NASA Technical Reports Server (NTRS)
Anderson, B. H.
1983-01-01
A broad program to develop advanced, reliable, and user oriented three-dimensional viscous design techniques for supersonic inlet systems, and encourage their transfer into the general user community is discussed. Features of the program include: (1) develop effective methods of computing three-dimensional flows within a zonal modeling methodology; (2) ensure reasonable agreement between said analysis and selective sets of benchmark validation data; (3) develop user orientation into said analysis; and (4) explore and develop advanced numerical methodology.
3D Electromagnetic Imaging of Fluid Distribution Below the Kii Peninsula, SW Japan Forearc
NASA Astrophysics Data System (ADS)
Kinoshita, Y.; Ogawa, Y.; Ichiki, M.; Yamaguchi, S.; Fujita, K.; Umeda, K.; Asamori, K.
2017-12-01
Although the Kii Peninsula is located in the forearc of southwest Japan, it has high-temperature hot springs, and fluids from the mantle are inferred from the isotopic ratio of helium. Non-volcanic tremors underneath the Kii Peninsula suggest rising fluids from the slab. Previously, in the southern part of the Kii Peninsula, wide-band magnetotelluric measurements were carried out (Fujita et al., 1997; Umeda et al., 2004). These studies could image the existence of conductivity anomalies in the shallow and deep crust; however, they used two-dimensional inversions and three-dimensionality was not fully taken into consideration. As part of the "Crustal Dynamics" project, we have measured 20 more stations so that the wide-band MT stations as a whole constitute grids for three-dimensional modeling of the area. In total we have 51 wide-band magnetotelluric sites. Preliminary 3D inverse modeling showed the following features. (1) High resistivity in the eastern Kii Peninsula at depths of 5-40 km. This may imply a consolidated magma body of Kumano Acidic rocks underlain by the resistive Philippine Sea Plate, which subducts with a low dip angle. (2) The northwestern part of the Kii Peninsula has shallow low resistivity in the upper crust, around which high seismicity is observed. (3) The northwestern part of the survey area has a deeper conductor. This implies a mantle wedge where the Philippine Sea subduction has a higher dip angle.
NASA Astrophysics Data System (ADS)
Yan, Xiaoqing; Xue, Chao; Yang, Bolun; Yang, Guidong
2017-02-01
Novel three-dimensionally ordered macroporous (3DOM) Fe3+-doped TiO2 photocatalysts were prepared using a colloidal crystal template method with low-cost raw materials including ferric trichloride, isopropanol, tetrabutyl titanate and polymethyl methacrylate. The as-prepared 3DOM Fe3+-doped TiO2 photocatalysts were characterized by various analytical techniques. TEM and SEM results showed that the obtained photocatalysts possess a well-ordered macroporous structure in three dimensional orientations. XPS and EDX analyses proved that Fe3+ ions have been incorporated into the TiO2 lattice and that the doped Fe3+ ions can act as electron acceptor/donor centers that significantly enhance electron transfer from the bulk to the surface of TiO2, so that more electrons can take part in the oxygen reduction process, thereby decreasing the recombination rate of photogenerated charges. Meanwhile, the 3DOM architecture, featuring interfacial chemical-reaction active sites and optical-absorption active sites, is remarkably favorable for reactant transfer and light trapping in the photoreaction process. As a result, the 3DOM Fe3+-doped TiO2 photocatalysts show considerably higher photocatalytic activity for the decomposition of Rhodamine B (RhB) and the generation of hydrogen under visible light irradiation, due to the synergistic effects of the open, interconnected macroporous network and metal ion doping.
Zernike phase contrast cryo-electron tomography of whole bacterial cells
Guerrero-Ferreira, Ricardo C.; Wright, Elizabeth R.
2014-01-01
Cryo-electron tomography (cryo-ET) provides three-dimensional (3D) structural information of bacteria preserved in a native, frozen-hydrated state. The typical low contrast of tilt-series images, a result of both the need for a low electron dose and the use of conventional defocus phase-contrast imaging, is a challenge for high-quality tomograms. We show that Zernike phase-contrast imaging allows the electron dose to be reduced. This limits movement of gold fiducials during the tilt series, which leads to better alignment and a higher-resolution reconstruction. Contrast is also enhanced, improving visibility of weak features. The reduced electron dose also means that more images at more tilt angles could be recorded, further increasing resolution. PMID:24075950
NASA Astrophysics Data System (ADS)
González, Angélica; Linares, Román; Maceda, Marco; Sánchez-Santos, Oscar
2018-04-01
We analyze noncommutative deformations of a higher dimensional anti-de Sitter-Einstein-Born-Infeld black hole. Two models based on noncommutative inspired distributions of mass and charge are discussed and their thermodynamical properties such as the equation of state are explicitly calculated. In the (3 + 1)-dimensional case the Gibbs energy function of each model is used to discuss the presence of phase transitions.
Comparisons between thermodynamic and one-dimensional combustion models of spark-ignition engines
NASA Technical Reports Server (NTRS)
Ramos, J. I.
1986-01-01
Results from a one-dimensional combustion model employing a constant eddy diffusivity and a one-step chemical reaction are compared with those of one-zone and two-zone thermodynamic models to study the flame propagation in a spark-ignition engine. One-dimensional model predictions are found to be very sensitive to the eddy diffusivity and reaction rate data. The average mixing temperature found using the one-zone thermodynamic model is higher than those of the two-zone and one-dimensional models during the compression stroke, and that of the one-dimensional model is higher than those predicted by both thermodynamic models during the expansion stroke. The one-dimensional model is shown to predict an accelerating flame even when the front approaches the cold cylinder wall.
Subsurface structures of buried features in the lunar Procellarum region
NASA Astrophysics Data System (ADS)
Wang, Wenrui; Heki, Kosuke
2017-07-01
The Gravity Recovery and Interior Laboratory (GRAIL) mission revealed a number of features showing strong gravity anomalies without prominent topographic signatures in the lunar Procellarum region. These features, located in different geologic units, are considered to have complex subsurface structures reflecting different evolution processes. By using the GRAIL level-1 data, we estimated the free-air and Bouguer gravity anomalies in several selected regions including such intriguing features. With the three-dimensional inversion technique, we recovered subsurface density structures in these regions.
ERIC Educational Resources Information Center
Saorin, José Luis; Carbonell-Carrera, Carlos; Cantero, Jorge de la Torre; Meier, Cecile; Aleman, Drago Diaz
2017-01-01
Spatial interpretation features as a skill to acquire in the educational curricula. The visualization and interpretation of three-dimensional objects in tactile devices and the possibility of digital manufacturing with 3D printers, offers an opportunity to include replicas of sculptures in teaching and, thus, facilitate the 3D interpretation of…
ERIC Educational Resources Information Center
Ince, Elif; Kirbaslar, Fatma Gulay; Yolcu, Ergun; Aslan, Ayse Esra; Kayacan, Zeynep Cigdem; Alkan Olsson, Johanna; Akbasli, Ayse Ceylan; Aytekin, Mesut; Bauer, Thomas; Charalambis, Dimitris; Gunes, Zeliha Ozsoy; Kandemir, Ceyhan; Sari, Umit; Turkoglu, Suleyman; Yaman, Yavuz; Yolcu, Ozgu
2014-01-01
The purpose of this study is to develop a 3-dimensional interactive multi-user and multi-admin IUVIRLAB featuring active learning methods and techniques for university students and to introduce the Virtual Laboratory of Istanbul University and to show effects of IUVIRLAB on students' attitudes on communication skills and IUVIRLAB. Although there…
NASA Astrophysics Data System (ADS)
Weller, Andrew F.; Harris, Anthony J.; Ware, J. Andrew; Jarvis, Paul S.
2006-11-01
The classification of sedimentary organic matter (OM) images can be improved by determining the saliency of image analysis (IA) features measured from them. Knowing the saliency of IA feature measurements means that only the most significant discriminating features need be used in the classification process. This is an important consideration for classification techniques such as artificial neural networks (ANNs), where too many features can lead to the 'curse of dimensionality'. The classification scheme adopted in this work is a hybrid of morphologically and texturally descriptive features from previous manual classification schemes. Some of these descriptive features are assigned to IA features, along with several others built into the IA software (Halcon) to ensure that a valid cross-section is available. After an image is captured and segmented, a total of 194 features are measured for each particle. To reduce this number to a more manageable magnitude, the SPSS AnswerTree Exhaustive CHAID (χ² automatic interaction detector) classification tree algorithm is used to establish each measurement's saliency as a classification discriminator. In the case of continuous data as used here, the F-test is used as opposed to the published algorithm. The F-test checks various statistical hypotheses about the variance of groups of IA feature measurements obtained from the particles to be classified. The aim is to reduce the number of features required to perform the classification without reducing its accuracy. In the best-case scenario, 194 inputs are reduced to 8, with a subsequent multi-layer back-propagation ANN recognition rate of 98.65%. This paper demonstrates the ability of the algorithm to reduce noise, help overcome the curse of dimensionality, and facilitate an understanding of the saliency of IA features as discriminators for sedimentary OM classification.
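A simplified version of the "select a few salient features, then classify with an ANN" workflow can be written with scikit-learn, using an F-test-based univariate selector in place of the CHAID tree; the data below are synthetic and the choice of k = 8 mirrors the best case reported above.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: one row per OM particle with 194 image-analysis measurements (synthetic);
# y: particle classes.  SelectKBest(f_classif) is a simpler F-test-based
# stand-in for the paper's CHAID-based saliency ranking.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 194))
y = rng.integers(0, 4, size=300)

model = make_pipeline(StandardScaler(),
                      SelectKBest(score_func=f_classif, k=8),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())
```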
Prediction of enhancer-promoter interactions via natural language processing.
Zeng, Wanwen; Wu, Mengmeng; Jiang, Rui
2018-05-09
Precise identification of three-dimensional genome organization, especially enhancer-promoter interactions (EPIs), is important to deciphering gene regulation, cell differentiation and disease mechanisms. Currently, it is a challenging task to distinguish true interactions from other nearby non-interacting ones since the power of traditional experimental methods is limited due to low resolution or low throughput. We propose a novel computational framework EP2vec to assay three-dimensional genomic interactions. We first extract sequence embedding features, defined as fixed-length vector representations learned from variable-length sequences using an unsupervised deep learning method in natural language processing. Then, we train a classifier to predict EPIs using the learned representations in a supervised way. Experimental results demonstrate that EP2vec obtains F1 scores ranging from 0.841 to 0.933 on different datasets, which outperforms existing methods. We prove the robustness of sequence embedding features by carrying out sensitivity analysis. Besides, we identify motifs that represent cell line-specific information through analysis of the learned sequence embedding features by adopting an attention mechanism. Last, we show that even superior performance, with F1 scores of 0.889 to 0.940, can be achieved by combining sequence embedding features and experimental features. EP2vec sheds light on feature extraction for DNA sequences of arbitrary lengths and provides a powerful approach for EPI identification.
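A compressed sketch of the idea, assuming toy sequences and labels, is shown below: sequences are tokenized into overlapping k-mers, embedded with gensim's Doc2Vec (an unsupervised paragraph-vector model), and fed to a boosted-tree classifier. The paper embeds enhancers and promoters separately before combining them; here they are pooled into one document purely for brevity.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.ensemble import GradientBoostingClassifier

def kmer_tokens(seq, k=6):
    """Split a DNA sequence into overlapping k-mer 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy enhancer/promoter pairs (hypothetical sequences); label 1 = interacting.
pairs = [("ACGTACGTGGCTAGCTAACGT", "TTGACGTAACGGCTAGCCATG", 1),
         ("GGGCCCTTTAAACGCGCATAT", "ATATATCGCGCGTTAACGGCC", 0)] * 50

docs = [TaggedDocument(kmer_tokens(e) + kmer_tokens(p), [i])
        for i, (e, p, _) in enumerate(pairs)]
model = Doc2Vec(vector_size=32, window=5, min_count=1, epochs=40, seed=0)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

X = np.array([model.infer_vector(d.words) for d in docs])
y = np.array([label for _, _, label in pairs])
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
```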
Kumar Myakalwar, Ashwin; Spegazzini, Nicolas; Zhang, Chi; Kumar Anubham, Siva; Dasari, Ramachandra R; Barman, Ishan; Kumar Gundawar, Manoj
2015-08-19
Despite its intrinsic advantages, translation of laser induced breakdown spectroscopy for material identification has been often impeded by the lack of robustness of developed classification models, often due to the presence of spurious correlations. While a number of classifiers exhibiting high discriminatory power have been reported, efforts in establishing the subset of relevant spectral features that enable a fundamental interpretation of the segmentation capability and avoid the 'curse of dimensionality' have been lacking. Using LIBS data acquired from a set of secondary explosives, we investigate judicious feature selection approaches and architect two different chemometric classifiers, based on feature selection through prerequisite knowledge of the sample composition and a genetic algorithm, respectively. While the full spectral input results in a classification rate of ca. 92%, selection of only the carbon-to-hydrogen spectral window results in near identical performance. Importantly, the genetic algorithm-derived classifier shows a statistically significant improvement to ca. 94% accuracy for prospective classification, even though the number of features used is an order of magnitude smaller. Our findings demonstrate the impact of rigorous feature selection in LIBS and also hint at the feasibility of using a discrete filter based detector, thereby enabling a cheaper and more compact system more amenable to field operations.
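A minimal genetic-algorithm wrapper for spectral-channel selection is sketched below with synthetic spectra; the population size, mutation rate and the LDA fitness classifier are illustrative choices, not the settings used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))        # 80 LIBS spectra x 200 channels (synthetic)
y = rng.integers(0, 4, size=80)       # 4 explosive classes (synthetic labels)

def fitness(mask):
    """Cross-validated accuracy of an LDA classifier on the selected channels."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(LinearDiscriminantAnalysis(),
                           X[:, mask.astype(bool)], y, cv=3).mean()

# Minimal genetic algorithm over binary channel-selection masks.
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                # keep the fittest half
    kids = []
    while len(kids) < 10:
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
        flip = rng.random(X.shape[1]) < 0.01               # bit-flip mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(kids)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("channels kept:", int(best.sum()), "accuracy:", round(fitness(best), 3))
```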
Slow feature analysis: unsupervised learning of invariances.
Wiskott, Laurenz; Sejnowski, Terrence J
2002-04-01
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
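A bare-bones quadratic SFA can be written directly from that description: expand the signal, whiten it, and keep the directions whose time derivative has the smallest variance. The example at the end recovers a slow sine hidden inside a fast nonlinear mixture; all parameter choices are illustrative.

```python
import numpy as np

def slow_feature_analysis(x, n_out=1):
    """Minimal quadratic SFA: expand the signal, whiten it, then keep the
    directions whose time derivative has the smallest variance."""
    n = x.shape[1]
    expanded = [x] + [x[:, [i]] * x[:, [j]] for i in range(n) for j in range(i, n)]
    z = np.hstack(expanded)
    z -= z.mean(axis=0)

    u, s, _ = np.linalg.svd(z, full_matrices=False)        # whitening via SVD
    w = np.sqrt(len(z) - 1) * u[:, s > 1e-8 * s[0]]        # unit-covariance signal

    dw = np.diff(w, axis=0)                                # time derivative
    _, _, vdt = np.linalg.svd(dw, full_matrices=False)
    return w @ vdt[-n_out:].T                              # slowest directions

# Classic toy example: recover sin(t) hidden inside a fast nonlinear mixture
t = np.linspace(0, 2 * np.pi, 2000)
slow, fast = np.sin(t), np.cos(11 * t)
x = np.column_stack([slow + fast ** 2, fast])
y = slow_feature_analysis(x, n_out=1)
print(abs(np.corrcoef(y[:, 0], slow)[0, 1]))               # close to 1
```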
Computing a Comprehensible Model for Spam Filtering
NASA Astrophysics Data System (ADS)
Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael
In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task involves learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature-space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
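A related baseline, boosted shallow decision trees over a bag-of-words representation, can be set up in a few lines with scikit-learn (the `estimator=` keyword assumes scikit-learn 1.2 or later); this is not the DTB model itself, only a conventional boosting stand-in on a toy corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Tiny toy corpus; 1 = spam, 0 = ham.
emails = ["win money now", "cheap meds online", "meeting at noon",
          "project report attached", "free prize claim now", "lunch tomorrow?"]
labels = [1, 1, 0, 0, 1, 0]

# Boosted shallow decision trees over a high-dimensional bag-of-words space.
model = make_pipeline(
    CountVectorizer(binary=True),
    AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                       n_estimators=50, random_state=0))
model.fit(emails, labels)
print(model.predict(["claim your free prize now", "see you at the meeting"]))
```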
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness.
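The composite-kernel step can be sketched as a weighted sum of per-block kernels fed to a precomputed-kernel classifier; the sketch below uses an SVM in place of the extreme learning machine, random feature blocks instead of PCANet outputs, and illustrative kernel weights.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# F1, F2: two feature blocks per OCT volume (stand-ins for PCANet features);
# y: volume-level class labels (synthetic).
rng = np.random.default_rng(0)
F1, F2 = rng.normal(size=(100, 50)), rng.normal(size=(100, 30))
y = rng.integers(0, 3, size=100)
idx_tr, idx_te = train_test_split(np.arange(100), test_size=0.3, random_state=0)

def composite_kernel(A_blocks, B_blocks, weights=(0.5, 0.5), gamma=0.01):
    """Weighted sum of per-block RBF kernels (a simple composite kernel)."""
    return sum(w * rbf_kernel(a, b, gamma=gamma)
               for w, a, b in zip(weights, A_blocks, B_blocks))

K_tr = composite_kernel([F1[idx_tr], F2[idx_tr]], [F1[idx_tr], F2[idx_tr]])
K_te = composite_kernel([F1[idx_te], F2[idx_te]], [F1[idx_tr], F2[idx_tr]])

clf = SVC(kernel="precomputed").fit(K_tr, y[idx_tr])
print("accuracy:", (clf.predict(K_te) == y[idx_te]).mean())
```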
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on the chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data were first de-noised and normalized; secondly, the differences and clustering analysis of the spectra at different time points were calculated; then the purity of the whole chromatographic peak was analysed and the region was sought out in which the spectra at different time points were stable. The feature spectra were extracted from this spectrum-stable region as the basic foundation. The nonnegative least-squares method was chosen to separate the overlapped peaks and obtain the flow (elution) curve based on the feature spectra. The resolved three-dimensional chromatographic-spectrum peaks could then be obtained by matrix operations combining the feature spectra with the flow curve. The results showed that this method could separate the overlapped peaks.
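The non-negative least-squares step maps directly onto scipy.optimize.nnls: each time point's spectrum is modelled as a non-negative combination of the extracted feature spectra, and the resulting coefficients trace out the flow (elution) curves. The example below uses synthetic Gaussian peaks and spectra, not real detector data.

```python
import numpy as np
from scipy.optimize import nnls

def resolve_overlapped_peaks(D, S):
    """D: data matrix (time points x wavelengths); S: feature spectra
    (components x wavelengths).  Returns flow (elution) curves C with D ≈ C @ S."""
    return np.array([nnls(S.T, d)[0] for d in D])

# Synthetic two-component overlap as a quick check
t = np.linspace(0, 1, 200)
C_true = np.column_stack([np.exp(-((t - 0.45) / 0.07) ** 2),
                          np.exp(-((t - 0.55) / 0.07) ** 2)])
wl = np.linspace(0, 1, 60)
S = np.vstack([np.exp(-((wl - 0.3) / 0.1) ** 2),
               np.exp(-((wl - 0.6) / 0.1) ** 2)])
D = C_true @ S + 0.001 * np.random.default_rng(0).normal(size=(200, 60))
C = resolve_overlapped_peaks(D, S)        # recovered elution profiles
```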
NASA Astrophysics Data System (ADS)
Prudnikov, V. V.; Prudnikov, P. V.; Mamonova, M. V.
2017-11-01
This paper reviews features in critical behavior of far-from-equilibrium macroscopic systems and presents current methods of describing them by referring to some model statistical systems such as the three-dimensional Ising model and the two-dimensional XY model. The paper examines the critical relaxation of homogeneous and structurally disordered systems subjected to abnormally strong fluctuation effects involved in ordering processes in solids at second-order phase transitions. Interest in such systems is due to the aging properties and fluctuation-dissipation theorem violations predicted for and observed in systems slowly evolving from a nonequilibrium initial state. It is shown that these features of nonequilibrium behavior show up in the magnetic properties of magnetic superstructures consisting of alternating nanoscale-thick magnetic and nonmagnetic layers and can be observed not only near the film’s critical ferromagnetic ordering temperature Tc, but also over the wide temperature range T ⩽ Tc.
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in the images are extracted using the Canny edge detector and the Hough transform, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the combined reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
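The line-extraction step can be reproduced with OpenCV's Canny detector and a Hough transform; the probabilistic variant HoughLinesP is used below for compact output, the file name is hypothetical, and the thresholds are illustrative.

```python
import cv2
import numpy as np

# Extract strong straight lines from a facade image for comparison with the
# model edges derived from the laser-point planes.
img = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
assert img is not None, "replace facade.jpg with an actual facade image"

edges = cv2.Canny(img, 50, 150, apertureSize=3)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    print((x1, y1), "->", (x2, y2))                    # candidate facade edges
```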
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
NASA Astrophysics Data System (ADS)
Davis, Benjamin L.; Berrier, J. C.; Shields, D. W.; Kennefick, J.; Kennefick, D.; Seigar, M. S.; Lacy, C. H. S.; Puerari, I.
2012-01-01
A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing Two-Dimensional Fast Fourier Transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow the precise comparison of spiral galaxy evolution to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques. The authors gratefully acknowledge support for this work from NASA Grant NNX08AW03A.
Many-body effects and excitonic features in 2D biphenylene carbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lüder, Johann, E-mail: johann.luder@physics.uu.se; Puglia, Carla; Eriksson, Olle
2016-01-14
The remarkable excitonic effects in low dimensional materials in connection to large binding energies of excitons are of great importance for research and technological applications such as in solar energy and quantum information processing as well as for fundamental investigations. In this study, the unique electronic and excitonic properties of the two-dimensional carbon network biphenylene carbon were investigated with the GW approach and the Bethe-Salpeter equation, accounting for electron correlation effects and electron-hole interactions, respectively. Biphenylene carbon exhibits characteristic features including bright and dark excitons populating the optical gap of 0.52 eV and exciton binding energies of 530 meV as well as a technologically relevant intrinsic band gap of 1.05 eV. Biphenylene carbon’s excitonic features, possibly tuned, suggest possible applications in the field of solar energy and quantum information technology in the future.
Approximation of Optimal Infinite Dimensional Compensators for Flexible Structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.; Adamian, A.; Jabbari, F.
1985-01-01
The infinite dimensional compensator for a large class of flexible structures, modeled as distributed systems are discussed, as well as an approximation scheme for designing finite dimensional compensators to approximate the infinite dimensional compensator. The approximation scheme is applied to develop a compensator for a space antenna model based on wrap-rib antennas being built currently. While the present model has been simplified, it retains the salient features of rigid body modes and several distributed components of different characteristics. The control and estimator gains are represented by functional gains, which provide graphical representations of the control and estimator laws. These functional gains also indicate the convergence of the finite dimensional compensators and show which modes the optimal compensator ignores.
2012-01-01
Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
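A rough sketch of the consensus idea, under simplifying assumptions (PCA base embeddings on random feature subsets, scale-normalised distance averaging, and metric MDS for the final embedding), is given below; it omits the paper's mean-shift sub-sampling, parallelization and the other base DR schemes such as Graph Embedding or Locally Linear Embedding.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

def consensus_embedding(X, n_dim=2, n_embeddings=20, subset_frac=0.6, seed=0):
    """Build many weak embeddings from random feature subsets, average their
    scale-normalised pairwise-distance matrices, and re-embed with metric MDS."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    D_acc = np.zeros((n, n))
    for _ in range(n_embeddings):
        cols = rng.choice(d, size=max(n_dim, int(subset_frac * d)), replace=False)
        emb = PCA(n_components=n_dim).fit_transform(X[:, cols])
        D = pairwise_distances(emb)
        D_acc += D / D.max()                  # normalise each embedding's scale
    D_acc /= n_embeddings
    return MDS(n_components=n_dim, dissimilarity="precomputed",
               random_state=seed).fit_transform(D_acc)

X = np.random.default_rng(1).normal(size=(150, 40))
embedding = consensus_embedding(X)            # (150, 2) consensus coordinates
```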
Wei, Fang; Hu, Na; Lv, Xin; Dong, Xu-Yan; Chen, Hong
2015-07-24
In this investigation, off-line comprehensive two-dimensional liquid chromatography-atmospheric pressure chemical ionization mass spectrometry using a single column has been applied for the identification and quantification of triacylglycerols in edible oils. A novel mixed-mode phenyl-hexyl chromatographic column was employed in this off-line two-dimensional separation system. The phenyl-hexyl column combines the features of traditional C18 and silver-ion columns: it provides hydrophobic interactions with triacylglycerols under acetonitrile conditions and π-π interactions under methanol conditions. Compared with traditional off-line comprehensive two-dimensional liquid chromatography employing two different chromatographic columns (C18 and silver-ion) and elution solvents comprising two phases (reversed-phase/normal-phase) for triacylglycerol separation, the novel off-line comprehensive two-dimensional liquid chromatography using a single column can be achieved by simply altering the mobile phase between acetonitrile and methanol, and exhibits much higher selectivity for the separation of triacylglycerols with high efficiency and speed. In addition, an approach based on the use of response factors with atmospheric pressure chemical ionization mass spectrometry has been developed for triacylglycerol quantification. Owing to the differences between saturated and unsaturated acyl chains, the use of response factors significantly improves the quantification of triacylglycerols. This two-dimensional liquid chromatography-mass spectrometry system was successfully applied to the profiling of triacylglycerols in soybean oils, peanut oils and lard oils. A total of 68 triacylglycerols, including 40 in soybean oils, 50 in peanut oils and 44 in lard oils, were identified and quantified. The liquid chromatography-mass spectrometry data were analyzed using principal component analysis, which enabled clear discrimination of the different edible oils. Using this two-dimensional liquid chromatography-mass spectrometry system coupled with principal component analysis, soybean oils adulterated with 5% lard oil and peanut oils adulterated with 5% soybean oil could be clearly identified. Copyright © 2015 Elsevier B.V. All rights reserved.
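The response-factor correction amounts to dividing each raw peak area by the relative response factor of that triacylglycerol before normalizing to a percent composition. A minimal sketch, with purely hypothetical peak areas and factors:

```python
# Illustrative response-factor correction for MS quantification (all values hypothetical).
# corrected_amount_i = peak_area_i / RF_i, then normalize to percent composition.
peak_areas = {"OOO": 1.8e6, "POO": 1.2e6, "LLL": 0.9e6}      # raw APCI-MS peak areas
response_factors = {"OOO": 1.00, "POO": 0.85, "LLL": 1.20}   # relative to a reference TAG

corrected = {tag: area / response_factors[tag] for tag, area in peak_areas.items()}
total = sum(corrected.values())
for tag, amount in corrected.items():
    print(f"{tag}: {100 * amount / total:.1f} %")
```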
A VLSI implementation for synthetic aperture radar image processing
NASA Technical Reports Server (NTRS)
Premkumar, A.; Purviance, J.
1990-01-01
A simple physical model for Synthetic Aperture Radar (SAR) is presented. This model explains the one-dimensional and two-dimensional nature of the received SAR signal in the range and azimuth directions, respectively. A time-domain correlator, its algorithm, and its features are explained. The correlator is ideally suited for VLSI implementation. A real-time SAR architecture using these correlators is proposed. In the proposed architecture, the received SAR data are processed using one-dimensional correlators to determine the range of a target, while two-dimensional correlators are used to determine its azimuth. The architecture uses only three different types of custom VLSI chips and a small amount of memory.
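The range-compression step described above amounts to a time-domain correlation of each received line with a replica of the transmitted chirp; azimuth compression applies the same operation along the second dimension with a Doppler replica. A minimal sketch under hypothetical chirp and target parameters:

```python
import numpy as np

# Illustrative time-domain range compression (matched filtering). All parameters
# are hypothetical and chosen only to keep the example small.
fs = 10e6                       # sample rate (Hz)
T = 10e-6                       # chirp duration (s)
B = 2e6                         # chirp bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # transmitted replica

# Received range line: two point targets at different delays plus noise.
rng = np.random.default_rng(0)
rx = np.zeros(1024, dtype=complex)
for delay, amp in ((200, 1.0), (450, 0.6)):
    rx[delay:delay + len(chirp)] += amp * chirp
rx += 0.05 * (rng.normal(size=rx.size) + 1j * rng.normal(size=rx.size))

# Time-domain correlator: np.correlate conjugates the replica, i.e. a matched filter.
compressed = np.correlate(rx, chirp, mode="same")
print("strongest return near bin:", np.argmax(np.abs(compressed)))
```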
Metrological AFMs and its application for versatile nano-dimensional metrology tasks
NASA Astrophysics Data System (ADS)
Dai, Gaoliang; Dziomba, T.; Pohlenz, F.; Danzebrink, H.-U.; Koenders, L.
2010-08-01
Traceable calibration of various micro- and nano-measurement devices is a crucial task for ensuring reliable measurements in micro- and nanotechnology. Today, metrological AFMs are widely used for traceable calibrations of nano-dimensional standards. In this paper, we describe the development of metrological atomic force microscopes at PTB. Of the three metrological AFMs described here, one is capable of measuring in a volume of 25 mm x 25 mm x 5 mm. All instruments feature interferometers, and the three-dimensional position measurements are thus directly traceable to the definition of the metre. Calibration examples on, for instance, flatness standards, step-height standards, and one- and two-dimensional gratings are demonstrated.
On the effect of memory in one-dimensional K=4 automata on networks
NASA Astrophysics Data System (ADS)
Alonso-Sanz, Ramón; Cárdenas, Juan Pablo
2008-12-01
The effect of implementing memory in the cells of one-dimensional CA, and in the nodes of various types of automata on networks with increasing degrees of random rewiring, is studied in this article, paying particular attention to the case of four inputs. As a rule, memory induces a moderation in the rate of changing nodes and in the damage spreading, although in the latter case memory turns out to be ineffective at controlling the damage as the wiring network moves away from the ordered structure characteristic of proper one-dimensional CA. This article complements previous work done in the two-dimensional context.
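The kind of memory mechanism studied here can be sketched with a binary one-dimensional CA in which each cell presents the majority of its past states to the rule rather than its current state. The sketch below uses a 3-input elementary rule for brevity rather than the paper's 4-input automata on rewired networks, and all parameters are illustrative.

```python
import numpy as np

# Minimal sketch of a 1D binary CA with majority memory: the rule acts on each
# cell's most frequent past state (ties resolved by the current state).
rng = np.random.default_rng(0)
N, steps = 200, 100
rule = 110
table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)

state = rng.integers(0, 2, size=N).astype(np.uint8)
history_sum = state.astype(int)           # running sum of each cell's past states

for t in range(1, steps + 1):
    # Majority of the t states seen so far; on a tie, fall back to the current state.
    trait = np.where(2 * history_sum > t, 1,
             np.where(2 * history_sum < t, 0, state)).astype(np.uint8)
    left, right = np.roll(trait, 1), np.roll(trait, -1)
    state = table[4 * left + 2 * trait + right]   # apply the rule to the "trait" states
    history_sum += state

print("final density of ones:", state.mean())
```

Replacing the ring neighbourhood (np.roll) with neighbour lists from a randomly rewired network would give the networked variant discussed in the article.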
Localized contourlet features in vehicle make and model recognition
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, B. S.
2009-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic Number Plate Recognition (ANPR). Several vehicle MMR systems have been proposed in the literature. In parallel, multi-resolution feature analysis techniques leading to efficient object classification algorithms have received close attention from the research community. To this effect, the Contourlet transform, which provides an efficient directional multi-resolution image representation, has recently been introduced, and attempts have already been made in the literature to use Curvelet/Contourlet transforms in vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain that increases the classification rate by up to 4% compared with the previously proposed Contourlet-based vehicle MMR approach, in which the features are non-localized and thus result in sub-optimal classification. Further, we show that the proposed algorithm achieves the increased classification accuracy of 96% at significantly lower computational complexity through the use of Two-Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction, which preserves features with high between-class variance and low within-class variance.
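The 2DLDA projection used here for dimensionality reduction can be sketched by building between-class and within-class column-scatter matrices directly from the image matrices and keeping the leading generalized eigenvectors. This is a generic (right-sided) 2DLDA sketch, not the authors' full Contourlet-based pipeline; the toy data are placeholders.

```python
import numpy as np

def two_d_lda(images, labels, k):
    """Illustrative right-sided 2DLDA: project image matrices A -> A @ W, where W
    holds the leading eigenvectors of Gw^{-1} Gb built from column scatter."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    M = images.mean(axis=0)
    h, w = M.shape
    Gb = np.zeros((w, w))      # between-class scatter
    Gw = np.zeros((w, w))      # within-class scatter
    for c in np.unique(labels):
        Ac = images[labels == c]
        Mc = Ac.mean(axis=0)
        Gb += len(Ac) * (Mc - M).T @ (Mc - M)
        for A in Ac:
            Gw += (A - Mc).T @ (A - Mc)
    # Leading generalized eigenvectors of (Gb, Gw); a small ridge keeps Gw invertible.
    evals, evecs = np.linalg.eig(np.linalg.solve(Gw + 1e-6 * np.eye(w), Gb))
    order = np.argsort(evals.real)[::-1][:k]
    return evecs[:, order].real

# Toy usage: 20 random 32x32 "images" from 2 classes, projected to 32x4 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 32, 32)) + np.repeat([0.0, 0.5], 10)[:, None, None]
y = np.repeat([0, 1], 10)
W = two_d_lda(X, y, k=4)
print((X[0] @ W).shape)  # (32, 4)
```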
Unsupervised universal steganalyzer for high-dimensional steganalytic features
NASA Astrophysics Data System (ADS)
Hou, Xiaodan; Zhang, Tao
2016-11-01
Research into developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because an unsupervised method cannot distinguish embedding distortion from the varying levels of noise caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on existing steganalytic features. First, cover images with statistical properties similar to those of a given test image are retrieved from a cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training and therefore does not suffer from model mismatch. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains universality but also exhibits superior performance when applied to high-dimensional steganalytic features.
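A minimal sketch of the SRISP-aided workflow, with placeholder feature and property definitions and a generic outlier detector (local outlier factor) standing in for whichever detector is actually used:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def srisp_aided_decision(test_feat, test_props, cover_feats, cover_props, k=50):
    """Sketch of the SRISP idea: retrieve cover images whose simple statistical
    properties resemble the test image, then run unsupervised outlier detection
    on the test image within that aided sample set."""
    # 1) similarity retrieval on image statistical properties (placeholders here)
    d = np.linalg.norm(cover_props - test_props, axis=1)
    aided = cover_feats[np.argsort(d)[:k]]
    # 2) unsupervised outlier detection on {aided sample set + test image}
    X = np.vstack([aided, test_feat[None, :]])
    labels = LocalOutlierFactor(n_neighbors=10).fit_predict(X)   # -1 marks outliers
    return "stego" if labels[-1] == -1 else "cover"

# Toy usage with random placeholders for high-dimensional steganalytic features.
rng = np.random.default_rng(0)
cover_feats = rng.normal(size=(1000, 300))
cover_props = rng.normal(size=(1000, 8))
print(srisp_aided_decision(rng.normal(size=300) + 3.0, rng.normal(size=8),
                           cover_feats, cover_props))
```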
Simultaneous grouping pursuit and feature selection over an undirected graph
Zhu, Yunzhang; Shen, Xiaotong; Pan, Wei
2013-01-01
Summary: In high-dimensional regression, grouping pursuit and feature selection have their own merits while complementing each other in battling the curse of dimensionality. To seek a parsimonious model, we perform simultaneous grouping pursuit and feature selection over an arbitrary undirected graph, with each node corresponding to one predictor. Regression coefficients can be grouped when the corresponding nodes are reachable from each other over the graph and their absolute values are the same or close. This is motivated by gene network analysis, where genes tend to work in groups according to their biological functionalities. Through a nonconvex penalty, we develop a computational strategy and analyze the proposed method. Theoretical analysis indicates that the proposed method reconstructs the oracle estimator, that is, the unbiased least-squares estimator given the true grouping, leading to consistent reconstruction of grouping structures and informative features, as well as to optimal parameter estimation. Simulation studies suggest that the method combines the benefit of grouping pursuit with that of feature selection, and compares favorably against its competitors in selection accuracy and predictive performance. An application to eQTL data is used to illustrate the methodology, where a network is incorporated into the analysis through an undirected graph. PMID:24098061
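Schematically, such an estimator solves a doubly penalized least-squares problem of the following form. The notation is mine, and the truncated-L1 surrogate J_tau is shown only as one common nonconvex choice; the paper's exact penalty may differ.

```latex
\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
  \;+\; \lambda_1 \sum_{j=1}^{p} J_\tau\!\left(|\beta_j|\right)
  \;+\; \lambda_2 \sum_{(j,k) \in E} J_\tau\!\left(|\beta_j - \beta_k|\right),
\qquad
J_\tau(z) = \min\!\left(z/\tau,\, 1\right),
```

where E is the edge set of the undirected graph; the first penalty performs feature selection, and the second encourages coefficients on connected nodes to merge into groups.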
Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao
2004-02-01
Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and a mandibular size nomogram allows observation of dynamic changes in facial features, as well as chin development, in utero. The mandibular size nomogram was established through a cross-sectional study involving 183 fetal images. The serial changes in facial features and chin development were assessed in a cohort study involving 40 patients. The nomogram reveals that the biparietal diameter (BPD)/mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of the paired-samples t test (P < .001) suggest that fetuses develop wider chins and broader facial features in later weeks. Serial 3D sonography and the mandibular size nomogram display the disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This must be considered when evaluating fetuses at risk for the development of micrognathia.
The Virtual University: Creating an Emergent Reality.
ERIC Educational Resources Information Center
Latta, Gail F.
Higher education has traditionally been defined as a two-dimensional affair concerned with content (curriculum) and pedagogy (instructional design). Information technologies are transforming the educational enterprise into a three-dimensional universe through the diversification of instructional delivery systems. The success of higher education in…
Simulating the influence of scatter and beam hardening in dimensional computed tomography
NASA Astrophysics Data System (ADS)
Lifton, J. J.; Carmignato, S.
2017-10-01
Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
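The beam-hardening effect discussed above follows from polychromatic attenuation: low-energy photons are removed preferentially, so the measured log-attenuation grows sublinearly with material thickness. A minimal sketch with purely illustrative spectrum and attenuation values:

```python
import numpy as np

def poly_projection(thickness_mm, spectrum, mu_per_mm):
    """Measured log-attenuation of a polychromatic beam through a uniform slab.
    spectrum: relative photon counts per energy bin (sums to 1).
    mu_per_mm: linear attenuation coefficient per energy bin (1/mm)."""
    transmitted = spectrum * np.exp(-mu_per_mm * thickness_mm)
    return -np.log(transmitted.sum())

energies = np.array([40.0, 60.0, 80.0, 100.0])   # keV bins (illustrative only)
spectrum = np.array([0.4, 0.3, 0.2, 0.1])        # normalized weights (illustrative)
mu = np.array([0.15, 0.08, 0.05, 0.04])          # 1/mm (illustrative)

# The effective attenuation coefficient drops with thickness: beam hardening.
for t in (1.0, 5.0, 10.0):
    p = poly_projection(t, spectrum, mu)
    print(f"t = {t:4.1f} mm   log-attenuation = {p:.3f}   effective mu = {p / t:.4f} /mm")
```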
Certifying an Irreducible 1024-Dimensional Photonic State Using Refined Dimension Witnesses.
Aguilar, Edgar A; Farkas, Máté; Martínez, Daniel; Alvarado, Matías; Cariñe, Jaime; Xavier, Guilherme B; Barra, Johanna F; Cañas, Gustavo; Pawłowski, Marcin; Lima, Gustavo
2018-06-08
We report on a new class of dimension witnesses, based on quantum random access codes, which are a function of the recorded statistics and have different bounds for all possible decompositions of a high-dimensional physical system. A witness of this class thus certifies the dimension of the system and has the new distinct feature of identifying whether the high-dimensional system is decomposable in terms of lower-dimensional subsystems. To demonstrate the practicability of this technique, we used it to experimentally certify the generation of an irreducible 1024-dimensional photonic quantum state, thereby certifying that the state is neither multipartite nor encoded using noncoupled degrees of freedom of a single photon. Our protocol should find applications in a broad class of modern quantum information experiments addressing the generation of high-dimensional quantum systems, where quantum tomography may become intractable.
NASA Astrophysics Data System (ADS)
Hamrouni, Sameh; Rougon, Nicolas; Prêteux, Françoise
2011-03-01
In perfusion MRI (p-MRI) exams, short-axis (SA) image sequences are captured at multiple slice levels along the long axis of the heart during the transit of a vascular contrast agent (Gd-DTPA) through the cardiac chambers and muscle. Compensating for cardio-thoracic motion is a requirement for enabling computer-aided quantitative assessment of myocardial ischaemia from contrast-enhanced p-MRI sequences. The classical paradigm consists of registering each sequence frame onto a reference image using some intensity-based matching criterion. In this paper, we introduce a novel unsupervised method for the spatio-temporal groupwise registration of cardiac p-MRI exams based on normalized mutual information (NMI) between high-dimensional feature distributions. Here, local contrast-enhancement curves are used as a dense set of spatio-temporal features and are statistically matched, through variational optimization, to a target feature distribution derived from a registered reference template. The hard problem of probability density estimation in high-dimensional state spaces is bypassed by using consistent geometric entropy estimators, allowing NMI to be computed directly from feature samples. Specifically, a computationally efficient kth-nearest-neighbor (kNN) estimation framework is retained, leading to closed-form expressions for the gradient flow of NMI over finite- and infinite-dimensional motion spaces. This approach is applied to the groupwise alignment of cardiac p-MRI exams using a free-form deformation (FFD) model for cardio-thoracic motion. Experiments on simulated and natural datasets suggest its accuracy and robustness for registering p-MRI exams comprising more than 30 frames.
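The kind of geometric entropy estimation referred to above can be sketched with the Kozachenko-Leonenko kNN estimator. The NMI normalization in the comment is one common convention and may differ from the paper's exact definition.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=4):
    """Kozachenko-Leonenko kNN estimate of differential entropy (in nats), computed
    directly from feature samples without explicit density estimation."""
    X = np.asarray(samples, dtype=float)
    n, d = X.shape
    # Euclidean distance from each sample to its k-th nearest neighbour (self excluded).
    r = cKDTree(X).query(X, k=k + 1)[0][:, -1]
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of the unit d-ball
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(np.maximum(r, 1e-12)))

# An NMI-style quantity could then be assembled from such estimates, e.g.
# NMI(A, B) = (H(A) + H(B)) / H([A, B]) for paired feature samples A and B.
rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 3))
print("estimated entropy:", knn_entropy(A),
      "  analytic value:", 1.5 * np.log(2 * np.pi * np.e))
```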
Aerodynamic sound generation of flapping wing.
Bae, Youngmin; Moon, Young J
2008-07-01
The unsteady flow and acoustic characteristics of a flapping wing are numerically investigated for a two-dimensional model of the bumblebee Bombus terrestris at hovering and forward flight conditions. The Reynolds number Re, based on the maximum translational velocity of the wing and the chord length, is 8800, and the Mach number M is 0.0485. The computational results show that flapping-wing sound is generated by two different mechanisms. A primary dipole tone is generated at the wing-beat frequency by the transverse motion of the wing, while higher-frequency dipole tones are produced via vortex edge scattering during the tangential motion. It is also found that the primary tone is directional because of the torsional angle in the wing motion. These features are only distinct for hovering; in the forward flight condition, the wing-vortex interaction becomes more prominent due to the free-stream effect, so the sound pressure level spectrum is more broadband at higher frequencies and the frequency composition becomes similar in all directions.
Peridynamic Theory as a New Paradigm for Multiscale Modeling of Sintering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silling, Stewart A.; Abdeljawad, Fadi; Ford, Kurtis Ross
2017-09-01
Sintering is a component fabrication process in which powder is compacted by pressing or some other means and then held at elevated temperature for a period of hours. The powder grains bond with each other, leading to the formation of a solid component with much lower porosity, and therefore higher density and higher strength, than the original powder compact. In this project, we investigated a new way of computationally modeling sintering at the length scale of grains. The model uses a high-fidelity, three-dimensional representation with a few hundred nodes per grain. The numerical model solves the peridynamic equations, in which nonlocal forces allow representation of the attraction, adhesion, and mass diffusion between grains. The deformation of the grains is represented through a viscoelastic material model. The project successfully demonstrated the use of this method to reproduce experimentally observed features of material behavior in sintering, including densification, the evolution of microstructure, and the occurrence of random defects in the sintered solid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, T., E-mail: xietao@ustc.edu.cn; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026; Qin, H.
A unified ballooning theory, constructed on the basis of two special theories [Zhang et al., Phys. Fluids B 4, 2729 (1992); Y. Z. Zhang and T. Xie, Nucl. Fusion Plasma Phys. 33, 193 (2013)], shows that a weakly up-down asymmetric mode structure is normally formed in an up-down symmetric equilibrium; the weak up-down asymmetry in the mode structure is the manifestation of non-trivial higher-order effects beyond the standard ballooning equation. It is shown that the asymmetric mode may have an even higher growth rate than symmetric modes. The salient features of the theory are illustrated by investigating a fluid model for the ion temperature gradient (ITG) mode. The two-dimensional (2D) analytical form of the ITG mode, solved in the ballooning representation, is then converted into radial-poloidal space to provide the natural boundary condition for solving the 2D mathematical local eigenmode problem. We find that the analytical expression of the mode structure is in good agreement with the finite-difference solution. This sets a reliable framework for quasi-linear computation.
Hanegraef, Hester; Martinón-Torres, María; Martínez de Pinillos, Marina; Martín-Francés, Laura; Vialet, Amélie; Arsuaga, Juan Luis; Bermúdez de Castro, José María
2018-06-01
This study aims to explore the affinities of the Sima de los Huesos (SH) population in relation to Homo neanderthalensis, Arago, and early and contemporary Homo sapiens. By characterizing SH intra-population variation, we test current models to explain the Neanderthal origins. Three-dimensional reconstructions of dentine surfaces of lower first and second molars were produced by micro-computed tomography. Landmarks and sliding semilandmarks were subjected to generalized Procrustes analysis and principal components analysis. SH is often similar in shape to Neanderthals, and both groups are generally discernible from Homo sapiens. For example, the crown height of SH and Neanderthals is lower than for modern humans. Differences in the presence of a mid-trigonid crest are also observed, with contemporary Homo sapiens usually lacking this feature. Although SH and Neanderthals show strong affinities, they can be discriminated based on certain traits. SH individuals are characterized by a lower intra-population variability, and show a derived dental reduction in lower second molars compared to Neanderthals. SH also differs in morphological features from specimens that are often classified as Homo heidelbergensis, such as a lower crown height and less pronounced mid-trigonid crest in the Arago fossils. Our results are compatible with the idea that multiple evolutionary lineages or populations coexisted in Europe during the Middle Pleistocene, with the SH population phylogenetically closer to Homo neanderthalensis. Further research could support the possibility of SH as a separate taxon. Alternatively, SH could be a subspecies of Neanderthals, with the variability of this clade being remarkably higher than previously thought. © 2018 Wiley Periodicals, Inc.