NASA Astrophysics Data System (ADS)
Ray, S. Saha
2018-04-01
In this paper, the symmetry analysis and similarity reduction of the (2+1)-dimensional Bogoyavlensky-Konopelchenko (B-K) equation are investigated by means of the geometric approach of an invariance group, which is equivalent to the classical Lie symmetry method. Using the extended Harrison and Estabrook differential forms approach, the infinitesimal generators for the (2+1)-dimensional B-K equation are obtained. First, the vector field associated with the Lie group of transformations is derived. Then the symmetry reduction and the corresponding explicit exact solution of the (2+1)-dimensional B-K equation are obtained.
NASA Astrophysics Data System (ADS)
Krivov, Sergei V.
2011-07-01
Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game—the archetype of complex dynamics. A variable that provides a complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.
Dimensionality reduction of collective motion by principal manifolds
NASA Astrophysics Data System (ADS)
Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.
2015-01-01
While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
Gönen, Mehmet
2014-01-01
Coupled training of dimensionality reduction and classification has previously been proposed to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862
Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding
Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping
2015-01-01
Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, which is an extension of LLE that exploits the fault class label information. The fault diagnosis approach first extracts the intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by feature extraction in the time domain and frequency domain and by empirical mode decomposition (EMD), and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach clearly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
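To make the pipeline above concrete, the sketch below chains an unsupervised locally linear embedding (scikit-learn's standard LLE, not the supervised S-LLE variant the abstract describes) with a simple k-nearest-neighbour classifier. The "vibration feature" vectors, class labels and all parameter values are synthetic placeholders, not the paper's bearing data.

```python
# Hedged sketch: unsupervised LLE + k-NN on synthetic fault-feature vectors.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Hypothetical 60-dimensional feature vectors (time/frequency/EMD statistics)
# for three fault classes.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(100, 60)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3)
Z_tr = lle.fit_transform(X_tr)            # low-dimensional manifold features
Z_te = lle.transform(X_te)                # out-of-sample mapping

clf = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr)
print("classification accuracy in the reduced space:", clf.score(Z_te, y_te))
```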
On the precision of quasi steady state assumptions in stochastic dynamics
NASA Astrophysics Data System (ADS)
Agarwal, Animesh; Adams, Rhys; Castellani, Gastone C.; Shouval, Harel Z.
2012-07-01
Many biochemical networks have complex multidimensional dynamics and there is a long history of methods that have been used for dimensionality reduction for such reaction networks. Usually a deterministic mass action approach is used; however, in small volumes, there are significant fluctuations from the mean which the mass action approach cannot capture. In such cases stochastic simulation methods should be used. In this paper, we evaluate the applicability of one such dimensionality reduction method, the quasi-steady state approximation (QSSA) [L. Michaelis and M. L. Menten, "Die Kinetik der Invertinwirkung," Biochem. Z. 49, 333-369 (1913)], for dimensionality reduction in the case of stochastic dynamics. First, the applicability of the QSSA approach is evaluated for a canonical system of enzyme reactions. Application of QSSA to such a reaction system in a deterministic setting leads to Michaelis-Menten reduced kinetics which can be used to derive the equilibrium concentrations of the reaction species. In the case of stochastic simulations, however, the steady state is characterized by fluctuations around the mean equilibrium concentration. Our analysis shows that a QSSA based approach for dimensionality reduction captures well the mean of the distribution as obtained from a full dimensional simulation but fails to accurately capture the distribution around that mean. Moreover, the QSSA approximation is not unique. We have then extended the analysis to a simple bistable biochemical network model proposed to account for the stability of synaptic efficacies, the substrate of learning and memory [J. E. Lisman, "A mechanism of memory storage insensitive to molecular turnover: A bistable autophosphorylating kinase," Proc. Natl. Acad. Sci. U.S.A. 82, 3055-3057 (1985), 10.1073/pnas.82.9.3055]. Our analysis shows that a QSSA based dimensionality reduction method results in errors as large as two orders of magnitude in predicting the residence times in the two stable states.
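The sketch below contrasts the two descriptions mentioned above on the canonical enzyme system E + S <-> ES -> E + P: a Gillespie stochastic simulation of the full mechanism versus the QSSA (Michaelis-Menten) reduced model, which reproduces the mean but not the fluctuations. Rate constants, copy numbers and the integration scheme are illustrative choices, not the paper's values.

```python
# Hedged sketch: full-mechanism Gillespie simulation vs. QSSA (Michaelis-Menten)
# reduced model for E + S <-> ES -> E + P. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
k1, km1, k2 = 0.01, 0.1, 0.1          # binding, unbinding, catalysis constants
E0, S0 = 20, 200                      # initial copy numbers
T_END = 200.0

def gillespie():
    E, S, ES, P, t = E0, S0, 0, 0, 0.0
    while t < T_END:
        a = np.array([k1 * E * S, km1 * ES, k2 * ES])    # reaction propensities
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)
        r = rng.choice(3, p=a / a0)
        if r == 0:
            E, S, ES = E - 1, S - 1, ES + 1              # binding
        elif r == 1:
            E, S, ES = E + 1, S + 1, ES - 1              # unbinding
        else:
            E, ES, P = E + 1, ES - 1, P + 1              # catalysis
    return P

def qssa_product(dt=0.01):
    # Reduced model: dP/dt = k2 * E0 * S / (Km + S) with Km = (km1 + k2) / k1.
    Km = (km1 + k2) / k1
    S, P = float(S0), 0.0
    for _ in range(int(T_END / dt)):
        v = k2 * E0 * S / (Km + S)
        S, P = S - v * dt, P + v * dt
    return P

samples = [gillespie() for _ in range(20)]
print("stochastic P(T): mean %.1f, std %.1f" % (np.mean(samples), np.std(samples)))
print("QSSA       P(T): %.1f" % qssa_product())
```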
Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
The Fukunaga-Koontz transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described in terms of two classes, the desired class and the background clutter, such that each basis function best represents one class while carrying the least amount of information from the other class. By selecting a few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces the data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
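A minimal sketch of the Fukunaga-Koontz construction described above: the class scatter matrices are whitened by their sum, after which the two classes share eigenvectors with complementary eigenvalues, and the eigenvectors most relevant to the desired class define the reduced basis. The synthetic "target" and "clutter" spectra and the number of retained eigenvectors are assumptions for illustration only.

```python
# Hedged sketch of a two-class Fukunaga-Koontz transform on synthetic spectra.
import numpy as np

rng = np.random.default_rng(2)
bands = 50
X_target = rng.normal(size=(300, bands)) @ rng.normal(size=(bands, bands)) * 0.1
X_clutter = rng.normal(size=(800, bands))

def scatter(X):
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / len(X)

S1, S2 = scatter(X_target), scatter(X_clutter)

# Whitening operator for the summed scatter: W = (S1 + S2)^(-1/2).
evals, evecs = np.linalg.eigh(S1 + S2)
W = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T

# After whitening, W S1 W and W S2 W share eigenvectors and their eigenvalues
# sum to one, so eigenvectors with large eigenvalues for the target class carry
# the least clutter information (and vice versa).
lam, V = np.linalg.eigh(W @ S1 @ W)
order = np.argsort(lam)[::-1]
k = 5                                    # number of retained basis vectors
projection = W @ V[:, order[:k]]         # maps a spectrum to k FKT features

features = X_target @ projection
print("reduced target-class features:", features.shape)
```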
Metric dimensional reduction at singularities with implications to Quantum Gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoica, Ovidiu Cristinel, E-mail: holotronix@gmail.com
2014-08-15
A series of old and recent theoretical observations suggests that the quantization of gravity would be feasible, and some problems of Quantum Field Theory would go away if, somehow, the spacetime would undergo a dimensional reduction at high energy scales. But an identification of the deep mechanism causing this dimensional reduction would still be desirable. The main contribution of this article is to show that dimensional reduction effects are due to General Relativity at singularities, and do not need to be postulated ad-hoc. Recent advances in understanding the geometry of singularities do not require modification of General Relativity, being just non-singular extensions of its mathematics to the limit cases. They turn out to work fine for some known types of cosmological singularities (black holes and FLRW Big-Bang), allowing a choice of the fundamental geometric invariants and physical quantities which remain regular. The resulting equations are equivalent to the standard ones outside the singularities. One consequence of this mathematical approach to the singularities in General Relativity is a special, (geo)metric type of dimensional reduction: at singularities, the metric tensor becomes degenerate in certain spacetime directions, and some properties of the fields become independent of those directions. Effectively, it is like one or more dimensions of spacetime just vanish at singularities. This suggests that it is worth exploring the possibility that the geometry of singularities leads naturally to the spontaneous dimensional reduction needed by Quantum Gravity. - Highlights: • The singularities we introduce are described by finite geometric/physical objects. • Our singularities are accompanied by dimensional reduction effects. • They affect the metric, the measure, the topology, the gravitational DOF (Weyl = 0). • Effects proposed in other approaches to Quantum Gravity are obtained naturally. • The geometric dimensional reduction obtained opens new ways for Quantum Gravity.
Tensor Train Neighborhood Preserving Embedding
NASA Astrophysics Data System (ADS)
Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin
2018-05-01
In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate the trade-off gains among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that, compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off among classification, computation, and dimensionality reduction on the MNIST handwritten digits and Weizmann face datasets.
NASA Astrophysics Data System (ADS)
Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh
2017-06-01
Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III including datasets IIIa and IVa. The results show that our approach, as a dimensionality reduction technique, leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. Also, the statistical analysis shows that our dimensionality reduction method performs significantly better than its competitors.
NASA Astrophysics Data System (ADS)
Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia
2016-03-01
The Robotic-Assisted Surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose different constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which the 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in Robotic-Assisted Surgeries. According to the results, we demonstrate positive effects of performing dimensionality reduction for deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches related to force estimation.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Robust video copy detection approach based on local tangent space alignment
NASA Astrophysics Data System (ADS)
Nie, Xiushan; Qiao, Qianping
2012-04-01
We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), which is an efficient dimensionality reduction algorithm. The idea is motivated by the fact that video content is becoming richer and its dimensionality higher, and this high dimensionality does not lend itself to natural tools for video analysis and understanding. The proposed approach reduces the dimensionality of the video content using LTSA, and then generates video fingerprints in the low-dimensional space for video copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian; Haller, George
2018-06-01
We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
Exploring the CAESAR database using dimensionality reduction techniques
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Raymer, Michael L.
2012-06-01
The Civilian American and European Surface Anthropometry Resource (CAESAR) database containing over 40 anthropometric measurements on over 4000 humans has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
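A rough sketch of the comparison framework described above, restricted to PCA and Isomap (diffusion maps are not available in scikit-learn and are omitted here); the measurement matrix and gender labels are synthetic stand-ins for the CAESAR variables.

```python
# Hedged sketch: PCA and Isomap followed by three classifiers, 5-fold CV.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, d = 600, 12                           # e.g. 12 readily observable measurements
y = rng.integers(0, 2, size=n)           # synthetic gender labels
X = rng.normal(size=(n, d)) + 0.8 * y[:, None]

reducers = {"PCA": PCA(n_components=3), "Isomap": Isomap(n_components=3)}
classifiers = {"NaiveBayes": GaussianNB(),
               "AdaBoost": AdaBoostClassifier(),
               "SVM": SVC()}

for rname, reducer in reducers.items():
    for cname, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), reducer, clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{rname:6s} + {cname:10s}: mean accuracy {acc:.3f}")
```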
Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong
2017-09-01
Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. However, existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the T-distributed stochastic neighbor approach is first performed and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse of dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, some approaches to feature selection or feature dimensionality reduction should be considered for improving the performance of the MRA based BCI. This paper investigates feature selection in the MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different structures of classifiers. They are evaluated by comparing with baseline methods using sparse representation of features or without feature selection. The statistical analysis, by applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated by using the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performances, with significant reduction in the number of features that need to be computed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Comparing with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses seeking the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of O(n^{-1/2}), the corresponding IRUQ converges at O(n^{-1}). IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
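A minimal sketch of the inverse-regression idea: estimate a one-dimensional sufficient dimension reduction direction with sliced inverse regression (SIR), then fit a cheap response surface along it. The "simulator", its active direction and all sizes are hypothetical, and the paper's polynomial chaos surrogate is replaced by a plain least-squares polynomial.

```python
# Hedged sketch: sliced inverse regression (SIR) + polynomial response surface.
import numpy as np

rng = np.random.default_rng(4)
d, n = 20, 2000
beta = np.zeros(d)
beta[:2] = [1.0, -0.5]                       # "true" active direction (hypothetical)

X = rng.normal(size=(n, d))                  # samples of the uncertain parameters
u = X @ beta
y = u + 0.5 * u**3 + 0.05 * rng.normal(size=n)   # stand-in for an expensive simulator

def sir_direction(X, y, n_slices=10):
    # Classical SIR: whiten the inputs, slice on the response, and take the top
    # eigenvector of the covariance of the slice means.
    mu, cov = X.mean(axis=0), np.cov(X.T)
    L = np.linalg.cholesky(np.linalg.inv(cov))   # whitening factor
    Z = (X - mu) @ L
    order = np.argsort(y)
    M = np.zeros((X.shape[1], X.shape[1]))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(axis=0)
        M += len(s) / len(y) * np.outer(m, m)
    w, V = np.linalg.eigh(M)
    return L @ V[:, -1]                          # back to the original coordinates

b = sir_direction(X, y)
t = X @ b                                        # reduced 1-D coordinate
surrogate = np.poly1d(np.polyfit(t, y, deg=5))   # cheap response surface

cosine = abs(b @ beta) / (np.linalg.norm(b) * np.linalg.norm(beta))
print("cosine between SIR direction and true direction:", round(cosine, 3))
print("surrogate prediction at t=0:", round(surrogate(0.0), 3))
```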
A Fourier dimensionality reduction model for big data interferometric imaging
NASA Astrophysics Data System (ADS)
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the proposed reduction method is available on GitHub.
Stable orthogonal local discriminant embedding for linear dimensionality reduction.
Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin
2013-07-01
Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of data, which impairs the generalization and stability of the algorithms. For this purpose, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce the orthogonal constraint for the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
Dimensionality reduction in epidemic spreading models
NASA Astrophysics Data System (ADS)
Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.
2015-09-01
Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric feature mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
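A toy version of the procedure described above: high-dimensional epidemic snapshots are generated by a simple patch-level SIR model (standing in for the paper's agent-based mobility simulations) and embedded with scikit-learn's ISOMAP; the correlation of an embedding coordinate with the number of infected individuals is then checked.

```python
# Hedged sketch: ISOMAP embedding of snapshots from a toy patch-level SIR model.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(5)
n_patches, T = 100, 300
beta_rate, gamma = 0.3, 0.1

sus = np.full(n_patches, 0.99)
inf = np.full(n_patches, 0.01)
snapshots = []
for _ in range(T):
    # local + global coupling, with a little noise to mimic heterogeneity
    force = beta_rate * (0.8 * inf + 0.2 * inf.mean()) * (1 + 0.05 * rng.normal(size=n_patches))
    new_inf = np.clip(force * sus, 0.0, sus)
    sus, inf = sus - new_inf, inf + new_inf - gamma * inf
    snapshots.append(inf.copy())             # one high-dimensional state per step

X = np.array(snapshots)                      # shape (T, n_patches)
emb = Isomap(n_neighbors=10, n_components=3).fit_transform(X)

total_infected = X.sum(axis=1)
r = np.corrcoef(emb[:, 0], total_infected)[0, 1]
print("embedded trajectory:", emb.shape,
      "| corr(1st coordinate, total infections) = %.2f" % r)
```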
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
A central goal of human genetics is to identify and characterize susceptibility genes for common complex human diseases. An important challenge in this endeavor is the modeling of gene-gene interaction or epistasis that can result in non-additivity of genetic effects. The multifactor dimensionality reduction (MDR) method was developed as a machine learning alternative to parametric logistic regression for detecting interactions in the absence of significant marginal effects. The goal of MDR is to reduce the dimensionality inherent in modeling combinations of polymorphisms using a computational approach called constructive induction. Here, we propose a Robust Multifactor Dimensionality Reduction (RMDR) method that performs constructive induction using Fisher's exact test rather than a predetermined threshold. The advantage of this approach is that only those genotype combinations that are determined to be statistically significant are considered in the MDR analysis. We use two simulation studies to demonstrate that this approach will increase the success rate of MDR when there are only a few genotype combinations that are significantly associated with case-control status. We show that there is no loss of success rate when this is not the case. We then apply the RMDR method to the detection of gene-gene interactions in genotype data from a population-based study of bladder cancer in New Hampshire. PMID:21091664
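The sketch below illustrates the RMDR-style filtering step on synthetic genotype data: each two-SNP genotype combination is labeled high- or low-risk (as in MDR) only if Fisher's exact test on its case/control counts is significant; the interaction effect, sample size and significance threshold are invented for the example.

```python
# Hedged sketch: Fisher's exact test as a filter for MDR-style risk labeling.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(6)
n = 1000
snp1 = rng.integers(0, 3, size=n)                 # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, size=n)
risk = (snp1 == 2) & (snp2 == 0)                  # hidden interaction (invented)
case = rng.random(n) < np.where(risk, 0.7, 0.4)   # case/control status

n_cases, n_ctrls = case.sum(), (~case).sum()
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        a = int((cell & case).sum())              # cases with this genotype combo
        b = int((cell & ~case).sum())             # controls with this combo
        _, p = fisher_exact([[a, b], [n_cases - a, n_ctrls - b]])
        if p < 0.05:                              # RMDR-style significance filter
            label = "high-risk" if a / n_cases > b / n_ctrls else "low-risk"
            print(f"genotype ({g1},{g2}): n={a + b}, p={p:.3g}, labeled {label}")
        # non-significant combinations would simply be left out of the MDR model
```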
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
Advantages of CT scanners with high resolution have allowed improved detection of lung cancers. The recent release of positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study does show the efficacy of CT based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and for preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance in predicting the likelihood of RFS for patients with NSCLC.
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes, which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem in comparison with the linear dimensionality reduction step in FLDA and its several representative extensions.
Gras, Florian; Marintschev, Ivan; Grossterlinden, Lars; Rossmann, Markus; Graul, Isabel; Hofmann, Gunther O; Rueger, Johannes M; Lehmann, Wolfgang
2017-07-01
Anatomical acetabular plates for the anterior intrapelvic approach (AIP) were recently introduced to fix acetabular fractures through the intrapelvic approach. Therefore, we asked the following: (1) Does the preshaped 3-dimensional suprapectineal plate interfere with or even impair the fracture reduction quality? (2) How often does the AIP approach need to be extended by the first (lateral) window of the ilioinguinal approach? Observational case series. Two Level 1 trauma centers. Patients with unstable acetabular fractures in 2014. Fracture fixation with anatomical-preshaped, 3-dimensional suprapectineal plates through the AIP approach ± the first window of the ilioinguinal approach. Fracture reduction results were measured in computed tomography scans and graded according to the Matta quality of reduction. Intraoperative parameters and perioperative complications were recorded. Radiological results (according to Matta) and functional outcome (modified Merle d'Aubigné score) were evaluated at 1-year follow-up. Thirty patients (9 women + 21 men; mean age ± SE: 64 ± 8 years) were included. The intrapelvic approach was solely used in 19 cases, and in 11 cases, an additional extension with the first window of the ilioinguinal approach (preferential for 2-column fractures) was performed. The mean operating time was 202 ± 59 minutes; the fluoroscopic time was 66 ± 48 seconds. Fracture gaps and steps in preoperative versus postoperative computed tomography scans were 12.4 ± 9.8 versus 2.0 ± 1.5 and 6.0 ± 5.5 versus 1.3 ± 1.7 mm, respectively. At 13.4 ± 2.9 months follow-up, the Matta grading was excellent in 50%, good in 25%, fair in 11%, and poor in 14% of cases. The modified Merle d'Aubigné score was excellent in 17%, good in 37%, fair in 33%, and poor in 13% of cases. The AIP approach using approach-specific instruments and an anatomical-preshaped, 3-dimensional suprapectineal plate became the standard procedure in our departments. Radiological and functional early results justify joint preserving surgery in most cases. Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.
Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi
2016-01-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not been assessed nor its implementation as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
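A small sketch of the geometric ingredient behind PAEA: principal angles between a low-dimensional expression-signature subspace and the subspace spanned by a gene set's indicator vectors, computed with scipy.linalg.subspace_angles. The signature construction and the cosine-based score are simplified illustrations, not the published method or web tool.

```python
# Hedged sketch: principal angles between a signature subspace and a gene set.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(7)
n_genes = 500

# Hypothetical differential-expression profiles with a signal on the first 30 genes;
# their leading principal directions define the "signature" subspace.
profiles = rng.normal(size=(n_genes, 8))
profiles[:30] += 2.0
U, _, _ = np.linalg.svd(profiles, full_matrices=False)
signature = U[:, :3]                                  # n_genes x 3 orthonormal basis

def gene_set_score(member_idx):
    # One indicator column per member gene; compare the two column spaces.
    G = np.zeros((n_genes, len(member_idx)))
    G[member_idx, np.arange(len(member_idx))] = 1.0
    return float(np.cos(subspace_angles(signature, G)).max())   # larger = more aligned

signal_set = np.arange(30)                            # overlaps the simulated signal
random_set = rng.choice(n_genes, size=30, replace=False)
print("score, gene set over the signal genes:", round(gene_set_score(signal_set), 3))
print("score, random gene set:               ", round(gene_set_score(random_set), 3))
```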
Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data
NASA Astrophysics Data System (ADS)
Palumbo, Francesco; D'Enza, Alfonso Iodice
Attention towards binary data coding has increased considerably over the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches that exploit a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.
ERIC Educational Resources Information Center
Chen, Chwen Jen; Fauzy Wan Ismail, Wan Mohd
2008-01-01
The real-time interactive nature of three-dimensional virtual environments (VEs) makes this technology very appropriate for exploratory learning purposes. However, many studies have shown that the exploration process may cause cognitive overload that affects the learning of domain knowledge. This article reports a quasi-experimental study that…
NASA Astrophysics Data System (ADS)
Nicolini, Paolo; Frezzato, Diego
2013-06-01
Simplification of chemical kinetics description through dimensional reduction is particularly important to achieve an accurate numerical treatment of complex reacting systems, especially when stiff kinetics are considered and a comprehensive picture of the evolving system is required. To this aim several tools have been proposed in the past decades, such as sensitivity analysis, lumping approaches, and exploitation of time scales separation. In addition, there are methods based on the existence of the so-called slow manifolds, which are hyper-surfaces of lower dimension than the one of the whole phase-space and in whose neighborhood the slow evolution occurs after an initial fast transient. On the other hand, all tools contain to some extent a degree of subjectivity which seems to be irremovable. With reference to macroscopic and spatially homogeneous reacting systems under isothermal conditions, in this work we shall adopt a phenomenological approach to let the dimensional reduction emerge by itself from the mathematical structure of the evolution law. By transforming the original system of polynomial differential equations, which describes the chemical evolution, into a universal quadratic format, and making a direct inspection of the high-order time-derivatives of the new dynamic variables, we then formulate a conjecture which leads to the concept of an "attractiveness" region in the phase-space where a well-defined state-dependent rate function ω has the simple evolution dω/dt = -ω^2 along any trajectory up to the stationary state. This constitutes, by itself, a drastic dimensional reduction from a system of N-dimensional equations (with N the number of chemical species) to a one-dimensional and universal evolution law for such a characteristic rate. Step-by-step numerical inspections on model kinetic schemes are presented. In the companion paper [P. Nicolini and D. Frezzato, J. Chem. Phys. 138, 234102 (2013), 10.1063/1.4809593] this outcome will be naturally related to the appearance (and hence, to the definition) of the slow manifolds.
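For reference, the quoted one-dimensional law integrates in closed form by separation of variables (a standard calculation, not reproduced from the paper):

```latex
\dot{\omega} = -\,\omega^{2}
\;\Longrightarrow\;
\int_{\omega(0)}^{\omega(t)} \frac{\mathrm{d}\omega}{\omega^{2}} = -\int_{0}^{t}\mathrm{d}t'
\;\Longrightarrow\;
\omega(t) = \frac{\omega(0)}{1+\omega(0)\,t},
% i.e. the characteristic rate decays algebraically toward the stationary state
% along any trajectory in the attractiveness region, independently of N.
```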
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.
A Review on Dimension Reduction
Ma, Yanyuan; Zhu, Liping
2013-01-01
Summary: Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with a minimum assumption on the specific regression form, it enjoys attractive advantages as well as encounters unique challenges in comparison with the variable selection approach. We review the current literature of dimension reduction with an emphasis on the two most popular models, where the dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures at different levels of detail, with the intention of focusing on their underlying ideas rather than technicalities. We also discuss some unsolved problems in this area for potential future research. PMID:23794782
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
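A bare-bones version of the hill-climb rewiring idea is sketched below: at each step one edge is swapped for a non-edge and the change is kept only if it increases ω^T L ω. Network size, edge budget and frequency distribution are arbitrary, and the connectivity checks a practical implementation would need are omitted.

```python
# Hedged sketch: hill-climb edge rewiring to maximize w^T L w.
import numpy as np

rng = np.random.default_rng(8)
N, M = 30, 60                                    # oscillators and edge budget
w = rng.normal(size=N)                           # natural frequencies

def laplacian(edge_set):
    A = np.zeros((N, N))
    for i, j in edge_set:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def objective(edge_set):
    return w @ laplacian(edge_set) @ w           # the quantity to be maximized

all_pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
edges = {all_pairs[k] for k in rng.choice(len(all_pairs), size=M, replace=False)}
best = objective(edges)

for _ in range(3000):
    e_out = list(edges)[rng.integers(len(edges))]                # remove one edge
    non_edges = [p for p in all_pairs if p not in edges]
    e_in = non_edges[rng.integers(len(non_edges))]               # add another
    trial = (edges - {e_out}) | {e_in}
    val = objective(trial)
    if val > best:                                               # keep improving swaps
        edges, best = trial, val

print("optimized w^T L w:", round(float(best), 2))
```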
Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas
2015-11-10
Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.
Learning an intrinsic-variable preserving manifold for dynamic visual tracking.
Qiao, Hong; Zhang, Peng; Zhang, Bo; Zheng, Suiwu
2010-06-01
Manifold learning is a hot topic in the field of computer science, particularly since nonlinear dimensionality reduction based on manifold learning was proposed in Science in 2000. The work has achieved great success. The main purpose of current manifold-learning approaches is to search for independent intrinsic variables underlying high dimensional inputs which lie on a low dimensional manifold. In this paper, a new manifold is built up in the training step of the process, on which the input training samples are set to be close to each other if the values of their intrinsic variables are close to each other. Then, the process of dimensionality reduction is transformed into a procedure of preserving the continuity of the intrinsic variables. By utilizing the new manifold, the dynamic tracking of a human who can move and rotate freely is achieved. From the theoretical point of view, it is the first approach to transfer the manifold-learning framework to dynamic tracking. From the application point of view, a new and low dimensional feature for visual tracking is obtained and successfully applied to the real-time tracking of a free-moving object from a dynamic vision system. Experimental results from a dynamic tracking system which is mounted on a dynamic robot validate the effectiveness of the new algorithm.
Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R
2006-01-01
Background: Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results: Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion: Classification of specimens based on feather shape was not highly dependent on the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross validation rate of assignment may be optimized using the variable number of PC axes method presented herein. PMID:16978414
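A compact sketch of the "variable number of PC axes" idea mentioned above: cross-validation selects how many principal components feed the canonical variates (LDA) classifier. The outline coordinates and group structure are simulated, and the grid of candidate dimensions is an arbitrary choice.

```python
# Hedged sketch: choose the number of PC axes feeding the CVA (LDA) by cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_per_group, n_coords, groups = 40, 100, 3        # e.g. 50 outline points x 2 coords
X = np.vstack([rng.normal(loc=0.15 * g, size=(n_per_group, n_coords))
               for g in range(groups)])
y = np.repeat(np.arange(groups), n_per_group)

best_k, best_acc = None, -np.inf
for k in range(2, 21):                            # candidate numbers of PC axes
    pipe = make_pipeline(PCA(n_components=k), LinearDiscriminantAnalysis())
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"selected {best_k} PC axes, cross-validated assignment rate {best_acc:.2f}")
```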
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable for facilitating the management of respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
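The sketch below strings together the two main ingredients described above, kernel PCA for the feature manifold and pre-image recovery of the predicted state, using scikit-learn's KernelPCA with its built-in approximate inverse map instead of the fixed-point iteration of the abstract; the "breathing" surface data and the naive linear extrapolation in feature space are placeholders.

```python
# Hedged sketch: kernel-PCA feature manifold + approximate pre-image recovery.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(10)
T, D = 400, 200                                    # time samples, surface dimensions
base = rng.normal(size=D)
phase = 0.25 * np.arange(T)
X = np.outer(np.sin(phase), base) + 0.01 * rng.normal(size=(T, D))  # synthetic "surfaces"

X_train, x_true_next = X[:-1], X[-1]
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)       # learns an approximate pre-image map
Z = kpca.fit_transform(X_train)                    # low-dimensional feature trajectory

# One-step-ahead "prediction" by naive linear extrapolation in feature space,
# mapped back to the high-dimensional state via the learned pre-image map.
z_pred = Z[-1] + (Z[-1] - Z[-2])
x_pred = kpca.inverse_transform(z_pred.reshape(1, -1))

rmse = float(np.sqrt(np.mean((x_pred - x_true_next) ** 2)))
print("pre-image prediction RMSE vs. next observed state:", round(rmse, 4))
```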
Higher-order gravity in higher dimensions: geometrical origins of four-dimensional cosmology?
NASA Astrophysics Data System (ADS)
Troisi, Antonio
2017-03-01
Determining the cosmological field equations is still very much debated and has led to a wide discussion around different theoretical proposals. A suitable conceptual scheme could be represented by gravity models that naturally generalize Einstein theory, like higher-order gravity theories and higher-dimensional ones. Both of these approaches allow one to define, at the effective level, Einstein field equations equipped with source-like energy-momentum tensors of geometrical origin. In this paper, we discuss the possibility of developing a five-dimensional fourth-order gravity model whose lower-dimensional reduction could provide an interpretation of cosmological four-dimensional matter-energy components. We describe the basic concepts of the model, the complete field equations formalism and the 5-D to 4-D reduction procedure. Five-dimensional f(R) field equations turn out to be equivalent, on the four-dimensional hypersurfaces orthogonal to the extra coordinate, to an Einstein-like cosmological model with three matter-energy tensors related to higher-derivative and higher-dimensional counter-terms. By considering the gravity model with f(R)=f_0R^n, we investigate the possibility of obtaining five-dimensional power-law solutions. The effective four-dimensional picture and the behaviour of the geometrically induced sources are finally outlined for simple cases of such higher-dimensional solutions.
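For orientation, a hedged sketch of the standard metric-variation f(R) field equations that such models start from, written here with five-dimensional indices A, B and a generic matter source; the paper's specific counter-term decomposition is not reproduced:

```latex
f'(R)\,R_{AB} - \tfrac{1}{2}\,f(R)\,g_{AB} + \bigl(g_{AB}\,\Box - \nabla_A \nabla_B\bigr)\,f'(R) = \kappa^2\, T_{AB},
\qquad f(R) = f_0 R^{n} \;\Rightarrow\; f'(R) = n\,f_0\,R^{\,n-1}.
```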
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
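A minimal sketch of the idea, with a plain k-nearest-neighbour graph Laplacian standing in for the triangle-mesh Laplace-Beltrami discretizations (FEM, cotangent, etc.) compared in the paper; the sensor layout and data below are synthetic:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(1)
pos = rng.standard_normal((64, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)          # 64 sensors on a unit sphere

A = kneighbors_graph(pos, n_neighbors=6, mode="connectivity").toarray()
A = np.maximum(A, A.T)                                      # symmetric adjacency of the sensor graph
L = np.diag(A.sum(axis=1)) - A                              # combinatorial graph Laplacian
_, basis = np.linalg.eigh(L)                                # spatial harmonics, low "frequency" first

# synthetic EEG-like data built from a few smooth spatial patterns plus sensor noise
temporal = rng.standard_normal((1000, 5))
eeg = temporal @ basis[:, 1:6].T + 0.05 * rng.standard_normal((1000, 64))

coeff = eeg @ basis                                         # expansion in the harmonic basis
energy = np.cumsum(np.sum(coeff ** 2, axis=0)) / np.sum(coeff ** 2)
k = int(np.searchsorted(energy, 0.95) + 1)                  # coefficients needed for 95% of the energy
eeg_reduced = coeff[:, :k] @ basis[:, :k].T                 # dimensionality-reduced reconstruction
print(f"{k} of 64 coefficients retain 95% of the energy")
```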
Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael
2014-10-01
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
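A minimal sketch of the bag-of-words quantization step followed by a simple multi-label classifier; the window length, codebook size, and simulated records below are illustrative, and the supervised-projection step of the paper is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_patients, T, win = 60, 200, 20
series = rng.standard_normal((n_patients, T))               # one vital-sign series per patient
labels = rng.integers(0, 2, size=(n_patients, 3))           # three binary co-morbidity labels

# learn a codebook of "words" from overlapping windows of all series
windows = np.array([s[i:i + win] for s in series for i in range(0, T - win, win // 2)])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(windows)

def bow_histogram(s):
    w = np.array([s[i:i + win] for i in range(0, T - win, win // 2)])
    return np.bincount(codebook.predict(w), minlength=16) / len(w)

X = np.array([bow_histogram(s) for s in series])            # one normalized histogram per patient
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, labels)
print(clf.predict(X[:5]))                                    # multi-label predictions
```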
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models by lower order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable - for the same level of dimensionality reduction - while yielding better insights into the main process dynamics.
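A minimal sketch contrasting dense PCA/POD loadings with Sparse PCA loadings on a synthetic snapshot matrix (not DYRESM-CAEDYM output); the sparsity parameter alpha is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(3)
snapshots = rng.standard_normal((200, 30))                   # 200 snapshots of 30 state variables
snapshots[:, :3] += 5 * rng.standard_normal((200, 1))        # a few variables dominate the variance

pca = PCA(n_components=3).fit(snapshots)
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(snapshots)

# dense bases mix every state variable; sparse bases involve only a handful of them
print("non-zero loadings per PCA component:  ", (np.abs(pca.components_) > 1e-8).sum(axis=1))
print("non-zero loadings per SPCA component: ", (np.abs(spca.components_) > 1e-8).sum(axis=1))
```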
Kent, Jack W
2016-02-03
New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
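A minimal sketch of the underlying idea for the neural-connectivity scenario: recover an approximate spatial layout from purely local relational data by a spectral dimensionality-reduction step; the distance-dependent connectivity model and its parameters are assumptions for illustration only:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(4)
true_xy = rng.uniform(0, 1, size=(300, 2))                   # unknown positions of 300 "neurons"
d = np.linalg.norm(true_xy[:, None, :] - true_xy[None, :, :], axis=-1)
connectivity = (rng.uniform(size=d.shape) < np.exp(-(d / 0.1) ** 2)).astype(float)
connectivity = np.maximum(connectivity, connectivity.T)      # symmetric, mostly short-range links

# dimensionality reduction of the relational data recovers the layout up to rotation/scale
embedding = SpectralEmbedding(n_components=2, affinity="precomputed")
recovered_xy = embedding.fit_transform(connectivity)
print(recovered_xy.shape)
```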
Peleato, Nicolas M; Legge, Raymond L; Andrews, Robert C
2018-06-01
The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high-dimensional fluorescence data was related to common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, distinct results due to the applied dimensionality reduction approaches were observed, dictating a need for considering the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracies, with improvements to DBP prediction best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the potential application of neural networks to best utilize fluorescence EEM data for prediction of organic matter reactivity. Copyright © 2018 Elsevier Ltd. All rights reserved.
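A minimal sketch of the autoencoder-based compression step, implemented here with a one-hidden-layer scikit-learn MLP trained to reproduce its input and compared against PCA scores; the synthetic spectra and latent dimension are illustrative, not EEM data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n, p, k = 300, 400, 5
latent = rng.standard_normal((n, k))
spectra = np.maximum(latent @ rng.standard_normal((k, p)), 0) + 0.01 * rng.standard_normal((n, p))

# one-hidden-layer autoencoder: train the network to reproduce its own input
ae = MLPRegressor(hidden_layer_sizes=(k,), activation="relu", max_iter=2000,
                  random_state=0).fit(spectra, spectra)
codes = np.maximum(spectra @ ae.coefs_[0] + ae.intercepts_[0], 0)   # hidden-layer activations

pca_scores = PCA(n_components=k).fit_transform(spectra)              # linear counterpart
print("autoencoder codes:", codes.shape, " PCA scores:", pca_scores.shape)
```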
NASA Astrophysics Data System (ADS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold, i.e., the manifold of matrices with orthogonal columns. The additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties in the propagation of solitary waves in a one dimensional granular system.
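For contrast with the gradient-free method proposed here, a minimal sketch of the classical, gradient-based active-subspace construction on a toy function: estimate the gradient covariance by Monte Carlo, take its leading eigenvector as the AS, and fit a Gaussian process on the projected input (the test function and dimensions are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
d = 20
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
f = lambda x: np.sin(x @ w)                      # response varies only along the direction w
grad = lambda x: np.cos(x @ w)[:, None] * w      # analytic gradient of f

X = rng.standard_normal((500, d))
C = grad(X).T @ grad(X) / len(X)                 # Monte Carlo estimate of the gradient covariance
_, eigvec = np.linalg.eigh(C)
W1 = eigvec[:, -1:]                              # one-dimensional active subspace (should align with w)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X @ W1, f(X))
X_test = rng.standard_normal((50, d))
err = np.mean(np.abs(gp.predict(X_test @ W1) - f(X_test)))
print(f"mean absolute error of the 1-D link-function GP: {err:.3f}")
```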
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu
2016-09-15
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold, i.e., the manifold of matrices with orthogonal columns. The additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties in the propagation of solitary waves in a one dimensional granular system.
Lee, Seungyeoun; Kim, Yongkang; Kwon, Min-Seok; Park, Taesung
2015-01-01
Genome-wide association studies (GWAS) have extensively analyzed single SNP effects on a wide variety of common and complex diseases and found many genetic variants associated with diseases. However, there is still a large portion of the genetic variants left unexplained. This missing heritability problem might be due to the analytical strategy that limits analyses to only single SNPs. One possible approach to the missing heritability problem is to identify multi-SNP effects or gene-gene interactions. The multifactor dimensionality reduction (MDR) method has been widely used to detect gene-gene interactions based on constructive induction, classifying high-dimensional genotype combinations into a one-dimensional variable with two attributes, high risk and low risk, for the case-control study. Many modifications of MDR have been proposed and it has also been extended to the survival phenotype. In this study, we propose several extensions of MDR for the survival phenotype and compare the proposed extensions with earlier MDR approaches through comprehensive simulation studies. PMID:26339630
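A minimal sketch of MDR's constructive induction for a case-control trait: two-locus genotype cells are pooled into a single high-risk/low-risk attribute by comparing each cell's case:control ratio with the overall ratio. The genotype data are simulated, and the survival extensions discussed above are not shown:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
snp1, snp2 = rng.integers(0, 3, n), rng.integers(0, 3, n)    # genotypes coded 0/1/2
p_case = 0.3 + 0.4 * ((snp1 == 2) & (snp2 == 0))             # a purely interactive effect
status = (rng.uniform(size=n) < p_case).astype(int)          # 1 = case, 0 = control

overall = status.sum() / (n - status.sum())                  # overall case:control ratio
high_risk = np.zeros((3, 3), dtype=bool)
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        cases, controls = np.sum(status[cell] == 1), np.sum(status[cell] == 0)
        high_risk[g1, g2] = controls > 0 and cases / controls > overall

mdr_attribute = high_risk[snp1, snp2].astype(int)            # the constructed one-dimensional variable
print(f"accuracy of the pooled attribute: {(mdr_attribute == status).mean():.2f}")
```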
Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah
2017-02-01
Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. A Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming the data into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target oriented band reduction, since the eigenvectors that best represent the target class carry the least information about the background class. By selecting the few eigenvectors that are most relevant to the target class, the dimension of the hyperspectral data can be reduced, which presents significant advantages for near real time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose a kernel FKT (KFKT) for target oriented band reduction. The performance of the proposed KFKT based target oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets and the results are reported.
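A minimal sketch of the linear FKT step on synthetic two-class data (the kernelized version and real hyperspectral cubes are not shown): whiten the summed class covariance, eigendecompose the target covariance in the whitened space, and keep the eigenvectors with eigenvalues closest to one:

```python
import numpy as np

rng = np.random.default_rng(8)
bands = 50
target = rng.standard_normal((200, bands)) @ np.diag(np.linspace(2.0, 0.1, bands))
clutter = rng.standard_normal((800, bands)) @ np.diag(np.linspace(0.1, 2.0, bands))

S_t, S_c = np.cov(target, rowvar=False), np.cov(clutter, rowvar=False)
lam, Phi = np.linalg.eigh(S_t + S_c)
T = Phi @ np.diag(lam ** -0.5)                   # whitening transform for the summed covariance

mu, V = np.linalg.eigh(T.T @ S_t @ T)            # eigenvalues lie in [0, 1]; complementary for clutter
order = np.argsort(mu)[::-1]
W = (T @ V[:, order])[:, :5]                     # keep the 5 most target-relevant directions
reduced = target @ W                             # target-oriented band-reduced features
print("eigenvalue range:", round(mu.min(), 3), "-", round(mu.max(), 3), "| reduced shape:", reduced.shape)
```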
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
EXTRACTING PRINCIPLE COMPONENTS FOR DISCRIMINANT ANALYSIS OF FMRI IMAGES
Liu, Jingyu; Xu, Lai; Caprihan, Arvind; Calhoun, Vince D.
2009-01-01
This paper presents an approach for selecting optimal components for discriminant analysis. Such an approach is useful when further detailed analyses for discrimination or characterization require dimensionality reduction. Our approach can accommodate a categorical variable such as diagnosis (e.g. schizophrenic patient or healthy control), or a continuous variable like severity of the disorder. This information is utilized as a reference for measuring a component's discriminant power after principal component decomposition. After sorting each component according to its discriminant power, we extract the best components for discriminant analysis. An application of our reference selection approach is shown using a functional magnetic resonance imaging data set in which the sample size is much less than the dimensionality. The results show that the reference selection approach provides an improved discriminant component set as compared to other approaches. Our approach is general and provides a solid foundation for further discrimination and classification studies. PMID:20582334
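A minimal sketch of reference-guided component selection on simulated data (not fMRI): decompose with PCA, score each component's discriminant power against the diagnosis variable, and keep the top-ranked components for a subsequent discriminant analysis:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(9)
n, p = 40, 5000                                    # far fewer subjects than voxels
diagnosis = np.repeat([0, 1], n // 2)
data = rng.standard_normal((n, p))
data[diagnosis == 1, 100:110] += 0.8               # a weak group difference

scores = PCA(n_components=n - 1).fit_transform(data)
# discriminant power of each component: squared correlation with the reference variable
power = np.array([np.corrcoef(scores[:, j], diagnosis)[0, 1] ** 2 for j in range(scores.shape[1])])
best = np.argsort(power)[::-1][:5]                 # keep the most discriminative components

lda = LinearDiscriminantAnalysis().fit(scores[:, best], diagnosis)
print("selected components:", best, "| training accuracy:", lda.score(scores[:, best], diagnosis))
```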
EXTRACTING PRINCIPLE COMPONENTS FOR DISCRIMINANT ANALYSIS OF FMRI IMAGES.
Liu, Jingyu; Xu, Lai; Caprihan, Arvind; Calhoun, Vince D
2008-05-12
This paper presents an approach for selecting optimal components for discriminant analysis. Such an approach is useful when further detailed analyses for discrimination or characterization require dimensionality reduction. Our approach can accommodate a categorical variable such as diagnosis (e.g. schizophrenic patient or healthy control), or a continuous variable like severity of the disorder. This information is utilized as a reference for measuring a component's discriminant power after principal component decomposition. After sorting each component according to its discriminant power, we extract the best components for discriminant analysis. An application of our reference selection approach is shown using a functional magnetic resonance imaging data set in which the sample size is much less than the dimensionality. The results show that the reference selection approach provides an improved discriminant component set as compared to other approaches. Our approach is general and provides a solid foundation for further discrimination and classification studies.
Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P
2015-08-01
The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving the reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selected patients with complex malunions around the tibial plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.
Li, Shuang; Wu, Dongqing; Liang, Haiwei; Wang, Jinzuan; Zhuang, Xiaodong; Mai, Yiyong; Su, Yuezeng; Feng, Xinliang
2014-11-01
We demonstrate a general and efficient self-templating strategy towards transition metal-nitrogen containing mesoporous carbon/graphene nanosheets with a unique two-dimensional (2D) morphology and tunable mesoscale porosity. Owing to the well-defined 2D morphology, nanometer-scale thickness, high specific surface area, and the simultaneous doping of the metal-nitrogen compounds, the as-prepared catalysts exhibit excellent electrocatalytic activity and stability towards the oxygen reduction reaction (ORR) in both alkaline and acidic media. More importantly, such a self-templating approach towards two-dimensional porous carbon hybrids with diverse metal-nitrogen doping opens up new avenues to mesoporous heteroatom-doped carbon materials as electrochemical catalysts for oxygen reduction and hydrogen evolution, with promising applications in fuel cell and battery technologies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane
2013-01-01
We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the needed efficiency to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from the state-of-the-art ROM techniques, our work focuses only on techniques which can quantify the credibility of the reduction which can be measured with the reduction errors upper-bounded for the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed when conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction, because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bound by the error estimated from the fitting residual. Dimensionality reduction techniques however employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshots variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends this theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace determined using a given set of snapshots, generated either using the full high fidelity model, or other models with lower fidelity, can be assessed, which provides insight to the analyst on the type of snapshots required to reach a reduction that can satisfy user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux. 
The developed methods will be applied to representative assembly level calculations, where the size of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.).
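A minimal sketch of the snapshot-based reduction underlying such ROM techniques: choose the rank of the active subspace from a user-defined tolerance on the captured snapshot variation and compare the reconstruction error with the discarded singular values. The snapshots below are synthetic, not cross-section or flux data, and the probabilistic bounds developed in the thesis are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(10)
modes = rng.standard_normal((500, 8))                        # 8 underlying modes of variation
snapshots = modes @ rng.standard_normal((8, 120)) + 1e-3 * rng.standard_normal((500, 120))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999) + 1)                  # rank meeting a preset variation tolerance
subspace = U[:, :r]                                          # "active subspace" basis for the snapshots

recon = subspace @ (subspace.T @ snapshots)                  # constrain the variable to the subspace
err = np.linalg.norm(snapshots - recon)
discarded = np.sqrt(np.sum(s[r:] ** 2))                      # energy of the truncated singular values
print(f"rank {r}; reconstruction error {err:.3e}; discarded-singular-value energy {discarded:.3e}")
```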
Application of MSCTA combined with VRT in the operation of cervical dumbbell tumors
Wang, Wan; Lin, Jia; Knosp, Engelbert; Zhao, Yuanzheng; Xiu, Dianhui; Guo, Yongchuan
2015-01-01
Cervical dumbbell tumors pose great difficulties for neurosurgical treatment and incur a remarkable local recurrence rate, which remains a formidable problem for neurosurgery. However, MRI and CT, the routine preoperative evaluation schemes, fail to reveal the mutual three-dimensional relationships between the tumor and adjacent structures. Here, we report the clinical application of MSCTA and VRT in three-dimensional reconstruction of cervical dumbbell tumors. From January 2012 to July 2014, 24 patients diagnosed with cervical dumbbell tumor were retrospectively analyzed. All enrolled patients were indicated for preoperative MSCTA/VRT image reconstruction to explore the three-dimensional stereoscopic anatomical relationships among the neuroma, spinal cord and vertebral artery and to select the optimal surgical approach from multiple configurations. The three-dimensional mutual anatomical relationships among the tumor, adjacent vessels and vertebrae were vividly reconstructed by MSCTA/VRT in all patients, in accordance with intraoperative findings. Multiple configurations for the optimal surgical approach contributed to total resection of the tumor, minimal damage to vessels and nerves, and maximal maintenance of cervical spine stability. Preoperative MSCTA/VRT contributes to the reconstruction of three-dimensional stereoscopic anatomical relationships between cervical dumbbell tumors and adjacent structures, enabling selection of the optimal surgical approach from multiple configurations and a reduction of intraoperative damage and postoperative complications. PMID:26550385
Application of MSCTA combined with VRT in the operation of cervical dumbbell tumors.
Wang, Wan; Lin, Jia; Knosp, Engelbert; Zhao, Yuanzheng; Xiu, Dianhui; Guo, Yongchuan
2015-01-01
Cervical dumbbell tumors pose great difficulties for neurosurgical treatment and incur a remarkable local recurrence rate, which remains a formidable problem for neurosurgery. However, MRI and CT, the routine preoperative evaluation schemes, fail to reveal the mutual three-dimensional relationships between the tumor and adjacent structures. Here, we report the clinical application of MSCTA and VRT in three-dimensional reconstruction of cervical dumbbell tumors. From January 2012 to July 2014, 24 patients diagnosed with cervical dumbbell tumor were retrospectively analyzed. All enrolled patients were indicated for preoperative MSCTA/VRT image reconstruction to explore the three-dimensional stereoscopic anatomical relationships among the neuroma, spinal cord and vertebral artery and to select the optimal surgical approach from multiple configurations. The three-dimensional mutual anatomical relationships among the tumor, adjacent vessels and vertebrae were vividly reconstructed by MSCTA/VRT in all patients, in accordance with intraoperative findings. Multiple configurations for the optimal surgical approach contributed to total resection of the tumor, minimal damage to vessels and nerves, and maximal maintenance of cervical spine stability. Preoperative MSCTA/VRT contributes to the reconstruction of three-dimensional stereoscopic anatomical relationships between cervical dumbbell tumors and adjacent structures, enabling selection of the optimal surgical approach from multiple configurations and a reduction of intraoperative damage and postoperative complications.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization
Glaser, Joshua I.; Zamft, Bradley M.; Church, George M.; Kording, Konrad P.
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, “puzzle imaging,” that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples. PMID:26192446
Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.
2011-04-01
Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
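A minimal sketch of an (unweighted) diffusion map on synthetic data; the umbrella-sampling reweighting that is the paper's contribution is not included, and the bandwidth heuristic below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
theta = rng.uniform(0, 2 * np.pi, 400)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((400, 2))   # noisy circle

d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # squared pairwise distances
eps = np.median(d2)                                          # a simple bandwidth heuristic
K = np.exp(-d2 / eps)                                        # Gaussian kernel
q = K.sum(axis=1)
K_alpha = K / np.outer(q, q)                                 # alpha = 1 density normalisation
P = K_alpha / K_alpha.sum(axis=1, keepdims=True)             # row-stochastic diffusion operator

eigval, eigvec = np.linalg.eig(P)
order = np.argsort(-eigval.real)
dmap = eigvec[:, order[1:3]].real                            # first two nontrivial diffusion coordinates
print(dmap.shape)
```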
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating the non-adaptive data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
Euclidean sections of protein conformation space and their implications in dimensionality reduction
Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong
2014-01-01
Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates for protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation as the local distance metric face a challenge. We use Isomap as an example to illustrate the problem. We believe that there is an implied assumption for the dimensionality-reduction approaches that aim to preserve the geometric relations between the objects: both the original space and the reduced space have the same kind of geometry, such as Euclidean geometry vs. Euclidean geometry or spherical geometry vs. spherical geometry. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean, thus the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R3N. We formally define the protein conformation space as the quotient space of R3N by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise root-mean-square deviation is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We have demonstrated that an explicit Euclidean-based representation of protein conformation space and the local distance metric associated with it improve the quality of dimensionality reduction in the tetra-peptide and β-hairpin systems. PMID:24913095
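A minimal sketch of the setup examined here: Isomap driven by a precomputed pairwise distance matrix (standing in for pairwise RMSD between conformations), demonstrated on a Swiss-roll surface rather than an actual conformation space:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=800, random_state=0)
D = squareform(pdist(X))                          # precomputed pairwise "local distance metric"

iso = Isomap(n_neighbors=10, n_components=2, metric="precomputed")
embedding = iso.fit_transform(D)                  # 2-D embedding built from the distance matrix only
print(embedding.shape)
```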
Phase reduction approach to synchronisation of nonlinear oscillators
NASA Astrophysics Data System (ADS)
Nakao, Hiroya
2016-04-01
Systems of dynamical elements exhibiting spontaneous rhythms are found in various fields of science and engineering, including physics, chemistry, biology, physiology, and mechanical and electrical engineering. Such dynamical elements are often modelled as nonlinear limit-cycle oscillators. In this article, we briefly review phase reduction theory, which is a simple and powerful method for analysing the synchronisation properties of limit-cycle oscillators exhibiting rhythmic dynamics. Through phase reduction theory, we can systematically simplify the nonlinear multi-dimensional differential equations describing a limit-cycle oscillator to a one-dimensional phase equation, which is much easier to analyse. Classical applications of this theory, i.e. the phase locking of an oscillator to a periodic external forcing and the mutual synchronisation of interacting oscillators, are explained. Further, more recent applications of this theory to the synchronisation of non-interacting oscillators induced by common noise and the dynamics of coupled oscillators on complex networks are discussed. We also comment on some recent advances in phase reduction theory for noise-driven oscillators and rhythmic spatiotemporal patterns.
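For reference, a hedged sketch of the standard form of the reduction this theory performs, in commonly used notation assumed here: ω is the natural frequency, Z(θ) the phase sensitivity function (the gradient of the asymptotic phase evaluated on the limit cycle), and εp(t) a weak perturbation:

```latex
\dot{X} = F(X) + \varepsilon\, p(t)
\quad\longrightarrow\quad
\dot{\theta} = \omega + \varepsilon\, Z(\theta)\cdot p(t).
```

Setting ε = 0 recovers uniform phase rotation at the natural frequency ω, and the one-dimensional phase equation is the object analysed in the locking and synchronisation applications described above.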
Mathew, Boby; Léon, Jens; Sannemann, Wiebke; Sillanpää, Mikko J.
2018-01-01
Gene-by-gene interactions, also known as epistasis, regulate many complex traits in different species. With the availability of low-cost genotyping it is now possible to study epistasis on a genome-wide scale. However, identifying genome-wide epistasis is a high-dimensional multiple regression problem and needs the application of dimensionality reduction techniques. Flowering Time (FT) in crops is a complex trait that is known to be influenced by many interacting genes and pathways in various crops. In this study, we successfully apply Sure Independence Screening (SIS) for dimensionality reduction to identify two-way and three-way epistasis for the FT trait in a Multiparent Advanced Generation Inter-Cross (MAGIC) barley population using the Bayesian multilocus model. The MAGIC barley population was generated from intercrossing among eight parental lines and thus, offered greater genetic diversity to detect higher-order epistatic interactions. Our results suggest that SIS is an efficient dimensionality reduction approach to detect high-order interactions in a Bayesian multilocus model. We also observe that many of our findings (genomic regions with main or higher-order epistatic effects) overlap with known candidate genes that have been already reported in barley and closely related species for the FT trait. PMID:29254994
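A minimal sketch of Sure Independence Screening as used for this kind of problem: rank candidate predictors (here SNPs and their pairwise products) by marginal correlation with the trait and keep only the top few for a downstream multilocus model; the simulated data and the screening size n/log(n) are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(12)
n, p = 300, 100
snps = rng.integers(0, 3, size=(n, p)).astype(float)
trait = 0.8 * snps[:, 3] * snps[:, 17] + rng.standard_normal(n)    # a two-way epistatic effect

pairs = list(combinations(range(p), 2))
features = np.hstack([snps, np.column_stack([snps[:, i] * snps[:, j] for i, j in pairs])])

# marginal correlation of every candidate predictor with the trait
Xc = features - features.mean(axis=0)
yc = trait - trait.mean()
corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

keep = np.argsort(corr)[::-1][:int(n / np.log(n))]                  # SIS screening size ~ n / log(n)
true_index = p + pairs.index((3, 17))
print(f"kept {len(keep)} of {features.shape[1]} predictors; true epistatic pair retained: {true_index in keep}")
```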
Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie
2018-05-01
The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
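A minimal sketch of the PCEV computation on simulated data: split the outcome covariance into a part explained by the covariate and a residual part, then take the leading generalized eigenvector as the linear combination maximizing the proportion of explained variance (the exact and asymptotic tests proposed in the paper are not reproduced):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(13)
n, q = 500, 10
x = rng.standard_normal(n)                            # covariate of interest
effect = np.zeros(q); effect[:3] = 0.5                # only the first three outcomes depend on x
Y = np.outer(x, effect) + rng.standard_normal((n, q))

# split the outcome covariance into a part explained by x and a residual part
design = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(design, Y, rcond=None)[0]
fitted, resid = design @ beta, Y - design @ beta
V_model = np.cov(fitted, rowvar=False)
V_resid = np.cov(resid, rowvar=False)

vals, vecs = eigh(V_model, V_resid)                   # generalized eigenproblem
w = vecs[:, -1]                                       # PCEV loadings
print("variance-explained ratio:", round(vals[-1], 3))
print("loadings (largest on the associated outcomes):", np.round(w, 2))
```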
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting: both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction
Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo
2015-01-01
By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
NASA Technical Reports Server (NTRS)
Cunefare, K. A.; Koopmann, G. H.
1991-01-01
This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total radiated power from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation, and thus, the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz
2015-05-01
The problem of interpreting the common provenance of samples within an infrared spectra database of polypropylene samples from car body parts and plastic containers, as well as Raman spectra databases of blue solid and metallic automotive paints, was investigated. The research involved statistical tools such as the likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since LR models can easily be proposed for databases described by a few variables, the research focused on reducing the dimensionality of spectra characterised by more than a thousand variables. The objective of the study was to combine chemometric tools that deal easily with multidimensionality with the LR approach. The final variables used for constructing the LR models were derived from the discrete wavelet transform (DWT) as a data dimensionality reduction technique, supported by methods for variance analysis, and corresponded to chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from the DWT preserve the signal characteristics, being a sparse representation of the original signal that keeps its shape and relevant chemical information.
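A minimal sketch of the DWT-based dimensionality-reduction step, keeping only the coarse approximation coefficients of each spectrum as candidate variables for the (omitted) LR models; it requires the PyWavelets package, and the synthetic spectra, wavelet family, and decomposition level are illustrative:

```python
import numpy as np
import pywt

rng = np.random.default_rng(14)
wavenumbers = np.linspace(400, 4000, 1024)
spectra = np.exp(-((wavenumbers - 1450) / 30) ** 2)[None, :] * rng.uniform(0.5, 1.5, (20, 1))
spectra += 0.01 * rng.standard_normal((20, 1024))      # 20 noisy polypropylene-like IR spectra

# keep only the coarse approximation coefficients of a 5-level db4 decomposition
features = np.array([pywt.wavedec(s, "db4", level=5)[0] for s in spectra])
print("original length:", spectra.shape[1], "-> DWT approximation length:", features.shape[1])
```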
NASA Astrophysics Data System (ADS)
Zhang, Chengwei; Yang, Hui; Sun, Tingting; Shan, Nannan; Chen, Jianfeng; Xu, Lianbin; Yan, Yushan
2014-01-01
Three dimensionally ordered macro-/mesoporous (3DOM/m) Pt catalysts are fabricated by chemical reduction employing a dual-templating synthesis approach combining both colloidal crystal (opal) templating (hard-templating) and lyotropic liquid crystal templating (soft-templating) techniques. The macropore walls of the prepared 3DOM/m Pt exhibit a uniform mesoporous structure composed of polycrystalline Pt nanoparticles. Both the size of the mesopores and Pt nanocrystallites are in the range of 3-5 nm. The 3DOM/m Pt catalyst shows a larger electrochemically active surface area (ECSA), and higher catalytic activity as well as better poisoning tolerance for methanol oxidation reaction (MOR) than the commercial Pt black catalyst.
Development of a Multifidelity Approach to Acoustic Liner Impedance Eduction
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Jones, Michael G.
2017-01-01
The use of acoustic liners has proven to be extremely effective in reducing aircraft engine fan noise transmission/radiation. However, the introduction of advanced fan designs and shorter engine nacelles has highlighted a need for novel acoustic liner designs that provide increased fan noise reduction over a broader frequency range. To achieve aggressive noise reduction goals, advanced broadband liner designs, such as zone liners and variable impedance liners, will likely depart from conventional uniform impedance configurations. Therefore, educing the impedance of these axial- and/or spanwise-variable impedance liners will require models that account for three-dimensional effects, thereby increasing computational expense. Thus, it would seem advantageous to investigate the use of multifidelity modeling approaches to impedance eduction for these advanced designs. This paper describes an extension of the use of the CDUCT-LaRC code to acoustic liner impedance eduction. The proposed approach is applied to a hardwall insert and conventional liner using simulated data. Educed values compare well with those educed using two extensively tested and validated approaches. The results are very promising and provide justification to further pursue the complementary use of CDUCT-LaRC with the currently used finite element codes to increase the efficiency of the eduction process for configurations involving three-dimensional effects.
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once in 2 or 3 dimensions the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
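The record above compares low-dimensional projections of a tree-to-tree distance matrix; as a rough, hedged illustration of the general idea (not the CCA + SGD method itself), the sketch below embeds a distance matrix into 3 dimensions with classical multidimensional scaling.

```python
# A minimal stand-in for projecting a tree-to-tree distance matrix into
# 2 or 3 dimensions. Classical multidimensional scaling is used here for
# illustration; it is NOT the CCA + SGD method described above.
import numpy as np

def classical_mds(D, n_dims=3):
    """Embed a symmetric distance matrix D into n_dims coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:n_dims]     # largest eigenvalues first
    L = np.sqrt(np.clip(eigvals[idx], 0, None))
    return eigvecs[:, idx] * L

# Example with a random symmetric "tree-to-tree" distance matrix
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
coords = classical_mds(D, n_dims=3)
print(coords.shape)   # (40, 3) coordinates for a 3D tree landscape plot
```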
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
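As a hedged sketch of the kernel-PCA portion of the pipeline described above, the code below projects high-dimensional states onto a low-dimensional feature manifold and recovers a pre-image of a predicted feature vector; scikit-learn's learned inverse map stands in for the fixed-point pre-image iteration of the paper, and the kernel and its parameters are illustrative guesses.

```python
# A minimal sketch of kernel PCA dimensionality reduction with pre-image
# recovery. scikit-learn's inverse_transform uses a learned (kernel ridge)
# pre-image map rather than the paper's fixed-point iteration; the kernel,
# gamma, and the synthetic "states" below are assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
states = rng.normal(size=(200, 3000))          # stand-in for surface states

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
features = kpca.fit_transform(states)          # low-dimensional manifold coords

# ... a predictor would forecast the next feature vector here ...
predicted_features = features[-1:]             # placeholder "prediction"

# Pre-image estimation: back to the original high-dimensional state space
predicted_state = kpca.inverse_transform(predicted_features)
print(predicted_state.shape)                   # (1, 3000)
```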
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to get a good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to get robust performance in the presence of outliers.
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.; Davis, T. A.
2016-12-01
Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
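As an illustration of the sparse linear-algebra machinery mentioned above, the following sketch assembles a synthetic sparse tomography matrix and inspects its leading singular values; note that scipy's svds returns only the k largest singular values, not the complete spectrum obtained by the method in the record.

```python
# A minimal sketch of inspecting the singular spectrum of a large sparse
# tomography matrix. scipy's svds returns only the k largest singular
# values, unlike the complete-spectrum method described above; the matrix
# here is synthetic, not a real ray-path kernel.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
n_rays, n_cells = 5000, 2000
G = sp.random(n_rays, n_cells, density=0.01, random_state=0, format="csr")

u, s, vt = svds(G, k=20)                  # 20 largest singular values
s = np.sort(s)[::-1]
print("leading singular values:", s[:5])
print("condition estimate over the retained part:", s[0] / s[-1])

# Singular values near zero flag poorly constrained model directions,
# which is the basis for objective regularization / dimension reduction.
```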
Light-cone reduction vs. TsT transformations: a fluid dynamics perspective
NASA Astrophysics Data System (ADS)
Dutta, Suvankar; Krishna, Hare
2018-05-01
We compute constitutive relations for a charged (2+1)-dimensional Schrödinger fluid up to first order in the derivative expansion, using holographic techniques. Starting with a locally boosted, asymptotically AdS, 4+1 dimensional charged black brane geometry, we uplift it to ten dimensions and perform TsT transformations to obtain an effective five dimensional local black brane solution with asymptotically Schrödinger isometries. By suitably implementing the holographic techniques, we compute the constitutive relations for the effective fluid living on the boundary of this space-time and extract first order transport coefficients from these relations. A Schrödinger fluid can also be obtained by reducing a charged relativistic conformal fluid over the light-cone. It turns out that both approaches result in the same system in the end. The fluid obtained by light-cone reduction satisfies a restricted class of thermodynamics. Here, we see that the charged fluid obtained holographically also belongs to the same restricted class.
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
Rydzewski, J; Nowak, W
2016-04-12
In this work we propose an application of a nonlinear dimensionality reduction method to represent the high-dimensional configuration space of the ligand-protein dissociation process in a manner facilitating interpretation. Rugged ligand expulsion paths are mapped into 2-dimensional space. The mapping retains the main structural changes occurring during the dissociation. The topological similarity of the reduced paths may be easily studied using the Fréchet distances, and we show that this measure facilitates machine learning classification of the diffusion pathways. Further, low-dimensional configuration space allows for identification of residues active in transport during the ligand diffusion from a protein. The utility of this approach is illustrated by examination of the configuration space of cytochrome P450cam involved in expulsing camphor by means of enhanced all-atom molecular dynamics simulations. The expulsion trajectories are sampled and constructed on-the-fly during molecular dynamics simulations using the recently developed memetic algorithms [ Rydzewski, J.; Nowak, W. J. Chem. Phys. 2015 , 143 ( 12 ), 124101 ]. We show that the memetic algorithms are effective for enforcing the ligand diffusion and cavity exploration in the P450cam-camphor complex. Furthermore, we demonstrate that machine learning techniques are helpful in inspecting ligand diffusion landscapes and provide useful tools to examine structural changes accompanying rare events.
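The record above measures the topological similarity of reduced pathways with Fréchet distances; the sketch below implements the standard discrete Fréchet distance recursion on two synthetic 2D paths, purely as an illustrative stand-in for the authors' analysis.

```python
# A minimal sketch of comparing two dimensionality-reduced ligand paths with
# the discrete Frechet distance (standard dynamic-programming recursion).
# The paths below are synthetic placeholders for 2D projected trajectories.
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q (n x 2 arrays)."""
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

t = np.linspace(0, 1, 100)
path_a = np.column_stack([t, np.sin(2 * np.pi * t)])
path_b = np.column_stack([t, np.sin(2 * np.pi * t) + 0.1])
print(discrete_frechet(path_a, path_b))   # small value -> topologically similar
```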
A fully 3D approach for metal artifact reduction in computed tomography.
Kratz, Barbel; Weyers, Imke; Buzug, Thorsten M
2012-11-01
In computed tomography imaging, metal objects in the region of interest introduce inconsistencies during data acquisition. Reconstructing these data leads to an image in the spatial domain including star-shaped or stripe-like artifacts. In order to enhance the quality of the resulting image, the influence of the metal objects can be reduced. Here, a metal artifact reduction (MAR) approach is proposed that is based on a recomputation of the inconsistent projection data using a fully three-dimensional Fourier-based interpolation. The success of the projection space restoration depends sensitively on a sensible continuation of neighboring structures into the recomputed area. Fortunately, structural information of the entire data is inherently included in the Fourier space of the data. This can be used for a reasonable recomputation of the inconsistent projection data. The key step of the proposed MAR strategy is the recomputation of the inconsistent projection data based on an interpolation using nonequispaced fast Fourier transforms (NFFT). The NFFT interpolation can be applied in arbitrary dimension. The approach overcomes the problem of adequate neighborhood definitions on irregular grids, since this is inherently given through the usage of higher dimensional Fourier transforms. Here, applications up to the third interpolation dimension are presented and validated. Furthermore, prior knowledge may be included by an appropriate damping of the transform during the interpolation step. This MAR method is applicable to each angular view of a detector row, to two-dimensional projection data as well as to three-dimensional projection data, e.g., a set of sequential acquisitions at different spatial positions, projection data of a spiral acquisition, or cone-beam projection data. Results of the novel MAR scheme based on one-, two-, and three-dimensional NFFT interpolations are presented. All results are compared in projection data space and spatial domain with the well-known one-dimensional linear interpolation strategy. In conclusion, it is recommended to include as much spatial information into the recomputation step as possible. This is realized by increasing the dimension of the NFFT. The resulting image quality can be enhanced considerably.
Isostable reduction with applications to time-dependent partial differential equations.
Wilson, Dan; Moehlis, Jeff
2016-07-01
Isostables and isostable reduction, analogous to isochrons and phase reduction for oscillatory systems, are useful in the study of nonlinear equations which asymptotically approach a stationary solution. In this work, we present a general method for isostable reduction of partial differential equations, with the potential power to reduce the dimensionality of a nonlinear system from infinity to 1. We illustrate the utility of this reduction by applying it to two different models with biological relevance. In the first example, isostable reduction of the Fokker-Planck equation provides the necessary framework to design a simple control strategy to desynchronize a population of pathologically synchronized oscillatory neurons, as might be relevant to Parkinson's disease. Another example analyzes a nonlinear reaction-diffusion equation with relevance to action potential propagation in a cardiac system.
Cao, Peng; Liu, Xiaoli; Yang, Jinzhu; Zhao, Dazhe; Huang, Min; Zhang, Jian; Zaiane, Osmar
2017-12-01
Alzheimer's disease (AD) is not only a substantial financial burden to the health care system but also an emotional burden to patients and their families. Making an accurate diagnosis of AD based on brain magnetic resonance imaging (MRI), particularly at the earliest stages, is becoming increasingly critical. However, the high dimensionality and imbalanced data issues are two major challenges in the study of computer aided AD diagnosis. The greatest limitation of existing dimensionality reduction and over-sampling methods is that they assume a linear relationship between the MRI features (predictor) and the disease status (response). To better capture the complicated but more flexible relationship, we propose multi-kernel based dimensionality reduction and over-sampling approaches. We combined Marginal Fisher Analysis with ℓ2,1-norm based multi-kernel learning (MKMFA) to achieve sparsity at the region-of-interest (ROI) level, which leads to simultaneously selecting a subset of the relevant brain regions and learning a dimensionality transformation. Meanwhile, a multi-kernel over-sampling (MKOS) was developed to generate synthetic instances in the optimal kernel space induced by MKMFA, so as to compensate for the class imbalanced distribution. We comprehensively evaluate the proposed models for diagnostic classification (binary-class and multi-class classification) including all subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The experimental results not only demonstrate that the proposed method has superior performance over multiple comparable methods, but also identify relevant imaging biomarkers that are consistent with prior medical knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
Order parameter for bursting polyrhythms in multifunctional central pattern generators
NASA Astrophysics Data System (ADS)
Wojcik, Jeremy; Clewley, Robert; Shilnikov, Andrey
2011-05-01
We examine multistability of several coexisting bursting patterns in a central pattern generator network composed of three Hodgkin-Huxley type cells coupled reciprocally by inhibitory synapses. We establish that the control of switching between bursting polyrhythms and their bifurcations are determined by the temporal characteristics, such as the duty cycle, of networked interneurons and the coupling strength asymmetry. A computationally effective approach to the reduction of dynamics of the nine-dimensional network to two-dimensional Poincaré return mappings for phase lags between the interneurons is presented.
Principal component analysis on a torus: Theory and application to protein dynamics.
Sittel, Florian; Filk, Thomas; Stock, Gerhard
2017-12-28
A dimensionality reduction method for high-dimensional circular data is developed, which is based on a principal component analysis (PCA) of data points on a torus. Adopting a geometrical view of PCA, various distance measures on a torus are introduced and the associated problem of projecting data onto the principal subspaces is discussed. The main idea is that the (periodicity-induced) projection error can be minimized by transforming the data such that the maximal gap of the sampling is shifted to the periodic boundary. In a second step, the covariance matrix and its eigendecomposition can be computed in a standard manner. Adopting molecular dynamics simulations of two well-established biomolecular systems (Aib9 and villin headpiece), the potential of the method to analyze the dynamics of backbone dihedral angles is demonstrated. The new approach allows for a robust and well-defined construction of metastable states and provides low-dimensional reaction coordinates that accurately describe the free energy landscape. Moreover, it offers a direct interpretation of covariances and principal components in terms of the angular variables. Apart from its application to PCA, the method of maximal gap shifting is general and can be applied to any other dimensionality reduction method for circular data.
Principal component analysis on a torus: Theory and application to protein dynamics
NASA Astrophysics Data System (ADS)
Sittel, Florian; Filk, Thomas; Stock, Gerhard
2017-12-01
A dimensionality reduction method for high-dimensional circular data is developed, which is based on a principal component analysis (PCA) of data points on a torus. Adopting a geometrical view of PCA, various distance measures on a torus are introduced and the associated problem of projecting data onto the principal subspaces is discussed. The main idea is that the (periodicity-induced) projection error can be minimized by transforming the data such that the maximal gap of the sampling is shifted to the periodic boundary. In a second step, the covariance matrix and its eigendecomposition can be computed in a standard manner. Adopting molecular dynamics simulations of two well-established biomolecular systems (Aib9 and villin headpiece), the potential of the method to analyze the dynamics of backbone dihedral angles is demonstrated. The new approach allows for a robust and well-defined construction of metastable states and provides low-dimensional reaction coordinates that accurately describe the free energy landscape. Moreover, it offers a direct interpretation of covariances and principal components in terms of the angular variables. Apart from its application to PCA, the method of maximal gap shifting is general and can be applied to any other dimensionality reduction method for circular data.
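A minimal sketch of the maximal-gap-shifting idea described in the two records above: each circular variable is rotated so that its largest sampling gap lies at the periodic boundary, after which an ordinary PCA is applied; the von Mises test data and the number of angles are arbitrary illustrations.

```python
# A minimal sketch of "maximal gap shifting" for circular data: for each
# dihedral angle, rotate the data so the largest sampling gap sits at the
# periodic boundary, then run ordinary PCA. The angles here are synthetic.
import numpy as np

def shift_maximal_gap(angles):
    """Shift each angle column (radians, in [-pi, pi)) so its largest
    sampling gap lies at the periodic boundary."""
    shifted = np.empty_like(angles)
    for k in range(angles.shape[1]):
        a = np.sort(angles[:, k])
        gaps = np.diff(np.concatenate([a, [a[0] + 2 * np.pi]]))
        i = np.argmax(gaps)                       # largest gap
        cut = a[i] + gaps[i] / 2.0                # place boundary mid-gap
        shifted[:, k] = np.mod(angles[:, k] - cut, 2 * np.pi) - np.pi
    return shifted

rng = np.random.default_rng(0)
dihedrals = rng.vonmises(mu=2.0, kappa=4.0, size=(1000, 18))  # fake phi/psi data
X = shift_maximal_gap(dihedrals)
X -= X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pcs = X @ eigvecs[:, ::-1][:, :2]                 # two leading principal components
print(pcs.shape)                                  # (1000, 2) reaction coordinates
```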
Mai, J G; Gu, C; Lin, X Z; Li, T; Huang, W Q; Wang, H; Tan, X Y; Lin, H; Wang, Y M; Yang, Y Q; Jin, D D; Fan, S C
2017-03-01
Objective: To investigate the reduction and fixation of complex acetabular fractures using a three-dimensional (3D) printing technique and a personalized acetabular wing-plate via the lateral-rectus approach. Methods: From March to July 2016, 8 patients with complex acetabular fractures were surgically managed using 3D-printed personalized acetabular wing-plates via the lateral-rectus approach at the Department of Orthopedics, the Third Affiliated Hospital of Southern Medical University. There were 4 male patients and 4 female patients, with an average age of 57 years (ranging from 31 to 76 years). According to the Letournel-Judet classification, there were 2 anterior + posterior hemitransverse fractures and 6 both-column fractures, without posterior wall fracture or contralateral pelvic fracture. The CT data of the acetabular fractures were imported into the computer, and 3D printing was used to print the fracture models after virtual reduction by digital orthopedic techniques. The acetabular wing-plate was designed and printed in titanium. All fractures were treated via the lateral-rectus approach in a horizontal position after general anesthesia. The anterior column and quadrilateral surface fractures were fixed with the 3D-printed personalized acetabular wing-plate, and the posterior column fractures were reduced and fixed with antegrade lag screws under direct vision. Results: All 8 cases underwent the operation successfully. Postoperative X-ray and CT examination showed excellent or good reduction of the anterior and posterior columns, without any operative complications. Screw loosening in the osteoporotic pubic bone was found in only 1 case, a 75-year-old patient, at the 1-month follow-up; this patient received no treatment because of the absence of discomfort. According to the Matta radiological evaluation, the reduction of the acetabular fracture was rated as excellent in 3 cases, good in 4 cases and fair in 1 case. All patients were followed up for 3 to 6 months and all had achieved bone union. According to the modified Merle D'Aubigné and Postel scoring system, 5 cases were excellent, 2 cases were good, and 1 case was fair. Conclusions: Surgical management of complex acetabular fractures via the lateral-rectus approach combined with a 3D-printed personalized acetabular wing-plate can effectively improve reduction quality and fixation. It is accurate, personalized and minimally invasive.
Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...
Active Subspaces for Wind Plant Surrogate Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Ryan N; Quick, Julian; Dykes, Katherine L
Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
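As a hedged illustration of the active-subspace construction described above, the sketch below estimates the dominant direction from Monte Carlo samples of a gradient and fits a one-dimensional quadratic surrogate; the toy "plant power" function and its gradient are synthetic stand-ins rather than an actual wind-plant model.

```python
# A minimal sketch of the active-subspace idea: estimate the dominant
# direction(s) in parameter space from sampled gradients, then fit a cheap
# surrogate on the reduced coordinate. The "power" function below is a toy
# stand-in, not a real wind-plant model.
import numpy as np

rng = np.random.default_rng(0)
n_turbines = 25
w_true = rng.normal(size=n_turbines)              # hidden dominant direction

def power(a):                                     # toy plant-power model
    return np.cos(a @ w_true) + 0.01 * a.sum()

def grad_power(a):                                # its analytic gradient
    return -np.sin(a @ w_true) * w_true + 0.01

# Monte Carlo estimate of C = E[grad grad^T] over sampled induction factors
A = rng.uniform(0.2, 0.4, size=(500, n_turbines))
G = np.array([grad_power(a) for a in A])
C = G.T @ G / len(G)
eigvals, eigvecs = np.linalg.eigh(C)
w1 = eigvecs[:, -1]                               # leading active direction
print("eigenvalue gap:", eigvals[-1] / eigvals[-2])

# One-dimensional quadratic surrogate in the active variable y = a . w1
y = A @ w1
coef = np.polyfit(y, [power(a) for a in A], deg=2)
print("surrogate coefficients:", coef)
```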
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
Model-based reinforcement learning with dimension reduction.
Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi
2016-12-01
The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the presence of complex uncertainty and multiple physical scales in the models. To efficiently address this difficulty, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • A least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
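As a small illustration of the KL-expansion (PCA) reduction of the six reflective TM bands to a three-dimensional feature space discussed above, the sketch below uses synthetic pixel values; a real scene would supply an (n_pixels x 6) array of band reflectances.

```python
# A minimal sketch of the KL-expansion (PCA) reduction of six reflective
# TM bands to a 3-D feature space, as motivated above. Pixel values are
# synthetic placeholders for an actual scene.
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.normal(size=(10000, 6))              # six reflective TM bands

X = pixels - pixels.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigvals)[::-1]
explained = np.cumsum(eigvals[order]) / eigvals.sum()
print("cumulative eigenvalue fraction of first 3 components:", explained[:3])

features = X @ eigvecs[:, order[:3]]              # 3-D feature space
print(features.shape)                             # (10000, 3)
```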
A Combinatorial Approach to Detecting Gene-Gene and Gene-Environment Interactions in Family Studies
Lou, Xiang-Yang; Chen, Guo-Bo; Yan, Lei; Ma, Jennie Z.; Mangold, Jamie E.; Zhu, Jun; Elston, Robert C.; Li, Ming D.
2008-01-01
Widespread multifactor interactions present a significant challenge in determining risk factors of complex diseases. Several combinatorial approaches, such as the multifactor dimensionality reduction (MDR) method, have emerged as a promising tool for better detecting gene-gene (G × G) and gene-environment (G × E) interactions. We recently developed a general combinatorial approach, namely the generalized multifactor dimensionality reduction (GMDR) method, which can entertain both qualitative and quantitative phenotypes and allows for both discrete and continuous covariates to detect G × G and G × E interactions in a sample of unrelated individuals. In this article, we report the development of an algorithm that can be used to study G × G and G × E interactions for family-based designs, called pedigree-based GMDR (PGMDR). Compared to the available method, our proposed method has several major improvements, including allowing for covariate adjustments and being applicable to arbitrary phenotypes, arbitrary pedigree structures, and arbitrary patterns of missing marker genotypes. Our Monte Carlo simulations provide evidence that the PGMDR method is superior in performance to identify epistatic loci compared to the MDR-pedigree disequilibrium test (PDT). Finally, we applied our proposed approach to a genetic data set on tobacco dependence and found a significant interaction between two taste receptor genes (i.e., TAS2R16 and TAS2R38) in affecting nicotine dependence. PMID:18834969
Effects of band selection on endmember extraction for forestry applications
NASA Astrophysics Data System (ADS)
Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis
2014-10-01
In spectral unmixing theory, data reduction techniques play an important role as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches for dimensionality reduction. Feature extraction techniques reduce the dimensionality of the hyperspectral data by applying transforms to the hyperspectral data. Feature selection techniques retain the physical meaning of the data by selecting a set of bands from the input hyperspectral dataset, which mainly contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of the existing state-of-the-art dimensionality reduction methods set criteria on the spectral information, which is derived from the whole wavelength range, in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with the data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations at specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc., can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc., are considered as the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones. More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. Experiments comprise the application of well-known signal subspace estimation and endmember extraction methods to hyperspectral imagery covering a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using selected bands.
Zeng, Canjun; Xiao, Jidong; Wu, Zhanglin; Huang, Wenhua
2015-01-01
The aim of this study is to evaluate the efficacy and feasibility of three-dimensional printing (3D printing) assisted internal fixation of unstable pelvic fractures via a minimally invasive para-rectus abdominis approach. A total of 38 patients with unstable pelvic fractures were analyzed retrospectively from August 2012 to February 2014. All cases were treated operatively with internal fixation assisted by three-dimensional printing via the minimally invasive para-rectus abdominis approach. Both preoperative CT and three-dimensional reconstruction were performed. A pelvic model was created by 3D printing. Data including the best entry points, plate position, and direction and length of screws were obtained from a simulated operation based on the 3D-printed pelvic model. The diaplasis and internal fixation were performed via the minimally invasive para-rectus abdominis approach according to the optimized data in the real surgical procedure. Matta and Majeed scores were used to evaluate curative effects after the operation. According to the Matta standard, the outcome of the diaplasis was excellent and good in 97.37% of cases. The Majeed assessment showed 94.4% excellent and good. The imageological examination showed consistency between the internal fixation and the simulated operation. The mean operation time was 110 minutes, mean intraoperative blood loss 320 ml, and mean incision length 6.5 cm. All patients achieved clinical healing, with a mean healing time of 8 weeks. Three-dimensional printing assisted internal fixation of unstable pelvic fractures via a minimally invasive para-rectus abdominis approach is feasible and effective. This method has the advantages of minimal trauma, less bleeding, rapid healing and satisfactory reduction, and is worth promoting in clinical practice.
Nonlinear Analysis and Modeling of Tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1996-01-01
The objective of the study was to develop efficient modeling techniques and computational strategies for: (1) predicting the nonlinear response of tires subjected to inflation pressure, mechanical and thermal loads; (2) determining the footprint region, and analyzing the tire-pavement contact problem, including the effect of friction; and (3) determining the sensitivity of the tire response (displacements, stresses, strain energy, contact pressures and contact area) to variations in the different material and geometric parameters. Two computational strategies were developed. In the first strategy, the tire was modeled using either two-dimensional shear-flexible mixed shell finite elements or a quasi-three-dimensional solid model. The contact conditions were incorporated into the formulation by using a perturbed Lagrangian approach. A number of model reduction techniques were applied to substantially reduce the number of degrees of freedom used in describing the response outside the contact region. The second strategy exploited the axial symmetry of the undeformed tire, and used cylindrical coordinates in the development of three-dimensional elements for modeling each of the different parts of the tire cross section. Model reduction techniques were also used with this strategy.
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
Wing download reduction using vortex trapping plates
NASA Technical Reports Server (NTRS)
Light, Jeffrey S.; Stremel, Paul M.; Bilanin, Alan J.
1994-01-01
A download reduction technique using spanwise plates on the upper and lower wing surfaces has been examined. Experimental and analytical techniques were used to determine the download reduction obtained using this technique. Simple two-dimensional wind tunnel testing confirmed the validity of the technique for reducing two-dimensional airfoil drag. Computations using a two-dimensional Navier-Stokes analysis provided insight into the mechanism causing the drag reduction. Finally, the download reduction technique was tested using a rotor and wing to determine the benefits for a semispan configuration representative of a tilt rotor aircraft.
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.
2017-08-01
This paper proposes an improved input shaping scheme for efficient sway control of a nonlinear three dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. Using this approach, a higher payload sway reduction is obtained as the input shaper is designed based on a complete nonlinear model, as compared to the analytical input shaping scheme derived using a linear second order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are studied based on the maximum residual payload sway and Integrated Absolute Error (IAE) values, which indicate the total payload sway of the crane. In the experiments, the superiority of the proposed approach over the analytical design is shown by 30-50% reductions of the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reductions for a 3D overhead crane with friction as compared to commonly designed input shapers.
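For context, the analytical Zero Vibration shaper that serves as the baseline above can be written down directly from a linear pendulum model; the sketch below computes the two-impulse ZV shaper from an assumed sway frequency and damping ratio and convolves it with a step command (the numerical values are placeholders, not the crane parameters in the paper).

```python
# A minimal sketch of the analytical Zero Vibration (ZV) shaper: two
# impulses computed from the sway frequency and damping ratio of a linear
# pendulum model, then convolved with the command. The frequency/damping
# values below are placeholders, not the laboratory crane in the paper.
import numpy as np

def zv_shaper(wn, zeta):
    """Return impulse amplitudes and times for a ZV shaper."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)            # damped sway frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)
    times = np.array([0.0, np.pi / wd])
    return amps, times

wn, zeta = 2.5, 0.02                               # rad/s, dimensionless (assumed)
amps, times = zv_shaper(wn, zeta)
print(amps, times)

# Shape a step velocity command sampled at dt
dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)
command = np.ones_like(t)
shaper = np.zeros(int(times[-1] / dt) + 1)
for a, ti in zip(amps, times):
    shaper[int(round(ti / dt))] += a
shaped = np.convolve(command, shaper)[: len(t)]    # sway-reducing command
```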
Yao, Z; Peng, Y; Bi, J; Xie, C; Chen, X; Li, Y; Ye, X; Zhou, J
2016-03-01
Multidrug-resistant Pseudomonas aeruginosa (MDRPA) infections are major threats to healthcare-associated infection control and the intrinsic molecular mechanisms of MDRPA are also unclear. We examined 348 isolates of P. aeruginosa, including 188 MDRPA and 160 non-MDRPA, obtained from five tertiary-care hospitals in Guangzhou, China. Significant correlations were found between gene/enzyme carriage and increased rates of antimicrobial resistance (P < 0·01). gyrA mutation, OprD loss and metallo-β-lactamase (MBL) presence were identified as crucial molecular risk factors for MDRPA acquisition by a combination of univariate logistic regression and a multifactor dimensionality reduction approach. The MDRPA rate was also elevated with the increase in positive numbers of those three determinants (P < 0·001). Thus, gyrA mutation, OprD loss and MBL presence may serve as predictors for early screening of MDRPA infections in clinical settings.
Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S
2015-10-09
A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. With this approach, genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to propose an application of the methods of dimensionality reduction to GWS of carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal and independent component methods and provided the most accurate genomic breeding value estimates for most carcass traits in pigs.
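As a hedged illustration of one of the dimension-reduction regressions named above, the sketch below applies partial least squares to simulated genome-wide marker data and reports a cross-validated accuracy proxy; the marker counts, effect sizes, and number of latent components are arbitrary.

```python
# A minimal sketch of partial least squares regression on genome-wide
# selection style data: many correlated SNP markers, few individuals.
# All data are simulated; component count and effect sizes are arbitrary.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_animals, n_snps = 300, 5000
genotypes = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)
effects = np.zeros(n_snps)
effects[rng.choice(n_snps, 50, replace=False)] = rng.normal(size=50)
phenotype = genotypes @ effects + rng.normal(scale=2.0, size=n_animals)

pls = PLSRegression(n_components=10)
acc = cross_val_score(pls, genotypes, phenotype, cv=5, scoring="r2")
print("cross-validated R^2:", acc.mean())          # proxy for breeding value accuracy
```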
NASA Astrophysics Data System (ADS)
Joslin, R. D.
1991-04-01
The use of passive devices to obtain drag and noise reduction or transition delays in boundary layers is highly desirable. One such device that shows promise for hydrodynamic applications is the compliant coating. The present study extends the mechanical model to allow for three-dimensional waves. This study also looks at the effect of compliant walls on three-dimensional secondary instabilities. For the primary and secondary instability analysis, spectral and shooting approximations are used to obtain solutions of the governing equations and boundary conditions. The spectral approximation consists of local and global methods of solution, while the shooting approach is local. The global method is used to determine the discrete spectrum of eigenvalues without any initial guess. The local method requires a sufficiently accurate initial guess to converge to the eigenvalue. Eigenvectors may be obtained with either local approach. For the initial stage of this analysis, two- and three-dimensional primary instabilities propagating over compliant coatings are considered. Results over the compliant walls are compared with the rigid wall case. Three-dimensional instabilities are found to dominate transition over the compliant walls considered. However, transition delays are still obtained and compared with transition delay predictions for rigid walls. The angles of wave propagation are plotted against Reynolds number and frequency. Low frequency waves are found to be highly three-dimensional.
Wang, Zhili; Liu, Pan; Han, Jiuhui; Cheng, Chun; Ning, Shoucong; Hirata, Akihiko; Fujita, Takeshi; Chen, Mingwei
2017-10-20
Tuning surface structures by bottom-up synthesis has been demonstrated as an effective strategy to improve the catalytic performances of nanoparticle catalysts. Nevertheless, the surface modification of three-dimensional nanoporous metals, fabricated by a top-down dealloying approach, has not been achieved despite great efforts devoted to improving the catalytic performance of three-dimensional nanoporous catalysts. Here we report a surfactant-modified dealloying method to tailor the surface structure of nanoporous gold for amplified electrocatalysis toward methanol oxidation and oxygen reduction reactions. With the assistance of surfactants, {111} or {100} faceted internal surfaces of nanoporous gold can be realized in a controllable manner by optimizing dealloying conditions. The surface modified nanoporous gold exhibits significantly enhanced electrocatalytic activities in comparison with conventional nanoporous gold. This study paves the way to develop high-performance three-dimensional nanoporous catalysts with a tunable surface structure by top-down dealloying for efficient chemical and electrochemical reactions.
Huang, Xiaojing; Lauer, Kenneth; Clark, Jesse N.; ...
2015-03-13
We report an experimental ptychography measurement performed in fly-scan mode. With a visible-light laser source, we demonstrate a 5-fold reduction of data acquisition time. By including multiple mutually incoherent modes into the incident illumination, high quality images were successfully reconstructed from blurry diffraction patterns. Thus, this approach significantly increases the throughput of ptychography, especially for three-dimensional applications and the visualization of dynamic systems.
A Corresponding Lie Algebra of a Reductive homogeneous Group and Its Applications
NASA Astrophysics Data System (ADS)
Zhang, Yu-Feng; Wu, Li-Xin; Rui, Wen-Juan
2015-05-01
With the help of a Lie algebra of a reductive homogeneous space G/K, where G is a Lie group and K is a resulting isotropy group, we introduce a Lax pair for which an expanding (2+1)-dimensional integrable hierarchy is obtained by applying the binormial-residue representation (BRR) method, whose Hamiltonian structure is derived from the trace identity for deducing (2+1)-dimensional integrable hierarchies, which was proposed by Tu, et al. We further consider some reductions of the expanding integrable hierarchy obtained in the paper. The first reduction is exactly the (2+1)-dimensional AKNS hierarchy, while the second-type reduction reveals an integrable coupling of the (2+1)-dimensional AKNS equation (also called the Davey-Stewartson hierarchy), a kind of (2+1)-dimensional Schrödinger equation, which was once reobtained by Tu, Feng and Zhang. It is interesting that a new (2+1)-dimensional integrable nonlinear coupled equation is generated from the reduction of part of the (2+1)-dimensional integrable coupling, which is further reduced to the standard (2+1)-dimensional diffusion equation along with a parameter. In addition, the well-known (1+1)-dimensional AKNS hierarchy and the (1+1)-dimensional nonlinear Schrödinger equation are all special cases of the (2+1)-dimensional expanding integrable hierarchy. Finally, we discuss a few discrete difference equations of the diffusion equation whose stabilities are analyzed by making use of the von Neumann condition and the Fourier method. Some numerical solutions of a special stationary initial value problem of the (2+1)-dimensional diffusion equation are obtained and the resulting convergence and estimation formula are investigated. Supported by the Innovation Team of Jiangsu Province hosted by China University of Mining and Technology (2014), the National Natural Science Foundation of China under Grant No. 11371361, the Fundamental Research Funds for the Central Universities (2013XK03), and the Natural Science Foundation of Shandong Province under Grant No. ZR2013AL016
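The closing part of the record discusses explicit difference schemes for the (2+1)-dimensional diffusion equation and their von Neumann stability; the sketch below illustrates a standard forward-time centred-space scheme with the stability bound checked up front (grid sizes, diffusivity, and initial data are illustrative only, and periodic boundaries are assumed).

```python
# A minimal sketch of an explicit (FTCS) difference scheme for the
# (2+1)-dimensional diffusion equation u_t = nu (u_xx + u_yy), with the
# von Neumann stability bound nu*dt*(1/dx^2 + 1/dy^2) <= 1/2 checked up
# front. Grid, diffusivity, and initial data are illustrative only.
import numpy as np

nu, dx, dy = 1.0, 0.05, 0.05
dt = 0.4 * 0.5 / (nu * (1 / dx**2 + 1 / dy**2))    # safely inside the bound
assert nu * dt * (1 / dx**2 + 1 / dy**2) <= 0.5    # von Neumann condition

nx = ny = 64
x = np.linspace(0, (nx - 1) * dx, nx)
y = np.linspace(0, (ny - 1) * dy, ny)
u = np.exp(-((x[:, None] - x.mean())**2 + (y[None, :] - y.mean())**2) / 0.05)

for _ in range(200):
    lap = ((np.roll(u, 1, 0) - 2 * u + np.roll(u, -1, 0)) / dx**2 +
           (np.roll(u, 1, 1) - 2 * u + np.roll(u, -1, 1)) / dy**2)
    u = u + nu * dt * lap                          # periodic boundaries assumed
print(u.max())                                     # peak decays, as expected
```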
Modeling and control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.
1988-01-01
This monograph presents integrated modeling and controller design methods for flexible structures. The controllers, or compensators, developed are optimal in the linear-quadratic-Gaussian sense. The performance objectives, sensor and actuator locations and external disturbances influence both the construction of the model and the design of the finite dimensional compensator. The modeling and controller design procedures are carried out in parallel to ensure compatibility of these two aspects of the design problem. Model reduction techniques are introduced to keep both the model order and the controller order as small as possible. A linear distributed, or infinite dimensional, model is the theoretical basis for most of the text, but finite dimensional models arising from both lumped-mass and finite element approximations also play an important role. A central purpose of the approach here is to approximate an optimal infinite dimensional controller with an implementable finite dimensional compensator. Both convergence theory and numerical approximation methods are given. Simple examples are used to illustrate the theory.
Computed tomography-guided tissue engineering of upper airway cartilage.
Brown, Bryan N; Siebenlist, Nicholas J; Cheetham, Jonathan; Ducharme, Norm G; Rawlinson, Jeremy J; Bonassar, Lawrence J
2014-06-01
Normal laryngeal function has a large impact on quality of life, and dysfunction can be life threatening. In general, airway obstructions arise from a reduction in neuromuscular function or a decrease in mechanical stiffness of the structures of the upper airway. These reductions decrease the ability of the airway to resist inspiratory or expiratory pressures, causing laryngeal collapse. We propose to restore airway patency through methods that replace damaged tissue and improve the stiffness of airway structures. A number of recent studies have utilized image-guided approaches to create cell-seeded constructs that reproduce the shape and size of the tissue of interest with high geometric fidelity. The objective of the present study was to establish a tissue engineering approach to the creation of viable constructs that approximate the shape and size of equine airway structures, in particular the epiglottis. Computed tomography images were used to create three-dimensional computer models of the cartilaginous structures of the larynx. Anatomically shaped injection molds were created from the three-dimensional models and were seeded with bovine auricular chondrocytes that were suspended within alginate before static culture. Constructs were then cultured for approximately 4 weeks post-seeding and evaluated for biochemical content, biomechanical properties, and histologic architecture. Results showed that the three-dimensional molded constructs had the approximate size and shape of the equine epiglottis and that it is possible to seed such constructs while maintaining 75%+ cell viability. Extracellular matrix content was observed to increase with time in culture and was accompanied by an increase in the mechanical stiffness of the construct. If successful, such an approach may represent a significant improvement on the currently available treatments for damaged airway cartilage and may provide clinical options for replacement of damaged tissue during treatment of obstructive airway disease.
2017-01-01
Background: The skin tightening effects induced by non-insulated microneedle radiofrequency have proved long-lasting. Our previous three-dimensional volumetric assessment showed significant facial tightening for up to six months. However, nasal and peri-oral tightening effects lasted longer. The objective of this study was to investigate the distribution of the long-term volumetric reduction in facial area induced by a single fractional non-insulated microneedle radiofrequency treatment. Methods: Fifteen Asian patients underwent full facial skin tightening using a sharply tapered non-insulated microneedle radiofrequency applicator with a novel fractionated pulse mode. Three-dimensional volumetric assessments were performed at six and 12 months post-treatment. Patients rated their satisfaction using a 5-point scale at each follow up. Results: Objective assessments with superimposed three-dimensional color images showed significant volumetric reduction in the nasal and peri-oral areas at 12 months post-treatment in all patients. Median volumetric reductions at six and 12 months post-treatment were 13.1 and 12.3ml, respectively. All of the patients were satisfied with their results 12 months post-treatment. Side effects were not observed. Conclusions: This single fractional NIMNRF treatment provided long-lasting nasal and peri-oral tightening as shown via 3D volumetric assessment. Moreover, NIMNRF produced minimal complications, downtime, and few side effects. This approach provides safe and effective treatment of skin tightening. PMID:28367261
Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias
2018-05-16
There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
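As an approximate illustration of the visualization step described above, the sketch below reduces stand-in CNN feature vectors to two dimensions with t-SNE and tallies how randomly sampled patches distribute over class regions; it is not the authors' workflow, and the nearest-centroid discretization is a simplifying assumption.

```python
# A minimal sketch of visualizing deep-feature organization with t-SNE and
# classifying a "test image" by where its sampled patches land. The feature
# vectors are random stand-ins for penultimate-layer CNN activations, and
# the nearest-centroid discretization is an illustrative simplification.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 200, 512, 4
features = np.vstack([rng.normal(loc=3 * c, size=(n_per_class, n_features))
                      for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), n_per_class)

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)

# Discretize the plane per class (here: nearest class centroid in t-SNE space)
centroids = np.array([embedding[labels == c].mean(axis=0)
                      for c in range(n_classes)])
sampled_patches = embedding[rng.choice(len(embedding), 50, replace=False)]
votes = np.argmin(np.linalg.norm(sampled_patches[:, None] - centroids[None],
                                 axis=-1), axis=1)
print(np.bincount(votes, minlength=n_classes))     # class distribution of patches
```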
Large-scale Granger causality analysis on resting-state functional MRI
NASA Astrophysics Data System (ADS)
D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel
2016-03-01
We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as accomplished by the Louvain method. We demonstrate the effectiveness of our approach in recovering the motor and visual cortices from resting-state human brain fMRI data and compare the result with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying a strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
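As a rough sketch of the central idea (dimensionality reduction folded into predictive time-series modeling), the snippet below compresses the conditioning set with PCA and compares the prediction error of a target series with and without the candidate source's past; it is an illustration of the Granger principle, not the published lsGC implementation, and all names (`X`, `granger_influence`) are placeholders.

```python
# Illustrative sketch of dimensionality-reduced Granger causality: the "rest of
# the brain" is compressed with PCA before fitting autoregressive predictors.
import numpy as np
from sklearn.decomposition import PCA

def granger_influence(X, src, tgt, order=2, n_comp=5):
    """Log variance ratio of restricted vs. full AR prediction of X[:, tgt]."""
    T, _ = X.shape
    others = np.delete(X, [src, tgt], axis=1)
    Z = PCA(n_components=n_comp).fit_transform(others)   # reduced conditioning set

    def residual_var(design_cols):
        rows, y = [], []
        for t in range(order, T):
            rows.append(np.concatenate([c[t - order:t] for c in design_cols]))
            y.append(X[t, tgt])
        A, y = np.asarray(rows), np.asarray(y)
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.var(y - A @ beta)

    restricted = residual_var([X[:, tgt]] + list(Z.T))
    full = residual_var([X[:, tgt], X[:, src]] + list(Z.T))
    return np.log(restricted / full)   # > 0 suggests directed influence src -> tgt

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))                    # stand-in for voxel time series
X[1:, 3] += 0.8 * X[:-1, 7]                       # inject a lagged influence 7 -> 3
print(granger_influence(X, src=7, tgt=3))
```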
Graph theory approach to the eigenvalue problem of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.; Bainum, P. M.
1981-01-01
Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimension. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphical interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced, and more accurate solutions become possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
Emergent behaviors of the Schrödinger-Lohe model on cooperative-competitive networks
NASA Astrophysics Data System (ADS)
Huh, Hyungjin; Ha, Seung-Yeal; Kim, Dohyun
2017-12-01
We present several sufficient frameworks leading to the emergent behaviors of the coupled Schrödinger-Lohe (S-L) model under the same one-body external potential on cooperative-competitive networks. The S-L model was first introduced as a possible phenomenological model exhibiting quantum synchronization, and its emergent dynamics on all-to-all cooperative networks has been treated via two distinct approaches: the Lyapunov functional approach and the finite-dimensional reduction based on pairwise correlations. In this paper, we further generalize the finite-dimensional dynamical systems approach for pairwise correlation functions on cooperative-competitive networks and provide several sufficient frameworks leading to collective exponential synchronization. For small systems consisting of three and four quantum subsystems, we also show that the system for pairwise correlations can be reduced to the Lotka-Volterra model with cooperative and competitive interactions, in which many interesting dynamical patterns appear, e.g., closed orbits and limit cycles.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Ezzedine, Souheil; Rubin, Yoram
2012-02-01
The significance of conditioning predictions of environmental performance metrics (EPMs) on hydrogeological data in heterogeneous porous media is addressed. Conditioning EPMs on available data reduces uncertainty and increases the reliability of model predictions. We present a rational and concise approach to investigate the impact of conditioning EPMs on data as a function of the location of the environmentally sensitive target receptor, data types, and spacing between measurements. We illustrate how the concept of comparative information yield curves introduced in de Barros et al. [de Barros FPJ, Rubin Y, Maxwell R. The concept of comparative information yield curves and its application to risk-based site characterization. Water Resour Res 2009;45:W06401. doi:10.1029/2008WR007324] could be used to assess site characterization needs as a function of flow and transport dimensionality and EPMs. For a given EPM, we show how alternative uncertainty reduction metrics yield distinct gains of information from a variety of sampling schemes. Our results show that uncertainty reduction is EPM dependent: a reduction in the uncertainty of one EPM (e.g., travel time) does not necessarily imply a reduction in the uncertainty of an alternative EPM (e.g., human health risk). The results show how the position of the environmental target, flow dimensionality, and the choice of the uncertainty reduction metric can be used to assist in field sampling campaigns.
Zhao, Mingbo; Zhang, Zhao; Chow, Tommy W S; Li, Bing
2014-07-01
Dealing with high-dimensional data has always been a major problem in pattern recognition and machine learning research, and Linear Discriminant Analysis (LDA) is one of the most popular methods for dimension reduction. However, it only uses labeled samples while neglecting unlabeled samples, which are abundant and can be easily obtained in the real world. In this paper, we propose a new dimension reduction method, called "SL-LDA", that uses unlabeled samples to enhance the performance of LDA. The new method first propagates label information from the labeled set to the unlabeled set via a label propagation process, where the predicted labels of unlabeled samples, called "soft labels", can be obtained. It then incorporates the soft labels into the construction of scatter matrices to find a transformation matrix for dimension reduction. In this way, the proposed method can preserve more discriminative information, which is preferable when solving the classification problem. We further propose an efficient approach for solving SL-LDA under a least squares framework, and a flexible method of SL-LDA (FSL-LDA) to better cope with datasets sampled from a nonlinear manifold. Extensive simulations are carried out on several datasets, and the results show the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
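The soft-label idea can be sketched in a few lines: propagate labels to the unlabeled samples, then weight the LDA scatter matrices by the resulting class probabilities. The snippet below is a simplified stand-in for SL-LDA (it is not the authors' least-squares or FSL-LDA formulations) and uses scikit-learn's LabelSpreading purely for the propagation step.

```python
# Rough sketch of soft-label LDA: propagate labels, then weight the scatter
# matrices with the resulting class probabilities ("soft labels").
import numpy as np
from scipy.linalg import eigh
from sklearn.semi_supervised import LabelSpreading
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=5, random_state=0)
y_partial = y.copy()
y_partial[100:] = -1                      # unlabeled samples are marked -1

soft = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
P = soft.label_distributions_             # n_samples x n_classes soft labels

mean_total = X.mean(axis=0)
Sb = np.zeros((X.shape[1], X.shape[1]))
Sw = np.zeros_like(Sb)
for c in range(P.shape[1]):
    w = P[:, c]
    mu_c = (w[:, None] * X).sum(0) / w.sum()          # soft class mean
    Sb += w.sum() * np.outer(mu_c - mean_total, mu_c - mean_total)
    D = X - mu_c
    Sw += (w[:, None] * D).T @ D                       # soft within-class scatter

# Leading generalized eigenvectors give the reduced-dimension projection.
evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))
W = evecs[:, ::-1][:, :2]
X_reduced = X @ W
print(X_reduced.shape)
```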
Multigrid Approach to Incompressible Viscous Cavity Flows
NASA Technical Reports Server (NTRS)
Wood, William A.
1996-01-01
Two-dimensional incompressible viscous driven-cavity flows are computed for Reynolds numbers in the range 100-20,000 using a loosely coupled, implicit, second-order central-difference scheme. Mesh sequencing and three-level V-cycle multigrid error smoothing are incorporated into the symmetric Gauss-Seidel time-integration algorithm. Parametric studies of the numerical parameters are performed, achieving reductions in solution times of more than 60 percent with the full multigrid approach. Details of the circulation patterns are investigated in cavities of 2-to-1, 1-to-1, and 1-to-2 depth-to-width ratios.
Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM
NASA Astrophysics Data System (ADS)
Cai, Fei; Ugai, Keizo
2003-06-01
This paper reports the limitations of the conventional Bishop's simplified method in calculating the safety factor of slopes stabilized with anchors, and proposes a new approach for incorporating the reinforcing effect of anchors into the safety factor. The reinforcing effect of anchors can be explained as an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), in which soil-anchor interactions were simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors and to verify the reinforcing mechanism of anchors. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method for various orientations, positions, and spacings of anchors, and shear strengths of soil-grouted body interfaces. For the safety factor, the proposed approach agreed better with SSRFEM than the conventional approach. The additional shearing resistance can explain the influence of the orientation, position, and spacing of anchors, and of the shear strength of soil-grouted body interfaces, on the safety factor of slopes stabilized with anchors.
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves to individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a significant challenge to extract optimal features for improving classification while simultaneously decreasing the feature dimension. Kernel Marginal Fisher Analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To extract nonlinear features directly from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thus obtained for better classification and are finally fed into a simple K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms other conventional approaches.
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Youngsoo; Carlberg, Kevin Thomas
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
Tao, Chenyang; Nichols, Thomas E.; Hua, Xue; Ching, Christopher R.K.; Rolls, Edmund T.; Thompson, Paul M.; Feng, Jianfeng
2017-01-01
We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high-dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high-dimensional tensor fields. GRRLF identifies from the structure in the data the effective dimensionality of the data, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions can be efficiently computed. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. PMID:27666385
Integrated Model Reduction and Control of Aircraft with Flexible Wings
NASA Technical Reports Server (NTRS)
Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.
2013-01-01
This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite-dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, which is designed based on the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated against the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.
[New techniques in the operative treatment of calcaneal fractures].
Rammelt, S; Amlang, M; Sands, A K; Swords, M
2016-03-01
The ideal treatment of displaced intra-articular calcaneal fractures remains controversial. Because of the variable fracture patterns and the vulnerable soft tissue coverage, an individual treatment concept is advisable. In order to minimize the wound edge necrosis associated with extended lateral approaches, selected fractures may be treated percutaneously or in a less invasive manner while controlling joint reduction via a sinus tarsi approach. Fixation in these cases is achieved with screws, intramedullary locking nails, or modified plates that are slid in subcutaneously. A thorough knowledge of the three-dimensional calcaneal anatomy and of open reduction maneuvers is a prerequisite for good results with less invasive techniques. Early functional follow-up treatment aims at early rehabilitation independent of the kind of fixation. Peripheral fractures of the talus and calcaneus frequently result from subluxation and dislocation at the subtalar and Chopart joints. They are still regularly overlooked and result in painful arthritis if left untreated. If an exact anatomical reduction of these intra-articular fractures is impossible, resection of small fragments is indicated.
Liu, Yang; Chiaromonte, Francesca; Li, Bing
2017-06-01
In many scientific and engineering fields, advanced experimental and computing technologies are producing data that are not just high dimensional, but also internally structured. For instance, statistical units may have heterogeneous origins from distinct studies or subpopulations, and features may be naturally partitioned based on experimental platforms generating them, or on information available about their roles in a given phenomenon. In a regression analysis, exploiting this known structure in the predictor dimension reduction stage that precedes modeling can be an effective way to integrate diverse data. To pursue this, we propose a novel Sufficient Dimension Reduction (SDR) approach that we call structured Ordinary Least Squares (sOLS). This combines ideas from existing SDR literature to merge reductions performed within groups of samples and/or predictors. In particular, it leads to a version of OLS for grouped predictors that requires far less computation than recently proposed groupwise SDR procedures, and provides an informal yet effective variable selection tool in these settings. We demonstrate the performance of sOLS by simulation and present a first application to genomic data. The R package "sSDR," publicly available on CRAN, includes all procedures necessary to implement the sOLS approach. © 2016, The International Biometric Society.
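A heavily simplified, hypothetical version of the groupwise idea is sketched below: an OLS direction is computed within each predictor block and the per-group reductions are then merged for a final regression. This is only an illustration of structured dimension reduction, not the sOLS estimator or the sSDR R package.

```python
# Naive illustration of groupwise OLS-style dimension reduction: one direction
# per predictor block, then a final regression on the merged reduced predictors.
import numpy as np

rng = np.random.default_rng(0)
n = 500
groups = [np.arange(0, 10), np.arange(10, 25), np.arange(25, 40)]   # predictor blocks
X = rng.normal(size=(n, 40))
y = X[:, 2] - 0.5 * X[:, 17] + 0.2 * X[:, 30] + 0.1 * rng.normal(size=n)

reduced_cols = []
for g in groups:
    Xg = X[:, g] - X[:, g].mean(axis=0)
    # OLS direction for this block: Cov(Xg)^{-1} Cov(Xg, y)
    beta_g = np.linalg.solve(np.cov(Xg, rowvar=False), Xg.T @ (y - y.mean()) / n)
    reduced_cols.append(Xg @ beta_g)

X_reduced = np.column_stack(reduced_cols)      # one coordinate per group
final_beta, *_ = np.linalg.lstsq(X_reduced, y - y.mean(), rcond=None)
print(X_reduced.shape, final_beta)
```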
Gravity from entanglement and RG flow in a top-down approach
NASA Astrophysics Data System (ADS)
Kwon, O.-Kab; Jang, Dongmin; Kim, Yoonbai; Tolla, D. D.
2018-05-01
The duality between a d-dimensional conformal field theory with relevant deformation and a gravity theory on an asymptotically AdS d+1 geometry, has become a suitable tool in the investigation of the emergence of gravity from quantum entanglement in field theory. Recently, we have tested the duality between the mass-deformed ABJM theory and asymptotically AdS4 gravity theory, which is obtained from the KK reduction of the 11-dimensional supergravity on the LLM geometry. In this paper, we extend the KK reduction procedure beyond the linear order and establish non-trivial KK maps between 4-dimensional fields and 11-dimensional fluctuations. We rely on this gauge/gravity duality to calculate the entanglement entropy by using the Ryu-Takayanagi holographic formula and the path integral method developed by Faulkner. We show that the entanglement entropies obtained using these two methods agree when the asymptotically AdS4 metric satisfies the linearized Einstein equation with nonvanishing energy-momentum tensor for two scalar fields. These scalar fields encode the information of the relevant deformation of the ABJM theory. This confirms that the asymptotic limit of LLM geometry is the emergent gravity of the quantum entanglement in the mass-deformed ABJM theory with a small mass parameter. We also comment on the issue of the relative entropy and the Fisher information in our setup.
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
Spillover, nonlinearity, and flexible structures
NASA Technical Reports Server (NTRS)
Bass, Robert W.; Zes, Dean
1991-01-01
Many systems whose evolution in time is governed by Partial Differential Equations (PDEs) are linearized around a known equilibrium before Computer Aided Control Engineering (CACE) is considered. In this case, there are infinitely many independent vibrational modes, and it is intuitively evident on physical grounds that infinitely many actuators would be needed in order to control all modes. A more precise, general formulation of this grave difficulty (the spillover problem) is due to A.V. Balakrishnan. A possible route to circumvention of this difficulty lies in leaving the PDE in its original nonlinear form, and adding the essentially finite-dimensional control action prior to linearization. One possibly applicable technique is the rigorous Liapunov-Schmidt reduction of singular infinite-dimensional implicit function problems to finite-dimensional implicit function problems. Omitting details of Banach space rigor, the formalities of this approach are given.
Analysis of internal ablation for the thermal control of aerospace vehicles
NASA Technical Reports Server (NTRS)
Camberos, Jose A.; Roberts, Leonard
1989-01-01
A new method of thermal protection for transatmospheric vehicles is introduced. The method involves the combination of radiation, ablation and transpiration cooling. By placing an ablating material behind a fixed-shape, porous outer shield, the effectiveness of transpiration cooling is made possible while retaining the simplicity of a passive mechanism. A simplified one-dimensional approach is used to derive the governing equations. Reduction of these equations to non-dimensional form yields two parameters which characterize the thermal protection effectiveness of the shield and ablator combination for a given trajectory. The non-dimensional equations are solved numerically for a sample trajectory corresponding to glide re-entry. Four typical ablators are tested and compared with results obtained by using the thermal properties of water. For the present level of analysis, the numerical computations adequately support the analytical model.
Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof
2018-01-01
We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.
Kireeva, N; Baskin, I I; Gaspar, H A; Horvath, D; Marcou, G; Varnek, A
2012-04-01
Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling, and database comparison is evaluated using subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches such as Principal Component Analysis, Sammon Mapping, or Self-Organizing Maps, the great advantage of GTMs is that they provide data probability distribution functions (PDF), both in the high-dimensional space defined by molecular descriptors and in the 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method of global comparison of chemical libraries. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
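The overlap measure mentioned at the end has a convenient closed form for the Gaussian building blocks. The sketch below computes the Bhattacharyya coefficient between two multivariate Gaussians; for GTM-style mixtures one would combine such terms over pairs of components, and this is an assumption-laden illustration rather than the paper's kernel implementation.

```python
# Closed-form Bhattacharyya coefficient between two multivariate Gaussians,
# the building block of a mixture-overlap comparison (illustration only).
import numpy as np

def bhattacharyya_coefficient(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return np.exp(-(term1 + term2))     # 1 = identical distributions, 0 = disjoint

mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.array([1.0, 0.0]), 0.5 * np.eye(2)
print(bhattacharyya_coefficient(mu_a, cov_a, mu_b, cov_b))
```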
Transition Manifolds of Complex Metastable Systems
NASA Astrophysics Data System (ADS)
Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof
2018-04-01
We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.
NASA Astrophysics Data System (ADS)
Crowell, Andrew Rippetoe
This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.
Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models
Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam
2016-01-01
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
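The two outputs compared in the scaling study, shared dimensionality and percent shared variance, can be illustrated with ordinary factor analysis. The sketch below uses simulated spike counts and scikit-learn's FactorAnalysis; the 95% criterion and the simulated data are placeholders, not the authors' analysis settings.

```python
# Sketch of two dimensionality-reduction outputs (shared dimensionality and
# percent shared variance) from factor analysis on simulated spike counts.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_trials, n_neurons, n_latent = 400, 50, 3
latents = rng.normal(size=(n_trials, n_latent))
loading = rng.normal(size=(n_latent, n_neurons))
counts = latents @ loading + rng.normal(scale=1.0, size=(n_trials, n_neurons))

fa = FactorAnalysis(n_components=10).fit(counts)
shared_cov = fa.components_.T @ fa.components_        # shared covariance, neurons x neurons

# Shared dimensionality: eigenvalues needed to capture 95% of the shared variance.
evals = np.sort(np.linalg.eigvalsh(shared_cov))[::-1]
d_shared = int(np.searchsorted(np.cumsum(evals) / evals.sum(), 0.95) + 1)

# Percent shared variance: shared variance relative to total, per neuron.
pct_shared = np.diag(shared_cov) / (np.diag(shared_cov) + fa.noise_variance_)
print("shared dimensionality:", d_shared,
      "mean % shared variance:", 100 * pct_shared.mean())
```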
Systematic dimensionality reduction for continuous-time quantum walks of interacting fermions
NASA Astrophysics Data System (ADS)
Izaac, J. A.; Wang, J. B.
2017-09-01
To extend the continuous-time quantum walk (CTQW) to simulate P distinguishable particles on a graph G composed of N vertices, the Hamiltonian of the system is expanded to act on an NP-dimensional Hilbert space, in effect, simulating the multiparticle CTQW on graph G via a single-particle CTQW propagating on the Cartesian graph product G□P. The properties of the Cartesian graph product have been well studied, and classical simulation of multiparticle CTQWs are common in the literature. However, the above approach is generally applied as is when simulating indistinguishable particles, with the particle statistics then applied to the propagated NP state vector to determine walker probabilities. We address the following question: How can we modify the underlying graph structure G□P in order to simulate multiple interacting fermionic CTQWs with a reduction in the size of the state space? In this paper, we present an algorithm for systematically removing "redundant" and forbidden quantum states from consideration, which provides a significant reduction in the effective dimension of the Hilbert space of the fermionic CTQW. As a result, as the number of interacting fermions in the system increases, the classical computational resources required no longer increases exponentially for fixed N .
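The bookkeeping behind the reduction can be illustrated without the physics: enumerate one representative configuration per set of occupied vertices (removing permutation-redundant and doubly occupied states) and compare the resulting dimension C(N, P) with the naive N^P Hilbert space. The sketch below shows only this state-space reduction; constructing the reduced Hamiltonian with the correct antisymmetric signs is omitted.

```python
# Bookkeeping sketch only: enumerate non-redundant fermionic configurations and
# compare the reduced dimension C(N, P) with the naive N**P state space.
from itertools import combinations
from math import comb

N, P = 12, 3                                          # vertices, fermions
reduced_basis = list(combinations(range(N), P))       # allowed, ordered configurations
index_of = {state: k for k, state in enumerate(reduced_basis)}

print("naive dimension N**P :", N ** P)
print("reduced dimension    :", comb(N, P), "==", len(reduced_basis))
print("index of state (0,4,7):", index_of[(0, 4, 7)])
```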
Yunus, Rozan Mohamad; Endo, Hiroko; Tsuji, Masaharu; Ago, Hiroki
2015-10-14
Heterostructures of two-dimensional (2D) layered materials have attracted growing interest due to their unique properties and possible applications in electronics, photonics, and energy. Reduction of the dimensionality from 2D to one-dimensional (1D), such as graphene nanoribbons (GNRs), is also interesting due to the electron confinement effect and unique edge effects. Here, we demonstrate a bottom-up approach to grow vertical heterostructures of MoS2 and GNRs by a two-step chemical vapor deposition (CVD) method. Single-layer GNRs were first grown by ambient pressure CVD on an epitaxial Cu(100) film, followed by the second CVD process to grow MoS2 over the GNRs. The MoS2 layer was found to grow preferentially on the GNR surface, while the coverage could be further tuned by adjusting the growth conditions. The MoS2/GNR nanostructures show clear photosensitivity to visible light with an optical response much higher than that of a 2D MoS2/graphene heterostructure. The ability to grow a novel 1D heterostructure of layered materials by a bottom-up CVD approach will open up a new avenue to expand the dimensionality of the material synthesis and applications.
Higher order first integrals, Killing tensors and Killing-Maxwell system
NASA Astrophysics Data System (ADS)
Visinescu, Mihai
2012-02-01
Higher order first integrals of the motion of particles in the presence of external gauge fields are investigated in a covariant Hamiltonian approach. The special role of Stackel-Killing and Killing-Yano tensors is pointed out. A condition on the electromagnetic field to maintain the hidden symmetry of the system is stated. A concrete realization of this condition is given by the Killing-Maxwell system and exemplified with the Kerr metric. Another application of the gauge covariant approach is provided by a non-relativistic point charge in the field of a Dirac monopole. The corresponding dynamical system, possessing a Kepler-type symmetry, is associated with the Taub-NUT metric using a reduction procedure for symplectic manifolds with symmetries. The reverse of the reduction procedure can be used to investigate higher-dimensional spacetimes admitting Killing tensors.
Multicomponent integrable reductions in the Kadomtsev-Petviashvilli hierarchy
NASA Astrophysics Data System (ADS)
Sidorenko, Jurij; Strampp, Walter
1993-04-01
New types of reductions of the Kadomtsev-Petviashvili (KP) hierarchy are considered on the basis of Sato's approach. Within this approach the KP hierarchy is represented by infinite sets of equations for potentials $u_2, u_3, \ldots$ of pseudodifferential operators and their eigenfunctions $\Psi$ and adjoint eigenfunctions $\Psi^*$. The KP hierarchy was studied under constraints of the following type: $\left(\sum_{i=1}^{n} \Psi_i \Psi_i^*\right)_x = S_{\kappa,x}$, where $S_{\kappa,x}$ are symmetries for the KP equation and $\Psi_i(\lambda_i)$, $\Psi_i^*(\lambda_i)$ are eigenfunctions with eigenvalue $\lambda_i$. It is shown that for the first three cases $\kappa = 2, 3, 4$ these constraints give rise to hierarchies of (1+1)-dimensional commuting flows for the variables $u_2, \Psi_1, \ldots, \Psi_n, \Psi_1^*, \ldots, \Psi_n^*$. Bi-Hamiltonian structures for the new hierarchies are presented.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
Epistasis or gene-gene interaction is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates like age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-the-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smirnov, A. G., E-mail: smirnov@lpi.ru
2015-12-15
We develop a general technique for finding self-adjoint extensions of a symmetric operator that respect a given set of its symmetries. Problems of this type naturally arise when considering two- and three-dimensional Schrödinger operators with singular potentials. The approach is based on constructing a unitary transformation diagonalizing the symmetries and reducing the initial operator to the direct integral of a suitable family of partial operators. We prove that symmetry preserving self-adjoint extensions of the initial operator are in a one-to-one correspondence with measurable families of self-adjoint extensions of partial operators obtained by reduction. The general scheme is applied to the three-dimensional Aharonov-Bohm Hamiltonian describing the electron in the magnetic field of an infinitely thin solenoid. We construct all self-adjoint extensions of this Hamiltonian, invariant under translations along the solenoid and rotations around it, and explicitly find their eigenfunction expansions.
Automated detection of lung nodules with three-dimensional convolutional neural networks
NASA Astrophysics Data System (ADS)
Pérez, Gustavo; Arbeláez, Pablo
2017-11-01
Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of the pre-processing of a patient's CT with filtering and lung extraction from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective in producing precise candidates, with a recall of 99.6%. In addition, the false positive reduction stage successfully classifies candidates and increases precision by a factor of 7.000.
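For orientation, a minimal 3D CNN for classifying candidate sub-volumes might look like the PyTorch sketch below; the architecture, input size, and two-class output are placeholders and do not reproduce the network described in the paper.

```python
# Illustrative 3D CNN for classifying candidate sub-volumes as nodule vs.
# non-nodule (false-positive reduction); architecture is a placeholder.
import torch
import torch.nn as nn

class CandidateNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # nodule / not-nodule logits
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32, 32)
        return self.classifier(self.features(x))

model = CandidateNet3D()
dummy_batch = torch.randn(4, 1, 32, 32, 32)        # four candidate cubes
print(model(dummy_batch).shape)                    # -> torch.Size([4, 2])
```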
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to its integrating nature of the system reduction.
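A toy version of the community-based reduction is sketched below: point vortices are connected by edges weighted with a crude interaction strength (|Γ_i Γ_j| divided by separation, standing in for induced-velocity magnitude), communities are found with networkx's greedy modularity method, and each community is collapsed to a circulation-weighted centroid. The weighting and the community detector are assumptions, not the paper's exact network construction.

```python
# Toy sketch of community-based reduction of a point-vortex interaction graph.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
pos = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in ((0, 0), (3, 0), (1.5, 3))])
gamma = rng.uniform(0.5, 1.5, size=len(pos))       # circulations

G = nx.Graph()
for i in range(len(pos)):
    for j in range(i + 1, len(pos)):
        w = abs(gamma[i] * gamma[j]) / np.linalg.norm(pos[i] - pos[j])
        G.add_edge(i, j, weight=w)                 # crude interaction strength

communities = greedy_modularity_communities(G, weight="weight")
for k, comm in enumerate(communities):
    idx = np.array(sorted(comm))
    centroid = (gamma[idx, None] * pos[idx]).sum(0) / gamma[idx].sum()
    print(f"community {k}: {len(idx)} vortices, centroid {centroid.round(2)}")
```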
Zheng, Weili; Ackley, Elena S; Martínez-Ramón, Manel; Posse, Stefan
2013-02-01
In previous works, boosting aggregation of classifier outputs from discrete brain areas has been demonstrated to reduce dimensionality and improve the robustness and accuracy of functional magnetic resonance imaging (fMRI) classification. However, dimensionality reduction and classification of mixed activation patterns of multiple classes remain challenging. In the present study, the goals were (a) to reduce dimensionality by combining feature reduction at the voxel level and backward elimination of optimally aggregated classifiers at the region level, (b) to compare region selection for spatially aggregated classification using boosting and partial least squares regression methods and (c) to resolve mixed activation patterns using probabilistic prediction of individual tasks. Brain activation maps from interleaved visual, motor, auditory and cognitive tasks were segmented into 144 functional regions. Feature selection reduced the number of feature voxels by more than 50%, leaving 95 regions. The two aggregation approaches further reduced the number of regions to 30, resulting in more than 75% reduction of classification time and misclassification rates of less than 3%. Boosting and partial least squares (PLS) were compared to select the most discriminative and the most task correlated regions, respectively. Successful task prediction in mixed activation patterns was feasible within the first block of task activation in real-time fMRI experiments. This methodology is suitable for sparsifying activation patterns in real-time fMRI and for neurofeedback from distributed networks of brain activation. Copyright © 2013 Elsevier Inc. All rights reserved.
Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan
2015-01-01
Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a novel approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and the information-theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, in performing PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
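The undersized-sample problem can be illustrated by replacing the singular sample covariance with a regularized estimate before the eigendecomposition. In the sketch below, Ledoit-Wolf shrinkage stands in for the maximum entropy and hybridized smoothed estimators, and a simple explained-variance threshold stands in for the AIC/CAIC/ICOMP model selection.

```python
# PCA on an undersized sample (n << p) via a regularized covariance estimate.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n_samples, n_genes = 40, 2000                     # sample covariance would be singular
X = rng.normal(size=(n_samples, n_genes))

cov = LedoitWolf(store_precision=False).fit(X).covariance_   # well-conditioned p x p estimate
evals, evecs = np.linalg.eigh(cov)
evals, evecs = evals[::-1], evecs[:, ::-1]

k = int(np.searchsorted(np.cumsum(evals) / evals.sum(), 0.90) + 1)
scores = (X - X.mean(axis=0)) @ evecs[:, :k]      # reduced-dimension representation
print("components retained:", k, "scores shape:", scores.shape)
```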
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
Effects of septal pacing on P wave characteristics: the value of three-dimensional echocardiography.
Szili-Torok, Tamas; Bruining, Nico; Scholten, Marcoen; Kimman, Geert-Jan; Roelandt, Jos; Jordaens, Luc
2003-01-01
Interatrial septum (IAS) pacing has been proposed for the prevention of paroxysmal atrial fibrillation. IAS pacing is usually guided by fluoroscopy and P wave analysis. The authors have developed a new approach for IAS pacing using intracardiac echocardiography (ICE), and examined its effects on P wave characteristics. Cross-sectional images are acquired during pullback of the ICE transducer from the superior vena cava into the inferior vena cava by an electrocardiogram- and respiration-gated technique. The right atrium and IAS are then three-dimensionally reconstructed, and the desired pacing site is selected. After lead placement and electrical testing, another three-dimensional reconstruction is performed to verify the final lead position. The study included 14 patients. IAS pacing was achieved at seven suprafossal (SF) and seven infrafossal (IF) lead locations, all confirmed by three-dimensional imaging. IAS pacing resulted in a significant reduction of P wave duration as compared to sinus rhythm (99.7 +/- 18.7 vs 140.4 +/- 8.8 ms; P < 0.01). SF pacing was associated with a greater reduction of P wave duration than IF pacing (56.1 +/- 9.9 vs 30.2 +/- 13.6 ms; P < 0.01). P wave dispersion remained unchanged during septal pacing as compared to sinus rhythm (21.4 +/- 16.1 vs 13.5 +/- 13.9 ms; NS). Three-dimensional intracardiac echocardiography can be used to guide IAS pacing. SF pacing was associated with a greater decrease in P wave duration, suggesting that it is a preferable location to decrease interatrial conduction delay.
Ephaptic conduction in a cardiac strand model with 3D electrodiffusion
Mori, Yoichiro; Fishman, Glenn I.; Peskin, Charles S.
2008-01-01
We study cardiac action potential propagation under severe reduction in gap junction conductance. We use a mathematical model of cellular electrical activity that takes into account both three-dimensional geometry and ionic concentration effects. Certain anatomical and biophysical parameters are varied to see their impact on cardiac action potential conduction velocity. This study uncovers quantitative features of ephaptic propagation that differ from previous studies based on one-dimensional models. We also identify a mode of cardiac action potential propagation in which the ephaptic and gap-junction-mediated mechanisms alternate. Our study demonstrates the usefulness of this modeling approach for electrophysiological systems especially when detailed membrane geometry plays an important role. PMID:18434544
Lie Symmetry Analysis of the Inhomogeneous Toda Lattice Equation via Semi-Discrete Exterior Calculus
NASA Astrophysics Data System (ADS)
Liu, Jiang; Wang, Deng-Shan; Yin, Yan-Bin
2017-06-01
In this work, the Lie point symmetries of the inhomogeneous Toda lattice equation are obtained by semi-discrete exterior calculus, which is a semi-discrete version of Harrison and Estabrook’s geometric approach. A four-dimensional Lie algebra and its one-, two- and three-dimensional subalgebras are given. Two similarity reductions of the inhomogeneous Toda lattice equation are obtained by using the symmetry vectors. Supported by National Natural Science Foundation of China under Grant Nos. 11375030, 11472315, and Department of Science and Technology of Henan Province under Grant No. 162300410223 and Beijing Finance Funds of Natural Science Program for Excellent Talents under Grant No. 2014000026833ZK19
Buckling Analysis of Single and Multi Delamination In Composite Beam Using Finite Element Method
NASA Astrophysics Data System (ADS)
Simanjorang, Hans Charles; Syamsudin, Hendri; Giri Suada, Muhammad
2018-04-01
Delamination is a type of imperfection that is commonly found in composite structures. Delamination may arise from several factors, namely in-service conditions, in which foreign objects hit the composite structure and create inner defects, and poor manufacturing, which causes initial imperfections. Composite structures are susceptible to compressive loading. Compressive loading leads to an instability phenomenon in the composite structure called buckling. The existence of delamination inside the structure causes a reduction in buckling strength. This paper explains the effect of delamination location on the buckling strength. The analysis uses a one-dimensional modelling approach with the two-dimensional finite element method.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tan, Meng-Chwan; Vasko, Petr; Zhao, Qin
2017-05-01
We perform a series of dimensional reductions of the 6d, \mathcal{N} = (2, 0) SCFT on S^2 × Σ × I × S^1 down to 2d on Σ. The reductions are performed in three steps: (i) a reduction on S^1 (accompanied by a topological twist along Σ) leading to a supersymmetric Yang-Mills theory on S^2 × Σ × I, (ii) a further reduction on S^2 resulting in a complex Chern-Simons theory defined on Σ × I, with the real part of the complex Chern-Simons level being zero, and the imaginary part being proportional to the ratio of the radii of S^2 and S^1, and (iii) a final reduction to the boundary modes of complex Chern-Simons theory with the Nahm pole boundary condition at both ends of the interval I, which gives rise to a complex Toda CFT on the Riemann surface Σ. As the reduction of the 6d theory on Σ would give rise to an \mathcal{N} = 2 supersymmetric theory on S^2 × I × S^1, our results imply a 4d-2d duality between four-dimensional \mathcal{N} = 2 supersymmetric theory with boundary and two-dimensional complex Toda theory.
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convective point markers, the Lagrangian approach provides a correction to the basic Eulerian solution. The Eulerian flow, in turn, integrates the Lagrangian state vector in time. A comparison of coarse and fine grid Eulerian solutions makes it possible to identify numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.
Robust 2DPCA with non-greedy l1-norm maximization for image analysis.
Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli
2015-05-01
2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied due to the difficulty of directly solving the l1-norm maximization problem, which, however, easily gets stuck in a local solution. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
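For contrast with the non-greedy method proposed in the paper, the classic greedy fixed-point iteration for a single l1-norm-maximizing direction on vectorized data is sketched below (the 2DPCA variant operates on image matrices and optimizes all directions jointly); this is an illustration of the l1 objective, not the authors' algorithm.

```python
# Greedy fixed-point iteration maximizing the l1-norm of projections for one
# direction on vectorized, centered data (illustration of the l1 objective).
import numpy as np

def l1_pca_direction(X, n_iter=100, seed=0):
    """One l1-norm-maximizing projection direction for centered data X (n x d)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
X[:5] += 20 * rng.normal(size=(5, 10))            # a few gross outliers
w = l1_pca_direction(X - X.mean(axis=0))
print("l1 objective:", np.abs((X - X.mean(axis=0)) @ w).sum())
```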
Seismic Data Analysis through Multi-Class Classification.
NASA Astrophysics Data System (ADS)
Anderson, P.; Kappedal, R. D.; Magana-Zook, S. A.
2017-12-01
In this research, we conducted twenty experiments of varying time and frequency bands on 5000 seismic signals with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We used a multi-class approach by clustering of the data through various techniques. Dimensional reduction was examined through the use of wavelet transforms with the use of the coiflet mother wavelet and various coefficients to explore possible computational time vs accuracy dependencies. Three and four classes were generated from the clustering techniques and examined, with the three class approach producing the most accurate and realistic results.
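A minimal version of the described pipeline, coiflet wavelet coefficients as reduced features followed by unsupervised clustering, might look like the sketch below; the synthetic waveforms, the decomposition level, and the three-cluster choice are placeholders rather than the study's settings.

```python
# Sketch: coiflet wavelet decomposition as the dimensional-reduction step,
# then unsupervised clustering of the reduced feature vectors.
import numpy as np
import pywt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 1024))            # placeholder waveforms

features = []
for s in signals:
    coeffs = pywt.wavedec(s, "coif1", level=4)    # multi-level coiflet transform
    features.append(coeffs[0])                    # keep coarse approximation only
features = np.asarray(features)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("feature dimension:", features.shape[1], "cluster sizes:", np.bincount(labels))
```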
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied to biobrick datasets. Here, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Moreover, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated, better discriminating biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which could help to assess the quality of crowdsourcing-based synthetic biology databases and aid biobrick selection.
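A compact sketch of the enhanced pipeline is given below: normalized Levenshtein distances between sequences are converted to an affinity matrix and embedded with spectral embedding (the Laplacian Eigenmaps variant); the toy sequences stand in for biobrick records, and Isomap with a precomputed metric could be substituted in the same way.

```python
# Normalized edit distance between sequences, converted to an affinity matrix
# and embedded in 2D with spectral embedding (Laplacian Eigenmaps).
import numpy as np
from sklearn.manifold import SpectralEmbedding

def levenshtein(a, b):
    d = np.arange(len(b) + 1)
    for i, ca in enumerate(a, start=1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, start=1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

seqs = ["ATGGCTAGC", "ATGGCAAGC", "ATGCCTAGC", "TTTTAAACCGG", "TTTTAAACCGA", "TTTAAAACCGG"]
n = len(seqs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = levenshtein(seqs[i], seqs[j]) / max(len(seqs[i]), len(seqs[j]))

affinity = np.exp(-dist ** 2 / (2 * dist[dist > 0].mean() ** 2))   # RBF on distances
coords = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(affinity)
print(coords.round(3))
```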
On the reduction of 4d $\mathcal{N}=1$ theories on $\mathbb{S}^2$
Gadde, Abhijit; Razamat, Shlomo S.; Willett, Brian
2015-11-24
Here, we discuss reductions of general $\mathcal{N}=1$ four dimensional gauge theories on $\mathbb{S}^2$. The effective two dimensional theory one obtains depends on the details of the coupling of the theory to background fields, which can be translated to a choice of R-symmetry. We argue that, for special choices of R-symmetry, the resulting two dimensional theory has a natural interpretation as an $\mathcal{N}=(0,2)$ gauge theory. As an application of our general observations, we discuss reductions of $\mathcal{N}=1$ and $\mathcal{N}=2$ dualities and argue that they imply certain two dimensional dualities.
A new approach to importance sampling for the simulation of false alarms [in radar systems]
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum-variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell-averaging system, by combining this technique with group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 compared to the previously known importance sampling approach.
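A hedged sketch of plain importance sampling for a rare false-alarm probability, not the authors' modified estimator: the tail probability P(X > t) of an exponential variable is estimated by sampling from a heavier-tailed biasing density and reweighting. The threshold, rates, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = 13.8                     # threshold chosen so the true tail is roughly 1e-6
lam = 1.0                    # nominal exponential rate
lam_q = 1.0 / t              # biased (heavier-tailed) sampling rate

n = 20_000
x = rng.exponential(1.0 / lam_q, size=n)                      # draws from biasing density q
w = (lam * np.exp(-lam * x)) / (lam_q * np.exp(-lam_q * x))   # likelihood ratio p/q
p_hat = np.mean((x > t) * w)

print(f"IS estimate: {p_hat:.3e}  (true value: {np.exp(-lam * t):.3e})")
```

With direct Monte Carlo, roughly 1/p samples would be needed just to see one exceedance; the reweighted estimator concentrates samples in the tail and converges with far fewer runs.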
Data analytics and parallel-coordinate materials property charts
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.
2018-01-01
It is often advantageous to display material property relationships in the form of charts that highlight important correlations and thereby enhance our understanding of materials behavior and facilitate materials selection. Unfortunately, in many cases, these correlations are highly multidimensional in nature, and one typically employs low-dimensional cross-sections of the property space to convey some aspects of these relationships. To overcome some of these difficulties, in this work we employ methods of data analytics in conjunction with a visualization strategy, known as parallel coordinates, to better represent multidimensional materials data and to extract useful relationships among properties. We illustrate the utility of this approach by the construction and systematic analysis of multidimensional materials properties charts for metallic and ceramic systems. These charts simplify the description of high-dimensional geometry, enable dimensionality reduction and the identification of significant property correlations, and underline distinctions among different materials classes.
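A minimal sketch of a parallel-coordinate materials chart, assuming pandas and matplotlib; the property values below are invented placeholders, not data from the article.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame(
    {
        "class": ["metal", "metal", "ceramic", "ceramic"],
        "density": [7.8, 2.7, 3.9, 2.5],          # g/cm^3
        "modulus": [200, 70, 380, 70],            # GPa
        "thermal_cond": [50, 237, 30, 1.4],       # W/(m K)
    }
)

# Each material becomes a polyline across the property axes; classes are colored.
parallel_coordinates(df, class_column="class")
plt.show()
```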
Vectorized Rebinning Algorithm for Fast Data Down-Sampling
NASA Technical Reports Server (NTRS)
Dean, Bruce; Aronstein, David; Smith, Jeffrey
2013-01-01
A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
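An illustrative vectorized 2-D rebinning in the spirit of the algorithm described (not the NASA implementation), using a single reshape so that the averaging over rows and columns happens without explicit loops; it assumes the image dimensions are integer multiples of the bin factors.

```python
import numpy as np

def rebin(image, factor_rows, factor_cols):
    r, c = image.shape
    # Reshape so each bin becomes its own axis, then average over the bin axes
    # (use .sum() instead of .mean() for flux-conserving rebinning).
    return image.reshape(r // factor_rows, factor_rows,
                         c // factor_cols, factor_cols).mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(rebin(img, 2, 2))   # 2x2 down-sampled image
```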
Coarse-grained mechanics of viral shells
NASA Astrophysics Data System (ADS)
Klug, William S.; Gibbons, Melissa M.
2008-03-01
We present an approach for creating three-dimensional finite element models of viral capsids from atomic-level structural data (X-ray or cryo-EM). The models capture heterogeneous geometric features and are used in conjunction with three-dimensional nonlinear continuum elasticity to simulate nanoindentation experiments as performed using atomic force microscopy. The method is extremely flexible, able to capture varying levels of detail in the three-dimensional structure. Nanoindentation simulations are presented for several viruses: Hepatitis B, CCMV, HK97, and φ29. In addition to purely continuum elastic models, a multiscale technique is developed that combines finite-element kinematics with molecular dynamics energetics such that large-scale deformations are facilitated by a reduction in degrees of freedom. Simulations of these capsid deformation experiments provide a testing ground for the techniques, as well as insight into the strength-determining mechanisms of capsid deformation. These methods can be extended as a framework for modeling other proteins and macromolecular structures in cell biology.
Assessment of metal artifact reduction methods in pelvic CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdoli, Mehrsima; Mehranian, Abolfazl; Ailianou, Angeliki
2016-04-15
Purpose: Metal artifact reduction (MAR) produces images with improved quality potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI falls behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.
Advanced Fluid Reduced Order Models for Compressible Flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tezaur, Irina Kalashnikova; Fike, Jeffrey A.; Carlberg, Kevin Thomas
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
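A rough sketch of the POD ingredient underlying a POD/LSPG ROM, assuming solution snapshots are available as the columns of a matrix; the synthetic snapshot data, energy threshold, and variable names are illustrative and do not reproduce SPARC's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
snapshots = rng.standard_normal((5000, 200))   # columns = state snapshots in time

# Truncated SVD of the snapshot matrix yields the POD (reduced) basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999) + 1)    # keep 99.9% of snapshot energy
Phi = U[:, :k]                                 # POD basis: full state ~ Phi @ q

print(f"retained {k} of {len(s)} modes")
```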
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
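A minimal sketch of one Tikhonov-regularized least-squares solve of the kind used in such alternating updates, not the author's separated-representation code; the matrix A, right-hand side b, roughening operator D, and weight alpha are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 50))      # regression matrix for one alternating factor
b = rng.standard_normal(80)            # data / right-hand side
D = np.diff(np.eye(50), axis=0)        # first-difference (roughening) operator
alpha = 1e-2                           # Tikhonov weight

# Solve min ||A x - b||^2 + alpha * ||D x||^2 via the regularized normal equations.
x = np.linalg.solve(A.T @ A + alpha * (D.T @ D), A.T @ b)
print(x.shape)
```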
Dutt-Mazumder, Aviroop; Button, Chris; Robins, Anthony; Bartlett, Roger
2011-12-01
Recent studies have explored the organization of player movements in team sports using a range of statistical tools. However, the factors that best explain the performance of association football teams remain elusive. Arguably, this is due to the high-dimensional behavioural outputs that illustrate the complex, evolving configurations typical of team games. According to dynamical system analysts, movement patterns in team sports exhibit nonlinear self-organizing features. Nonlinear processing tools (i.e. Artificial Neural Networks; ANNs) are becoming increasingly popular for investigating the coordination of participants in sports competitions. ANNs are well suited to describing high-dimensional data sets with nonlinear attributes; however, limited information exists concerning the processes required to apply them. This review investigates the relative value of various ANN learning approaches used in sports performance analysis of team sports, focusing on potential applications for association football. Sixty-two research sources were summarized and reviewed from electronic literature search engines such as SPORTDiscus, Google Scholar, IEEE Xplore, Scirus, ScienceDirect and Elsevier. Typical ANN learning algorithms can be adapted to perform pattern recognition and pattern classification. Particularly, dimensionality reduction by a Kohonen feature map (KFM) can compress chaotic high-dimensional datasets into low-dimensional relevant information. Such information would be useful for developing effective training drills that should enhance self-organizing coordination among players. We conclude that ANN-based qualitative analysis is a promising approach to understand the dynamical attributes of association football players.
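A hedged sketch of Kohonen feature map compression of high-dimensional positional data, assuming the third-party MiniSom package; the random player-position data, map size, and training settings are illustrative only and not taken from the review.

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(4)
positions = rng.random((500, 22))          # e.g. x/y of 11 players per team per frame

som = MiniSom(10, 10, input_len=22, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(positions, 1000)

# Each high-dimensional frame is summarized by its 2-D best-matching unit on the map.
bmus = np.array([som.winner(p) for p in positions])
print(bmus.shape)   # (500, 2)
```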
A review on the multivariate statistical methods for dimensional reduction studies
NASA Astrophysics Data System (ADS)
Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei
2017-05-01
In this research study we discuss multivariate statistical methods for dimensionality reduction that have been developed by various researchers. Reducing dimensionality is valuable for accelerating algorithm training and may also help the final classification/clustering accuracy. Noisy or even flawed input data often leads to less-than-desirable algorithm performance. Removing uninformative or misleading data components may indeed help an algorithm discover more general decision regions and rules and, overall, achieve better performance on new data sets.
Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction
NASA Astrophysics Data System (ADS)
Terlecki, Pawel; Walczak, Krzysztof
This paper extends the rough set approach for JEP induction based on the notion of a condensed decision table. The original transaction database is transformed to a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and local approaches have been tested and discussed for dense and synthetically generated sparse datasets.
NASA Technical Reports Server (NTRS)
Dasarathy, B. V.
1976-01-01
An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
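A minimal sketch in the spirit of the histogram-based assessment described, using SciPy's peak finder as a stand-in for the hill-and-valley evaluation; the bin count, prominence threshold, and synthetic data are illustrative assumptions, not the original algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def feature_modality(x, bins=32):
    # Features whose one-dimensional histograms show several well-separated
    # hills are considered informative for unsupervised clustering.
    hist, _ = np.histogram(x, bins=bins, density=True)
    peaks, _ = find_peaks(hist, prominence=0.05 * hist.max())
    return len(peaks)

rng = np.random.default_rng(5)
bimodal = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
unimodal = rng.normal(0, 1, 1000)
print(feature_modality(bimodal), feature_modality(unimodal))
```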
An adaptive confidence limit for periodic non-steady conditions fault detection
NASA Astrophysics Data System (ADS)
Wang, Tianzhen; Wu, Hao; Ni, Mengqi; Zhang, Milu; Dong, Jingjing; Benbouzid, Mohamed El Hachemi; Hu, Xiong
2016-05-01
System monitoring has become a major concern in batch processes because the failure rate in non-steady conditions is much higher than in steady ones. A series of approaches based on PCA have already solved problems such as data dimensionality reduction, multivariable decorrelation, and processing of non-changing signals. However, if the data follow a non-Gaussian distribution or the variables contain some signal changes, the above approaches are not applicable. To deal with these concerns and to enhance performance in multiperiod data processing, this paper proposes a fault detection method using an adaptive confidence limit (ACL) in periodic non-steady conditions. The proposed ACL method achieves four main enhancements: Longitudinal-Standardization converts non-Gaussian sampling data to Gaussian ones; the multiperiod PCA algorithm reduces dimensionality, removes correlation, and improves monitoring accuracy; the adaptive confidence limit detects faults under non-steady conditions; and the fault-section determination procedure selects the appropriate parameter of the adaptive confidence limit. The result analysis clearly shows that the proposed ACL method is superior to other fault detection approaches under periodic non-steady conditions.
O'Neil, Gregory W; Nelson, Robert K; Wright, Alicia M; Reddy, Christopher M
2016-05-06
A representative substrate scope investigation for an enantioselective catalytic ketone-reduction has been performed as a single reaction on a mixture containing equimolar amounts of nine (9) prototypical compounds. The resulting analyte pool containing 18 potential products from nine different reactions could all be completely resolved in a single chromatographic injection using comprehensive two-dimensional gas chromatography (GC×GC) with time-of-flight mass spectrometry, allowing for simultaneous determination of percent conversion and enantiomeric excess for each substrate. The results obtained for an enantioselective iron-catalyzed asymmetric transfer hydrogenation using this one-pot/single-analysis approach were similar to those reported for the individualized reactions, demonstrating the utility of this strategy for streamlining substrate scope investigations. Moreover, for this particular catalyst, activity and selectivity were not greatly affected by the presence of other ketones or enantioenriched reduced products. This approach allows for faster and greener analyses that are central to new reaction development, as well as an opportunity to gain further insights into other established transformations.
Fragment approach to the electronic structure of τ -boron allotrope
NASA Astrophysics Data System (ADS)
Karmodak, Naiwrit; Jemmis, Eluvathingal D.
2017-04-01
The presence of nonconventional bonding features is an intriguing part of elemental boron. The recent addition of τ boron to the family of three-dimensional boron allotropes is no exception. We provide an understanding of the electronic structure of τ boron using a fragment molecular approach, where the effect of symmetry reduction on skeletal bands of B12 and the B57 fragments are examined qualitatively by analyzing the projected density of states of these fragments. In spite of the structural resemblance to β boron, the reduction of symmetry from a rhombohedral space group to the orthorhombic one destabilizes the bands and reduces the electronic requirements. This suggests the presence of the partially occupied boron sites, as seen for a β boron unit cell, and draws the possibility for the existence of different energetically similar polymorphs. τ boron has a lower binding energy than β boron.
Kaluza-Klein cosmology from five-dimensional Lovelock-Cartan theory
NASA Astrophysics Data System (ADS)
Castillo-Felisola, Oscar; Corral, Cristóbal; del Pino, Simón; Ramírez, Francisca
2016-12-01
We study the Kaluza-Klein dimensional reduction of the Lovelock-Cartan theory in five-dimensional spacetime, with a compact dimension of S1 topology. We find cosmological solutions of the Friedmann-Robertson-Walker class in the reduced spacetime. The torsion and the fields arising from the dimensional reduction induce a nonvanishing energy-momentum tensor in four dimensions. We find solutions describing expanding, contracting, and bouncing universes. The model shows a dynamical compactification of the extra dimension in some regions of the parameter space.
The Ritz - Sublaminate Generalized Unified Formulation approach for piezoelectric composite plates
NASA Astrophysics Data System (ADS)
D'Ottavio, Michele; Dozio, Lorenzo; Vescovini, Riccardo; Polit, Olivier
2018-01-01
This paper extends to composite plates including piezoelectric plies the variable kinematics plate modeling approach called Sublaminate Generalized Unified Formulation (SGUF). Two-dimensional plate equations are obtained upon defining a priori the through-thickness distribution of the displacement field and electric potential. According to SGUF, independent approximations can be adopted for the four components of these generalized displacements: an Equivalent Single Layer (ESL) or Layer-Wise (LW) description over an arbitrary group of plies constituting the composite plate (the sublaminate) and the polynomial order employed in each sublaminate. The solution of the two-dimensional equations is sought in weak form by means of a Ritz method. In this work, boundary functions are used in conjunction with the domain approximation expressed by an orthogonal basis spanned by Legendre polynomials. The proposed computational tool is capable of representing electroded surfaces with equipotentiality conditions. Free-vibration problems as well as static problems involving actuator and sensor configurations are addressed. Two case studies are presented, which demonstrate the high accuracy of the proposed Ritz-SGUF approach. A model assessment is proposed to show the extent to which the SGUF approach allows a reduction of the number of unknowns with a controlled impact on the accuracy of the result.
A manifold learning approach to target detection in high-resolution hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led towards a focus on target detection, and to the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
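A hedged illustration of the kind of manifold-learning step described, using scikit-learn's standard LLE rather than the adaptive, target-guided variant developed in the thesis; the random stand-in "spectra" and the neighborhood and embedding sizes are assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(6)
pixels = rng.random((1000, 200))            # 1000 pixels, 200 spectral bands (d)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3)  # m << d
embedded = lle.fit_transform(pixels)
print(embedded.shape)                        # (1000, 3): manifold coordinates
```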
DD-HDS: A method for visualization and exploration of high-dimensional data.
Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard
2007-09-01
Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of the multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
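A simple sketch of the baseline distance-preserving mapping that DD-HDS builds upon (plain metric MDS via scikit-learn); the DD-HDS weighting of distances and its force-directed optimization are not reproduced here, and the random data is a placeholder.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(7)
X = rng.standard_normal((300, 50))                  # high-dimensional data

mds = MDS(n_components=2, dissimilarity="euclidean")
Y = mds.fit_transform(X)                            # 2-D layout preserving pairwise distances
print(Y.shape)
```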
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Mass reduction patterning of silicon-on-oxide-based micromirrors
NASA Astrophysics Data System (ADS)
Hall, Harris J.; Green, Andrew; Dooley, Sarah; Schmidt, Jason D.; Starman, LaVern A.; Langley, Derrick; Coutu, Ronald A.
2016-10-01
It has long been recognized in the design of micromirror-based optical systems that balancing static flatness of the mirror surface through structural design with the system's mechanical dynamic response is challenging. Although a variety of mass reduction approaches have been presented in the literature to address this performance trade-off, there has been little quantifiable comparison reported. In this work, different mass reduction approaches, some unique to the work, are quantifiably compared with solid plate thinning in both curvature and mass using commercial finite element simulation of a specific square silicon-on-insulator-based micromirror geometry. Other important considerations for micromirror surfaces, including surface profile and smoothness, are also discussed. Fabrication of one of these geometries, a two-dimensional tessellated square pattern, was performed in the presence of a 400-μm-tall central post structure using a simple single mask process. Limited experimental curvature measurements of fabricated samples are shown to correspond well with properly characterized simulation results and indicate ~67% improvement in radius of curvature in comparison to a solid plate design of equivalent mass.
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
Vilhelmova, N; Jacquet, R; Quideau, S; Stoyanova, A; Galabov, A S
2011-02-01
The effects of combinations of three nonahydroxyterphenoyl-bearing C-glucosidic ellagitannins (castalagin, vescalagin and grandinin) with acyclovir (ACV) on the replication of type-1 and type-2 herpes simplex viruses in MDBK cells were tested by the focus-forming units reduction test. Ellagitannins included in these combinations possess a high individual antiviral activity: selectivity index of castalagin and vescalagin versus HSV-1 was similar to that of ACV, and relatively lower against HSV-2. The three-dimensional analytical approach of Prichard and Shipman was used to evaluate the impact of drug-drug interactions. The combination effects of ellagitannins with acyclovir were markedly synergistic. Copyright © 2010 Elsevier B.V. All rights reserved.
Topology optimization of two-dimensional elastic wave barriers
NASA Astrophysics Data System (ADS)
Van hoorickx, C.; Sigmund, O.; Schevenels, M.; Lazarov, B. S.; Lombaert, G.
2016-08-01
Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted into a design domain situated between the source and the receiver to minimize wave transmission. At low frequencies, the stiffened material reflects and guides waves away from the surface. At high frequencies, destructive interference is obtained that leads to high values of the insertion loss. To handle harmonic sources at a frequency in a given range, a uniform reduction of the response over a frequency range is pursued. The minimal insertion loss over the frequency range of interest is maximized. The resulting design contains features at depth leading to a reduction of the insertion loss at the lowest frequencies and features close to the surface leading to a reduction at the highest frequencies. For broadband sources, the average insertion loss in a frequency range is optimized. This leads to designs that especially reduce the response at high frequencies. The designs optimized for the frequency averaged insertion loss are found to be sensitive to geometric imperfections. In order to obtain a robust design, a worst case approach is followed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inomata, A.; Junker, G.; Wilson, R.
1993-08-01
The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.
Beretta, Lorenzo; Santaniello, Alessandro; van Riel, Piet L C M; Coenen, Marieke J H; Scorza, Raffaella
2010-08-06
Epistasis is recognized as a fundamental part of the genetic architecture of individuals. Several computational approaches have been developed to model gene-gene interactions in case-control studies, however, none of them is suitable for time-dependent analysis. Herein we introduce the Survival Dimensionality Reduction (SDR) algorithm, a non-parametric method specifically designed to detect epistasis in lifetime datasets. The algorithm requires neither specification about the underlying survival distribution nor about the underlying interaction model and proved satisfactorily powerful to detect a set of causative genes in synthetic epistatic lifetime datasets with a limited number of samples and high degree of right-censorship (up to 70%). The SDR method was then applied to a series of 386 Dutch patients with active rheumatoid arthritis that were treated with anti-TNF biological agents. Among a set of 39 candidate genes, none of which showed a detectable marginal effect on anti-TNF responses, the SDR algorithm did find that the rs1801274 SNP in the Fc gamma RIIa gene and the rs10954213 SNP in the IRF5 gene non-linearly interact to predict clinical remission after anti-TNF biologicals. Simulation studies and application in a real-world setting support the capability of the SDR algorithm to model epistatic interactions in candidate-genes studies in presence of right-censored data. http://sourceforge.net/projects/sdrproject/.
Compactification on phase space
NASA Astrophysics Data System (ADS)
Lovelady, Benjamin; Wheeler, James
2016-03-01
A major challenge for string theory is to understand the dimensional reduction required for comparison with the standard model. We propose reducing the dimension of the compactification by interpreting some of the extra dimensions as the energy-momentum portion of a phase space. Such models naturally arise as generalized quotients of the conformal group called biconformal spaces. By combining the standard Kaluza-Klein approach with such a conformal gauge theory, we may start from the conformal group of an n-dimensional Euclidean space to form a 2n-dimensional quotient manifold with symplectic structure. A pair of involutions leads naturally to two n-dimensional Lorentzian manifolds. For n = 5, this leaves only two extra dimensions, with a countable family of possible compactifications and an SO(5) Yang-Mills field on the fibers. Starting with n = 6 leads to a 4-dimensional compactification of the phase space. In the latter case, if the two extra dimensions from spacetime and the two from momentum space are each compactified onto spheres, then there is an SU(2)xSU(2) (left-right symmetric electroweak) field between phase and configuration space and an SO(6) field on the fibers. Such a theory, with minor additional symmetry breaking, could contain all parts of the standard model.
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because input data stream may be under-sampled or skewed from time to time, building connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating $k$-edge-connected and $k$-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
A Newtonian approach to extraordinarily strong negative refraction.
Yoon, Hosang; Yeung, Kitty Y M; Umansky, Vladimir; Ham, Donhee
2012-08-02
Metamaterials with negative refractive indices can manipulate electromagnetic waves in unusual ways, and can be used to achieve, for example, sub-diffraction-limit focusing, the bending of light in the 'wrong' direction, and reversed Doppler and Cerenkov effects. These counterintuitive and technologically useful behaviours have spurred considerable efforts to synthesize a broad array of negative-index metamaterials with engineered electric, magnetic or optical properties. Here we demonstrate another route to negative refraction by exploiting the inertia of electrons in semiconductor two-dimensional electron gases, collectively accelerated by electromagnetic waves according to Newton's second law of motion, where this acceleration effect manifests as kinetic inductance. Using kinetic inductance to attain negative refraction was theoretically proposed for three-dimensional metallic nanoparticles and seen experimentally with surface plasmons on the surface of a three-dimensional metal. The two-dimensional electron gas that we use at cryogenic temperatures has a larger kinetic inductance than three-dimensional metals, leading to extraordinarily strong negative refraction at gigahertz frequencies, with an index as large as -700. This pronounced negative refractive index and the corresponding reduction in the effective wavelength opens a path to miniaturization in the science and technology of negative refraction.
Yuan, Fang; Wang, Guangyi; Wang, Xiaowei
2017-03-01
In this paper, smooth curve models of a meminductor and a memcapacitor, generalized from a memristor, are designed. Based on these models, a new five-dimensional chaotic oscillator that contains a meminductor and memcapacitor is proposed. Through dimensionality reduction, this five-dimensional system can be transformed into a three-dimensional system. The main work of this paper is to compare the five-dimensional system and its dimensionality reduction model. To investigate the dynamical behaviors of the two systems, equilibrium points and stabilities are analyzed, and bifurcation diagrams and Lyapunov exponent spectra are used to explore their properties. In addition, digital signal processing technologies are used to realize this chaotic oscillator, and chaotic sequences are generated by the experimental device, which can be used in encryption applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang
GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecut, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi
A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., reduced-order model) for microstructure evolution problems when the final microstructure depends on (simulation or experimental) processing parameters. Our workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to rapid development of advanced analytical techniques, delivering much information in a single measurement run. This concerns especially spectra, which are frequently used as the subject of comparative analysis in e.g. forensic sciences. In the presented study the microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
From causal dynamical triangulations to astronomical observations
NASA Astrophysics Data System (ADS)
Mielczarek, Jakub
2017-09-01
This letter discusses phenomenological aspects of the dimensional reduction predicted by the Causal Dynamical Triangulations (CDT) approach to quantum gravity. The deformed form of the dispersion relation for the fields defined on the CDT space-time is reconstructed. Using the Fermi satellite observations of the GRB 090510 source we find that the energy scale of the dimensional reduction is $E_* > 0.7 \sqrt{4-d_{\text{UV}}} \cdot 10^{10}\ \text{GeV}$ at 95% CL, where $d_{\text{UV}}$ is the value of the spectral dimension in the UV limit. By applying the deformed dispersion relation to the cosmological perturbations it is shown that, for a scenario in which the primordial perturbations are formed in the UV region, the scalar power spectrum $P_S \propto k^{n_S-1}$, where $n_S-1 \approx \frac{3 r (d_{\text{UV}}-2)}{(d_{\text{UV}}-1)r-48}$. Here, $r$ is the tensor-to-scalar ratio. We find that, within the considered model, the deviation from scale invariance ($n_S=1$) predicted from CDT is in contradiction with up-to-date Planck and BICEP2 data.
Sultan, Nabil; Garziglia, Sébastien; Ruffine, Livio
2016-05-27
Over the past years, several studies have raised concerns about the possible interactions between methane hydrate decomposition and external change. To carry out such an investigation, it is essential to characterize the baseline dynamics of gas hydrate systems related to natural geological and sedimentary processes. This is usually treated through the analysis of sulfate reduction coupled to anaerobic oxidation of methane (AOM). Here, we model sulfate reduction coupled with AOM as a two-dimensional (2D) problem including advective and diffusive transport. This is applied to a case study from a deep-water site off Nigeria's coast where lateral methane advection through turbidite layers was suspected. We show by analyzing the acquired data in combination with computational modeling that a two-dimensional approach is able to accurately describe the recent past dynamics of such a complex natural system. Our results show that the sulfate-methane transition zone (SMTZ) is not a vertical barrier for dissolved sulfate and methane. We also show that such modeling is able to assess short-timescale variations on the order of decades to centuries.
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM. PMID:27143958
Synthesis of Ultrathin Si Nanosheets from Natural Clays for Lithium-Ion Battery Anodes.
Ryu, Jaegeon; Hong, Dongki; Choi, Sinho; Park, Soojin
2016-02-23
Two-dimensional Si nanosheets have been studied as a promising candidate for lithium-ion battery anode materials. However, Si nanosheets reported so far showed poor cycling performance and required further improvements. In this work, we utilize inexpensive natural clays for preparing high quality Si nanosheets via a one-step simultaneous molten salt-induced exfoliation and chemical reduction process. This approach produces high purity mesoporous Si nanosheets in high yield. As a control experiment, a two-step process (pre-exfoliation of silicate sheets and subsequent chemical reduction) cannot sustain the original two-dimensional structure. In contrast, the one-step method results in the production of 5 nm-thick highly porous Si nanosheets. Carbon-coated Si nanosheet anodes exhibit a high reversible capacity of 865 mAh g(-1) at 1.0 A g(-1) with an outstanding capacity retention of 92.3% after 500 cycles. They also deliver high rate capability, retaining 60% of the 2.0 A g(-1) capacity when cycled at 20 A g(-1). Furthermore, the Si nanosheet electrodes show a volume expansion of only 42% after 200 cycles.
Al-Qazzaz, Noor Kamal; Ali, Sawal; Ahmad, Siti Anom; Escudero, Javier
2017-07-01
The aim of the present study was to discriminate the electroencephalogram (EEG) of 5 patients with vascular dementia (VaD), 15 patients with stroke-related mild cognitive impairment (MCI), and 15 normal control subjects during a working memory (WM) task. We used independent component analysis (ICA) and wavelet transform (WT) as a hybrid preprocessing approach for EEG artifact removal. Three different features were extracted from the cleaned EEG signals: spectral entropy (SpecEn), permutation entropy (PerEn) and Tsallis entropy (TsEn). Two classification schemes were applied - support vector machine (SVM) and k-nearest neighbors (kNN) - with fuzzy neighborhood preserving analysis with QR-decomposition (FNPAQR) as a dimensionality reduction technique. The FNPAQR dimensionality reduction technique increased the SVM classification accuracy from 82.22% to 90.37% and the kNN accuracy from 82.6% to 86.67%. These results suggest that FNPAQR consistently improves the discrimination of VaD patients, MCI patients and normal control subjects, and that it could be a useful feature selection technique to help identify patients with VaD and MCI.
Inertial Manifold and Large Deviations Approach to Reduced PDE Dynamics
NASA Astrophysics Data System (ADS)
Cardin, Franco; Favretti, Marco; Lovison, Alberto
2017-09-01
In this paper a certain type of reaction-diffusion equation—similar to the Allen-Cahn equation—is the starting point for setting up a genuine thermodynamic reduction, i.e. one involving a finite number of parameters or collective variables of the initial system. We first perform a finite Lyapunov-Schmidt reduction of the cited reaction-diffusion equation when reformulated as a variational problem. In this way we gain a finite-dimensional ODE description of the initial system which preserves the gradient structure of the original one and that is exact for the static case and only approximate for the dynamic case. Our main concern is how to deal with this approximate reduced description of the initial PDE. To start with, we note that our approximate reduced ODE is similar to the approximate inertial manifold introduced by Temam and coworkers for the Navier-Stokes equations. As a second approach, we take into account the uncertainty (loss of information) introduced with the above mentioned approximate reduction by considering the stochastic version of the ODE. We study this reduced stochastic system using classical tools from large deviations, viscosity solutions and weak KAM Hamilton-Jacobi theory. In the last part we suggest a possible use of a result of our approach in the comprehensive treatment of non-equilibrium thermodynamics given by Macroscopic Fluctuation Theory.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse of dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
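A hedged sketch of the KPCA dimension-reduction ingredient described above, using scikit-learn's KernelPCA on a random stand-in for the spatial-field parameterization; the coupling to Langevin MCMC is not shown, and the kernel, bandwidth, and component count are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(8)
prior_samples = rng.standard_normal((500, 4096))     # realizations of a spatial field

kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3, fit_inverse_transform=True)
features = kpca.fit_transform(prior_samples)         # low-dimensional feature space
reconstructed = kpca.inverse_transform(features)     # map feature-space proposals back
print(features.shape, reconstructed.shape)
```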
Whitham modulation theory for (2 + 1)-dimensional equations of Kadomtsev–Petviashvili type
NASA Astrophysics Data System (ADS)
Ablowitz, Mark J.; Biondini, Gino; Rumanov, Igor
2018-05-01
Whitham modulation theory for certain two-dimensional evolution equations of Kadomtsev–Petviashvili (KP) type is presented. Three specific examples are considered in detail: the KP equation, the two-dimensional Benjamin–Ono (2DBO) equation and a modified KP (m2KP) equation. A unified derivation is also provided. In the case of the m2KP equation, the corresponding Whitham modulation system exhibits features different from the other two. The approach presented here does not require integrability of the original evolution equation. Indeed, while the KP equation is known to be a completely integrable equation, the 2DBO equation and the m2KP equation are not known to be integrable. In each of the cases considered, the Whitham modulation system obtained consists of five first-order quasilinear partial differential equations. The Riemann problem (i.e. the analogue of the Gurevich–Pitaevskii problem) for the one-dimensional reduction of the m2KP equation is studied. For the m2KP equation, the system of modulation equations is used to analyze the linear stability of traveling wave solutions.
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood-vessel phantom, where the initial pressure is exactly known for quantitative comparison.
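For a generic linear forward model A p = b, damped LSQR solves the Tikhonov problem min ||Ap - b||^2 + lambda^2 ||p||^2 without forming the normal equations. The sketch below uses synthetic matrices and a simple sweep over lambda in place of the paper's automatic parameter selection; all names and sizes are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 400))               # hypothetical system matrix
p_true = np.zeros(400); p_true[100:120] = 1.0
b = A @ p_true + 0.01 * rng.normal(size=200)

for lam in (1e-3, 1e-2, 1e-1):
    # damp=lam solves min ||A p - b||^2 + lam^2 ||p||^2
    p_hat = lsqr(A, b, damp=lam)[0]
    print(f"lambda={lam:.0e}  residual={np.linalg.norm(A @ p_hat - b):.3f}")
```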
Multivariate Strategies in Functional Magnetic Resonance Imaging
ERIC Educational Resources Information Center
Hansen, Lars Kai
2007-01-01
We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.
Two component-three dimensional catalysis
Schwartz, Michael; White, James H.; Sammells, Anthony F.
2002-01-01
This invention relates to catalytic reactor membranes having a gas-impermeable membrane for transport of oxygen anions. The membrane has an oxidation surface and a reduction surface. The membrane is coated on its oxidation surface with an adherent catalyst layer and is optionally coated on its reduction surface with a catalyst that promotes reduction of an oxygen-containing species (e.g., O.sub.2, NO.sub.2, SO.sub.2, etc.) to generate oxygen anions on the membrane. The reactor has an oxidation zone and a reduction zone separated by the membrane. A component of an oxygen containing gas in the reduction zone is reduced at the membrane and a reduced species in a reactant gas in the oxidation zone of the reactor is oxidized. The reactor optionally contains a three-dimensional catalyst in the oxidation zone. The adherent catalyst layer and the three-dimensional catalyst are selected to promote a desired oxidation reaction, particularly a partial oxidation of a hydrocarbon.
Application of diffusion maps to identify human factors of self-reported anomalies in aviation.
Andrzejczak, Chris; Karwowski, Waldemar; Mikusinski, Piotr
2012-01-01
A study was conducted to investigate which factors lead pilots to submit voluntary anomaly reports regarding their flight performance. Diffusion Maps (DM) were selected as the method of choice for performing dimensionality reduction on text records in this study. Diffusion Maps have seen successful use in other domains such as image classification and pattern recognition. High-dimensional data in the form of narrative text reports from the NASA Aviation Safety Reporting System (ASRS) were clustered and categorized by way of dimensionality reduction. Supervised analyses were performed to create a baseline document clustering system. Dimensionality reduction techniques identified concepts or keywords within records, and allowed the creation of a framework for an unsupervised document classification system. Results from the unsupervised clustering algorithm performed similarly to the supervised methods outlined in the study. The dimensionality reduction was performed on 100 of the most commonly occurring words within 126,000 text records describing commercial aviation incidents. This study demonstrates that unsupervised machine clustering and organization of incident reports is possible based on unbiased inputs. Findings from this study reinforced traditional views on what factors contribute to civil aviation anomalies; however, new associations between previously unrelated factors and conditions were also found.
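A minimal diffusion-map sketch on synthetic word-count vectors is given below (rows stand for reports, columns for the selected frequent words); it shows only the embedding step, and the ASRS preprocessing, kernel scale and dimensions are assumptions rather than the study's settings.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)
X = rng.poisson(1.0, size=(300, 100)).astype(float)   # toy word counts per report

d2 = cdist(X, X, "sqeuclidean")                       # pairwise squared distances
K = np.exp(-d2 / np.median(d2))                       # Gaussian affinities
deg = K.sum(axis=1)
A = K / np.sqrt(np.outer(deg, deg))                   # symmetric normalization
vals, vecs = np.linalg.eigh(A)
vals, vecs = vals[::-1], vecs[:, ::-1]                # sort eigenpairs descending
psi = vecs / vecs[:, [0]]                             # right eigenvectors of D^-1 K
coords = psi[:, 1:3] * vals[1:3]                      # 2-D diffusion coordinates
print(coords.shape)
```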
Webb, C A; Weber, M; Mundy, E A; Killgore, W D S
2014-10-01
Studies investigating structural brain abnormalities in depression have typically employed a categorical rather than dimensional approach to depression [i.e., comparing subjects with Diagnostic and Statistical Manual of Mental Disorders (DSM)-defined major depressive disorder (MDD) v. healthy controls]. The National Institute of Mental Health, through their Research Domain Criteria initiative, has encouraged a dimensional approach to the study of psychopathology as opposed to an over-reliance on categorical (e.g., DSM-based) diagnostic approaches. Moreover, subthreshold levels of depressive symptoms (i.e., severity levels below DSM criteria) have been found to be associated with a range of negative outcomes, yet have been relatively neglected in neuroimaging research. To examine the extent to which depressive symptoms--even at subclinical levels--are linearly related to gray matter volume reductions in theoretically important brain regions, we employed whole-brain voxel-based morphometry in a sample of 54 participants. The severity of mild depressive symptoms, even in a subclinical population, was associated with reduced gray matter volume in the orbitofrontal cortex, anterior cingulate, thalamus, superior temporal gyrus/temporal pole and superior frontal gyrus. A conjunction analysis revealed concordance across two separate measures of depression. Reduced gray matter volume in theoretically important brain regions can be observed even in a sample that does not meet DSM criteria for MDD, but who nevertheless report relatively elevated levels of depressive symptoms. Overall, these findings highlight the need for additional research using dimensional conceptual and analytic approaches, as well as further investigation of subclinical populations.
Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.
Kaplan, Adam; Lock, Eric F
2017-01-01
Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.
NASA Astrophysics Data System (ADS)
Prescott, Aaron M.; Abel, Steven M.
2016-12-01
The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.
Single block three-dimensional volume grids about complex aerodynamic vehicles
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Weilmuenster, K. James
1993-01-01
This paper presents an alternate approach for the generation of volumetric grids for supersonic and hypersonic flows about complex configurations. The method uses parametric two dimensional block face grid definition within the framework of GRIDGEN2D. The incorporation of face decomposition reduces complex surfaces to simple shapes. These simple shapes are combined to obtain the final face definition. The advantages of this method include the reduction of overall grid generation time through the use of vectorized computer code, the elimination of the need to generate matching block faces, and the implementation of simplified boundary conditions. A simple axisymmetric grid is used to illustrate this method. In addition, volume grids for two complex configurations, the Langley Lifting Body (HL-20) and the Space Shuttle Orbiter, are shown.
Multi-dimensional photonic states from a quantum dot
NASA Astrophysics Data System (ADS)
Lee, J. P.; Bennett, A. J.; Stevenson, R. M.; Ellis, D. J. P.; Farrer, I.; Ritchie, D. A.; Shields, A. J.
2018-04-01
Quantum states superposed across multiple particles or degrees of freedom offer an advantage in the development of quantum technologies. Creating these states deterministically and with high efficiency is an ongoing challenge. A promising approach is the repeated excitation of multi-level quantum emitters, which have been shown to naturally generate light with quantum statistics. Here we describe how to create one class of higher dimensional quantum state, a so called W-state, which is superposed across multiple time bins. We do this by repeated Raman scattering of photons from a charged quantum dot in a pillar microcavity. We show this method can be scaled to larger dimensions with no reduction in coherence or single-photon character. We explain how to extend this work to enable the deterministic creation of arbitrary time-bin encoded qudits.
NASA Astrophysics Data System (ADS)
Dolgov, S. V.; Smirnov, A. P.; Tyrtyshnikov, E. E.
2014-04-01
We consider numerical modeling of the Farley-Buneman instability in the Earth's ionosphere plasma. The ion behavior is governed by the kinetic Vlasov equation with the BGK collisional term in the four-dimensional phase space, and since the finite difference discretization on a tensor product grid is used, this equation becomes the most computationally challenging part of the scheme. To relax the complexity and memory consumption, an adaptive model reduction using the low-rank separation of variables, namely the Tensor Train format, is employed. The approach was verified via a prototype MATLAB implementation. Numerical experiments demonstrate the possibility of efficient separation of space and velocity variables, resulting in the solution storage reduction by a factor of order tens.
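A plain TT-SVD of a small four-dimensional array gives the flavor of the low-rank separation of variables used here; the sketch below is a generic decomposition with an assumed maximum rank and a random tensor, not the paper's Vlasov-BGK solver or its adaptive rank control.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` into a list of TT cores by sequential truncated SVD."""
    shape = tensor.shape
    cores, r_prev, mat = [], 1, tensor
    for n in shape[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * Vt[:r]          # carry the remainder to the next core
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

f = np.random.default_rng(4).normal(size=(16, 16, 16, 16))   # placeholder 4-D data
cores = tt_svd(f, max_rank=8)
print([c.shape for c in cores])   # e.g. (1,16,8), (8,16,8), (8,16,8), (8,16,1)
```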
Iterative methods for dose reduction and image enhancement in tomography
Miao, Jianwei; Fahimian, Benjamin Pooya
2012-09-18
A system and method for creating a three dimensional cross sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods.
Data on Support Vector Machines (SVM) model to forecast photovoltaic power.
Malvoni, M; De Giorgi, M G; Congedo, P M
2016-12-01
The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion together with principal component analysis (PCA) is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12 and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material.
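The sketch below illustrates the general PCA-then-kernel-regression pattern on synthetic features; scikit-learn's SVR is used as a stand-in for LS-SVM, and the feature construction, component count and kernel settings are assumptions rather than the published configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 40))                           # hypothetical weather/lag features
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=1000)    # toy PV power target

model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      SVR(kernel="rbf", C=10.0))
model.fit(X[:800], y[:800])
print("held-out R^2:", round(model.score(X[800:], y[800:]), 3))
```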
Dimensionality Control of d-orbital Occupation in Oxide Superlattices
Jeong, Da Woon; Choi, Woo Seok; Okamoto, Satoshi; Kim, Jae–Young; Kim, Kyung Wan; Moon, Soon Jae; Cho, Deok–Yong; Lee, Ho Nyung; Noh, Tae Won
2014-01-01
Manipulating the orbital state in a strongly correlated electron system is of fundamental and technological importance for exploring and developing novel electronic phases. Here, we report an unambiguous demonstration of orbital occupancy control between t2g and eg multiplets in quasi-two-dimensional transition metal oxide superlattices (SLs) composed of a Mott insulator LaCoO3 and a band insulator LaAlO3. As the LaCoO3 sublayer thickness approaches its fundamental limit (i.e. one unit cell thick), the electronic state of the SLs changed from a Mott insulator, in which both t2g and eg orbitals are partially filled, to a band insulator by completely filling the t2g orbitals and emptying the eg orbitals. We found that the reduction of dimensionality has a profound effect on the evolution of the electronic structure, which is, by contrast, insensitive to the epitaxial strain. The remarkable orbital controllability shown here offers a promising pathway for novel applications such as catalysis and photovoltaics, where the energy of the d level is an essential parameter. PMID:25134975
Using learning automata to determine proper subset size in high-dimensional spaces
NASA Astrophysics Data System (ADS)
Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz
2017-03-01
In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches to feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective aim is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and its efficiency. First, using an existing weighting function, the feature list is sorted and subsets of different sizes are selected from the list. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance in attaining the identified goal.
Enhanced superconductivity in atomically thin TaS2
Navarro-Moratalla, Efrén; Island, Joshua O.; Mañas-Valero, Samuel; Pinilla-Cienfuegos, Elena; Castellanos-Gomez, Andres; Quereda, Jorge; Rubio-Bollinger, Gabino; Chirolli, Luca; Silva-Guillén, Jose Angel; Agraït, Nicolás; Steele, Gary A.; Guinea, Francisco; van der Zant, Herre S. J.; Coronado, Eugenio
2016-01-01
The ability to exfoliate layered materials down to the single layer limit has presented the opportunity to understand how a gradual reduction in dimensionality affects the properties of bulk materials. Here we use this top–down approach to address the problem of superconductivity in the two-dimensional limit. The transport properties of electronic devices based on 2H tantalum disulfide flakes of different thicknesses are presented. We observe that superconductivity persists down to the thinnest layer investigated (3.5 nm), and interestingly, we find a pronounced enhancement in the critical temperature from 0.5 to 2.2 K as the layers are thinned down. In addition, we propose a tight-binding model, which allows us to attribute this phenomenon to an enhancement of the effective electron–phonon coupling constant. This work provides evidence that reducing the dimensionality can strengthen superconductivity as opposed to the weakening effect that has been reported in other 2D materials so far. PMID:26984768
A Multivariate Granger Causality Concept towards Full Brain Functional Connectivity.
Schmidt, Christoph; Pester, Britta; Schmid-Hertel, Nicole; Witte, Herbert; Wismüller, Axel; Leistritz, Lutz
2016-01-01
Detecting changes of spatially high-resolution functional connectivity patterns in the brain is crucial for improving the fundamental understanding of brain function in both health and disease, yet still poses one of the biggest challenges in computational neuroscience. Currently, classical multivariate Granger Causality analyses of directed interactions between single process components in coupled systems are commonly restricted to spatially low-dimensional data, which requires a pre-selection or aggregation of time series as a preprocessing step. In this paper we propose a new fully multivariate Granger Causality approach with embedded dimension reduction that makes it possible to obtain a representation of functional connectivity for spatially high-dimensional data. The resulting functional connectivity networks may consist of several thousand vertices and thus contain more detailed information compared to connectivity networks obtained from approaches based on particular regions of interest. Our large-scale Granger Causality approach is applied to synthetic and resting state fMRI data with a focus on how well network community structure, which represents a functional segmentation of the network, is preserved. It is demonstrated that a number of different community detection algorithms, which utilize a variety of algorithmic strategies and exploit topological features differently, reveal meaningful information on the underlying network module structure.
Rotorcraft In-Plane Noise Reduction Using Active/Passive Approaches with Induced Vibration Tracking
NASA Astrophysics Data System (ADS)
Chia, Miang Hwee
A comprehensive study of the use of active and passive approaches for in-plane noise reduction, including the vibrations induced during noise reduction, was conducted on a hingeless rotor configuration resembling the MBB BO-105 rotor. First, a parametric study was performed to examine the effects of rotor blade stiffness on the vibration and noise reduction performance of a 20%c plain trailing edge flap and a 1.5%c sliding microflap. This was accomplished using the comprehensive code AVINOR (for Active VIbration and NOise Reduction). A two-dimensional unsteady reduced order aerodynamic model (ROM), using the Rational Function Approximation approach and CFD-based oscillatory aerodynamic load data, was used in the comprehensive code. The study identified a hingeless blade configuration with a torsional frequency of 3.17/rev as an optimum configuration for studying vibration and noise reduction using on-blade control devices such as flaps or microflaps. Subsequently, a new suite of computational tools capable of predicting in-plane low frequency sound pressure level (LFSPL) rotorcraft noise and its control was developed, replacing the acoustic module WOPWOP in AVINOR with a new acoustic module HELINOIR (for HELIcopter NOIse Reduction), which overcomes certain limitations associated with WOPWOP. The new suite, consisting of the AVINOR/HELINOIR combination, was used to study active flaps, as well as microflaps operating in closed-loop mode, for in-plane noise reduction. An alternative passive in-plane noise reduction approach using modification of the blade tip in the 10%R outboard region was also studied. The new suite, consisting of the AVINOR/HELINOIR combination based on a compact aeroacoustic model, was validated by comparison with wind tunnel test results and subsequently verified by comparison with computational results. For active control, the in-plane noise reduction obtained with a single 20%c plain trailing edge flap during level flight at a moderate advance ratio was examined. Different configurations of far-field and near-field feedback microphone locations were examined to develop a fundamental understanding of the effect of feedback microphone locations on the noise reduction process. A near-field microphone located on the tip of a nose boom was found to produce an LFSPL reduction of up to 6 dB. However, this noise reduction was accompanied by an out-of-plane noise increase of 18 dB and a 60% increase in vertical hub shear. For passive control, three tip geometries having sweep, dihedral, and anhedral were considered. The tip dihedral reduced LFSPL by up to 2 dB without a vibratory load penalty. However, this was accompanied by an increase in the mid frequency sound pressure levels (MFSPL). The tip sweep and tip anhedral produced an increase in in-plane LFSPL below the horizon. A comparison of the active and passive approaches indicated that the active approach, implemented by a plain flap with a feedback microphone located on the nose boom, is superior to the passive control approaches. However, there is a general trade-off between LFSPL reduction, MFSPL generation and the vibratory hub loads induced by noise control.
A Comparative Study of Rat Lung Decellularization by Chemical Detergents for Lung Tissue Engineering
Tebyanian, Hamid; Karami, Ali; Motavallian, Ebrahim; Aslani, Jafar; Samadikuchaksaraei, Ali; Arjmand, Babak; Nourani, Mohammad Reza
2017-01-01
BACKGROUND: Lung disease is the most common cause of death in the world. The final treatment option for end-stage pulmonary disease is lung transplantation. The limitation and shortage of donor organs gave rise to the field of tissue engineering. Decellularization offers hope for producing intact ECM for the development of engineered organs. AIM: The goal of the decellularization process is to remove cellular and nuclear material while retaining the three-dimensional lung architecture and matrix proteins. Different concentrations of detergents were used to find the best approach to lung decellularization. MATERIAL AND METHODS: In this study, three time points (24, 48 and 96 h) and four detergents (CHAPS, SDS, SDC and Triton X-100) were used to decellularize rat lungs while preserving the three-dimensional lung architecture and ECM protein composition, which have significant roles in the differentiation and migration of stem cells. This comparative study determined that different decellularization approaches can have significantly different effects on decellularized lungs. RESULTS: Results showed that destruction increased with increasing detergent concentration. Single detergents showed a significant reduction in the preservation of the three-dimensional lung structure and ECM proteins (collagen and elastin). The best methods were mixtures of the detergents SDC and CHAPS at low concentration for 48 and 96 h of decellularization. CONCLUSION: Decellularized lung tissue can be used in the laboratory to study various aspects of pulmonary biology and physiology, and these results can support the continued improvement of engineered lung tissue. PMID:29362610
Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo
2017-01-01
Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
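The reduce-then-classify pattern at the core of this pipeline can be sketched as follows with synthetic pixel spectra: LDA compresses the spectral bands to a few discriminative components before a Random Forest assigns one of three infection classes. Texture features, integral images and the actual hyperspectral data are not reproduced, and all sizes are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 200))          # synthetic per-pixel spectra (200 bands)
y = rng.integers(0, 3, size=2000)         # healthy / infected / severely diseased

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X[:1500], y[:1500])
print("hold-out accuracy:", round(clf.score(X[1500:], y[1500:]), 3))
```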
A parametric model order reduction technique for poroelastic finite element models.
Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico
2017-10-01
This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
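A global reduced-order basis of the kind described here is commonly obtained from a truncated SVD of solution snapshots. The sketch below shows only that proper orthogonal decomposition step with a random placeholder snapshot matrix; the Biot-Allard FE matrices, the affine frequency representation and the actual projection are assumed to live elsewhere.

```python
import numpy as np

rng = np.random.default_rng(7)
n_dof, n_snapshots = 5000, 60
snapshots = rng.normal(size=(n_dof, n_snapshots))   # placeholder frequency-response solutions

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1          # keep 99.9% of snapshot energy
Phi = U[:, :r]                                       # global reduced-order basis

# A full-order operator A(omega) would then be projected as Phi.T @ A @ Phi (r x r).
print("reduced dimension:", r)
```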
2012-01-01
Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving the predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both the parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem; therefore we also aim to identify a low-dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from 'snapshots' drawn from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that, using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Králová, Blanka
2011-12-01
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and the corresponding transition structures inaccessible by an unbiased simulation. This scheme makes it possible to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to the 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
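The dimensionality-reduction step alone can be sketched with scikit-learn's Isomap: aligned Cartesian coordinates of sampled conformations (72 numbers per frame in the cyclooctane example) are embedded into three components, which would then serve as collective variables. The data and neighbor count are assumptions, and the metadynamics bias itself is not reproduced.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(8)
conformations = rng.normal(size=(1000, 72))   # placeholder aligned 72-D coordinates

cv = Isomap(n_neighbors=12, n_components=3).fit_transform(conformations)
print(cv.shape)                                # (1000, 3) candidate collective variables
```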
N-Dimensional LLL Reduction Algorithm with Pivoted Reflection
Deng, Zhongliang; Zhu, Di
2018-01-01
The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input-multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition of the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm converges within finitely many steps and always produces better results than the original LLL reduction algorithm for n > 2. The simulations clearly show that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with a 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm. PMID:29351224
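For reference, a compact version of the classical LLL reduction (delta = 0.75) that n-LLL generalizes is sketched below; the n-dimensional Lovász condition and the pivoted Householder reflections of the paper are not implemented, and the repeated full Gram-Schmidt recomputation is chosen for clarity rather than speed.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization B* and coefficient matrix mu for the rows of B."""
    n = B.shape[0]
    Bs, mu = B.astype(float).copy(), np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def lll(B, delta=0.75):
    """Classical LLL reduction of the rows of basis matrix B."""
    B = B.astype(float).copy()
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                 # size reduction
            q = int(round(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1                                     # Lovász condition satisfied
        else:
            B[[k, k - 1]] = B[[k - 1, k]]              # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

print(lll(np.array([[201., 37.], [1648., 297.]])))     # toy 2-D basis
```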
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
Hypothesis on the nature of time
NASA Astrophysics Data System (ADS)
Coumbe, D. N.
2015-06-01
We present numerical evidence that fictitious diffusing particles in the causal dynamical triangulation (CDT) approach to quantum gravity exceed the speed of light on small distance scales. We argue this superluminal behavior is responsible for the appearance of dimensional reduction in the spectral dimension. By axiomatically enforcing a scale invariant speed of light we show that time must dilate as a function of relative scale, just as it does as a function of relative velocity. By calculating the Hausdorff dimension of CDT diffusion paths we present a seemingly equivalent dual description in terms of a scale dependent Wick rotation of the metric. Such a modification to the nature of time may also have relevance for other approaches to quantum gravity.
2010-01-01
Background Epistasis is recognized as a fundamental part of the genetic architecture of individuals. Several computational approaches have been developed to model gene-gene interactions in case-control studies, however, none of them is suitable for time-dependent analysis. Herein we introduce the Survival Dimensionality Reduction (SDR) algorithm, a non-parametric method specifically designed to detect epistasis in lifetime datasets. Results The algorithm requires neither specification about the underlying survival distribution nor about the underlying interaction model and proved satisfactorily powerful to detect a set of causative genes in synthetic epistatic lifetime datasets with a limited number of samples and high degree of right-censorship (up to 70%). The SDR method was then applied to a series of 386 Dutch patients with active rheumatoid arthritis that were treated with anti-TNF biological agents. Among a set of 39 candidate genes, none of which showed a detectable marginal effect on anti-TNF responses, the SDR algorithm did find that the rs1801274 SNP in the FcγRIIa gene and the rs10954213 SNP in the IRF5 gene non-linearly interact to predict clinical remission after anti-TNF biologicals. Conclusions Simulation studies and application in a real-world setting support the capability of the SDR algorithm to model epistatic interactions in candidate-genes studies in presence of right-censored data. Availability: http://sourceforge.net/projects/sdrproject/ PMID:20691091
Suzuki, Eduardo Yugo; Watanabe, Masayo; Buranastidporn, Boonsiva; Baba, Yoshiyuki; Ohyama, Kimie; Ishii, Masatoshi
2006-01-01
The simultaneous use of cleft reduction and maxillary advancement by distraction osteogenesis has not been applied routinely because of the difficulty in three-dimensional control and stabilization of the transported segments. This report describes a new approach for simultaneous bilateral alveolar cleft reduction and maxillary advancement by distraction osteogenesis combined with autogenous bone grafting. A custom-made Twin-Track device was used to allow bilateral alveolar cleft closure combined with simultaneous maxillary advancement, using distraction osteogenesis and a rigid external distraction system, in a bilateral cleft lip and palate patient. After a maxillary Le Fort I osteotomy, autogenous iliac bone graft was placed in the cleft spaces before suturing. A latency period of six days was observed before activation. The rate of activation was 1 mm/d for the maxillary advancement and 0.5 mm/d for the segmental transport. Accordingly, the concave facial appearance was improved with acceptable occlusion, and complete bilateral cleft closure was attained. No adjustments to the vector of the transported segments were necessary during activation, and no complications were observed. The proposed Twin-Track device, based on the concept of track-guided bone transport, permitted three-dimensional control over the distraction processes, allowing simultaneous cleft closure, maxillary distraction, and autogenous bone grafting. The combined simultaneous approach is extremely advantageous in correcting severe deformities, reducing the number of surgical interventions and, consequently, the total treatment time.
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.
Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua
2016-05-01
Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resulting algorithm is labeled here as the Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons were conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of FSVD-H-ELM.
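A stripped-down sketch of the SVD-hidden-node idea follows: input weights of a single hidden layer are taken from the right singular vectors of a random data subset, and the output weights are then solved in closed form as in a basic ELM. The activation, ridge term, subset size and data are assumptions, and the divide-and-conquer aggregation over multiple subsets is omitted.

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(5000, 300))               # toy high-dimensional inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy binary labels
T = np.eye(2)[y]                                # one-hot targets

subset = X[rng.choice(len(X), 500, replace=False)]
_, _, Vt = np.linalg.svd(subset, full_matrices=False)
W = Vt[:64].T                                   # SVD-derived input weights (300 x 64)
H = np.tanh(X @ W)                              # hidden-layer activations
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(64), H.T @ T)   # closed-form output weights

print("training accuracy:", round(((H @ beta).argmax(axis=1) == y).mean(), 3))
```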
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo method often remains the method of choice because its convergence rate O(n^{-1/2}), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for situations where an exact SDR subspace exists, is proved to converge at rate O(n^{-1}), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain could still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
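One standard inverse-regression estimator of the SDR subspace is sliced inverse regression (SIR); a minimal sketch follows in which a synthetic QoI depends on only two directions of a 20-dimensional input. The slicing scheme and data are assumptions, and the response-surface and control-variate stages of IRUQ are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)
n, d = 5000, 20
X = rng.normal(size=(n, d))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 3 + 0.05 * rng.normal(size=n)   # QoI on two directions

mu, L = X.mean(axis=0), np.linalg.cholesky(np.cov(X.T))
Z = np.linalg.solve(L, (X - mu).T).T               # standardized inputs

M = np.zeros((d, d))
for idx in np.array_split(np.argsort(y), 10):      # 10 slices of the QoI
    m = Z[idx].mean(axis=0)
    M += (len(idx) / n) * np.outer(m, m)            # weighted slice-mean covariance

vals, vecs = np.linalg.eigh(M)
directions = np.linalg.solve(L.T, vecs[:, ::-1][:, :2])   # top-2 SDR directions (X scale)
print(directions.shape)                             # (20, 2)
```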
A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony.
Zhang, J W; Rangan, A V
2015-04-01
In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks down to an augmented 4-dimensional system of ordinary-differential-equations. The class of integrate-and-fire networks we focus on are homogeneously-structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of 'multiple-firing-events' involving the semi-synchronous firing of many neurons. These multiple-firing-events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes ranging from homogeneous to synchronous. Our reduction is based on first principles, and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the 4-dimensional augmented ODE.
NASA Astrophysics Data System (ADS)
Hunziker, Jürg; Laloy, Eric; Linde, Niklas
2016-04-01
Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM_(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
Roger M. Rowell; Rebecca E. Ibach; James McSweeny; Thomas Nilsson
2009-01-01
Reductions in hygroscopicity, increased dimensional stability and decay resistance of heat-treated wood depend on decomposition of a large portion of the hemicelluloses in the wood cell wall. In theory, these hemicelluloses are converted to small organic molecules, water and volatile furan-type intermediates that can polymerize in the cell wall. Reductions in...
Locating landmarks on high-dimensional free energy surfaces
Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark E.
2015-01-01
Coarse graining of complex systems possessing many degrees of freedom can often be a useful approach for analyzing and understanding key features of these systems in terms of just a few variables. The relevant energy landscape in a coarse-grained description is the free energy surface as a function of the coarse-grained variables, which, despite the dimensional reduction, can still be an object of high dimension. Consequently, navigating and exploring this high-dimensional free energy surface is a nontrivial task. In this paper, we use techniques from multiscale modeling, stochastic optimization, and machine learning to devise a strategy for locating minima and saddle points (termed “landmarks”) on a high-dimensional free energy surface “on the fly” and without requiring prior knowledge of or an explicit form for the surface. In addition, we propose a compact graph representation of the landmarks and connections between them, and we show that the graph nodes can be subsequently analyzed and clustered based on key attributes that elucidate important properties of the system. Finally, we show that knowledge of landmark locations allows for the efficient determination of their relative free energies via enhanced sampling techniques. PMID:25737545
SLLE for predicting membrane protein types.
Wang, Meng; Yang, Jie; Xu, Zhi-Jie; Chou, Kuo-Chen
2005-01-07
Introduction of the concept of pseudo amino acid composition (PROTEINS: Structure, Function, and Genetics 43 (2001) 246; Erratum: ibid. 44 (2001) 60) has made it possible to incorporate a considerable amount of sequence-order effects by representing a protein sample in terms of a set of discrete numbers, and hence can significantly enhance the prediction quality of membrane protein type. As a continuing effort along this line, the Supervised Locally Linear Embedding (SLLE) technique for nonlinear dimensionality reduction is introduced (Science 290 (2000) 2323). The advantage of using SLLE is that it can reduce the operational space by extracting the essential features from the high-dimensional pseudo amino acid composition space, and that the cluster-tolerant capacity can be increased accordingly. As a consequence of combining these two approaches, high success rates were observed in tests by self-consistency, jackknife, and an independent data set, respectively, using the simplest nearest neighbour classifier. The current approach represents a new strategy for dealing with problems of protein attribute prediction, and hence may become a useful vehicle in the areas of bioinformatics and proteomics.
Topological data analysis of contagion maps for examining spreading processes on networks.
Taylor, Dane; Klimm, Florian; Harrington, Heather A; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A; Mucha, Peter J
2015-07-21
Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges-for example, due to airline transportation or communication media-allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct 'contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.
Topological data analysis of contagion maps for examining spreading processes on networks
NASA Astrophysics Data System (ADS)
Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.
2015-07-01
Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges--for example, due to airline transportation or communication media--allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct `contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.
Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA
NASA Astrophysics Data System (ADS)
He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong
2018-04-01
This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique (bilateral 2D linear discriminant analysis) is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for analog circuit fault classification demonstrate that the proposed diagnosis scheme has an advantage over other approaches.
A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.
Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu
2015-12-01
Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.
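For orientation, the tropical equilibration condition at the heart of the method can be sketched as follows; this is the generic form for a polynomial system, with notation chosen here for illustration rather than taken verbatim from the paper. Writing concentrations and rate constants as powers of a small parameter, equilibration for a species requires that the dominant production and consumption orders balance:

```latex
% Sketch: polynomial kinetics with orders of magnitude expressed in a small parameter eps.
\begin{align}
  \frac{dx_i}{dt} &= \sum_{j} S_{ij}\, k_j\, x^{\alpha_j},
  \qquad x_i \sim \varepsilon^{a_i}, \quad k_j \sim \varepsilon^{\gamma_j},\\
  \min_{j:\,S_{ij}>0}\bigl(\gamma_j + \langle \alpha_j, a\rangle\bigr)
  &= \min_{j:\,S_{ij}<0}\bigl(\gamma_j + \langle \alpha_j, a\rangle\bigr)
  \qquad \text{(tropical equilibration for species } i\text{)}.
\end{align}
```

Solutions of these min-plus (tropical) equations give the orders of magnitude at which slow invariant manifolds can be approximated, without requiring precise parameter values.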
Interatrial septum pacing guided by three-dimensional intracardiac echocardiography.
Szili-Torok, Tamas; Kimman, Geert Jan P; Scholten, Marcoen F; Ligthart, Jurgen; Bruining, Nico; Theuns, Dominic A M J; Klootwijk, Peter J; Roelandt, Jos R T C; Jordaens, Luc J
2002-12-18
Currently, the interatrial septum (IAS) pacing site is indirectly selected by fluoroscopy and P-wave analysis. The aim of the present study was to develop a novel approach for IAS pacing using intracardiac echocardiography (ICE). Interatrial septum pacing may be beneficial for the prevention of paroxysmal atrial fibrillation. Cross-sectional images are acquired during a pull-back of the ICE transducer from the superior vena cava into the inferior vena cava by an electrocardiogram- and respiration-gated technique. Both atria are then reconstructed using three-dimensional (3D) imaging. Using an "en face" view of the IAS, the desired pacing site is selected. Following lead placement and electrical testing, another 3D reconstruction is performed to verify the final lead position. Twelve patients were included in this study. IAS pacing was achieved in all patients, including six suprafossal (SF) and six infrafossal (IF) lead locations, all confirmed by 3D imaging. The mean durations of atrial lead implantation and fluoroscopy were 70 ± 48.9 min and 23.7 ± 20.6 min, respectively. The IAS pacing resulted in a significant reduction of the P-wave duration as compared to sinus rhythm (98.9 ± 19.3 ms vs. 141.3 ± 8.6 ms; p < 0.002). The SF pacing showed a greater reduction of the P-wave duration than IF pacing (59.4 ± 6.6 ms vs. 30.2 ± 13.6 ms; p < 0.004). Three-dimensional ICE is a feasible tool for guiding IAS pacing.
NASA Astrophysics Data System (ADS)
Ye, Fei; Marchetti, P. A.; Su, Z. B.; Yu, L.
2017-09-01
The relation between braid and exclusion statistics is examined in one-dimensional systems, within the framework of Chern-Simons statistical transmutation in gauge invariant form with an appropriate dimensional reduction. If the matter action is anomalous, as for chiral fermions, a relation between braid and exclusion statistics can be established explicitly for both mutual and nonmutual cases. However, if it is not anomalous, the exclusion statistics of emergent low energy excitations is not necessarily connected to the braid statistics of the physical charged fields of the system. Finally, we also discuss the bosonization of one-dimensional anyonic systems through T-duality. Dedicated to the memory of Mario Tonin.
Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches
NASA Astrophysics Data System (ADS)
H, Vathsala; Koolagudi, Shashidhar G.
2017-10-01
This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed itemset generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. The application to predicting rainfall categories of flood, excess, normal, deficit, and drought based on 36 predictors consisting of land and ocean variables is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).
Kernel PLS-SVC for Linear and Nonlinear Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan
2003-01-01
A new methodology for discrimination is proposed. This is based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS with Fisher's approach to linear discrimination, or equivalently with canonical correlation analysis, is described. This motivates a preference for orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram data.
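A minimal sketch of the two-stage scheme is given below, assuming training data (X, y); ordinary PLS applied to a centred RBF kernel matrix stands in for a dedicated kernel orthonormalized PLS routine, and the kernel width gamma and number of components are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.preprocessing import KernelCenterer, LabelBinarizer
from sklearn.svm import SVC

def kpls_svc_fit(X, y, n_components=5, gamma=0.1):
    centerer = KernelCenterer()
    K = centerer.fit_transform(rbf_kernel(X, X, gamma=gamma))   # kernel trick: work with K instead of X
    Y = LabelBinarizer().fit_transform(y)                       # class indicator matrix
    pls = PLSRegression(n_components=n_components).fit(K, Y)
    svc = SVC(kernel="linear").fit(pls.transform(K), y)         # classify in the PLS score space
    return centerer, pls, svc

# For test samples, the kernel against the training set must be centred with the same
# centerer before calling pls.transform and svc.predict.
```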
Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate that the new methods can match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
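A stripped-down sketch of the DR step follows, assuming a lesion feature matrix X and malignancy labels y; linear discriminant analysis stands in for the MCMC-BANN, and, as in the study, the whole data set is embedded before classifier evaluation (t-SNE has no out-of-sample mapping), so the estimate is optimistic.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding, TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def reduced_auc(X, y, method="tsne", n_components=3):
    # Map the high-dimensional feature space to a low-dimensional embedding.
    if method == "laplacian":
        Z = SpectralEmbedding(n_components=n_components).fit_transform(X)
    else:
        Z = TSNE(n_components=n_components, init="pca", perplexity=30).fit_transform(X)
    # Evaluate malignancy classification on the embedded coordinates.
    return cross_val_score(LinearDiscriminantAnalysis(), Z, y,
                           cv=5, scoring="roc_auc").mean()
```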
Classification Accuracy Increase Using Multisensor Data Fusion
NASA Astrophysics Data System (ADS)
Makarau, A.; Palubinskas, G.; Reinartz, P.
2011-09-01
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc., and therefore may result in misinterpretation and misuse of classification products. Employment of hyperspectral data is another solution, but their lower spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework provides a consistent way of combining multisource data, following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method in comparison to other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, railroads, etc.
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Stomach cancer continues to cause deaths, and early diagnosis is crucial in reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection are developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. At the same time, Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps methods are used for dimensionality reduction of the features. The high dimension of these features has been reduced to lower dimensions using these dimensionality reduction methods. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these new lower-dimensional feature sets. New systems were developed to measure the effect of dimensionality by obtaining features of different dimensions with the dimensionality reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
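The LBP_MDS_ANN route singled out above can be sketched roughly as follows, assuming a list of greyscale image arrays and class labels; the LBP settings, embedding dimension and network size are illustrative, and HOG features are omitted for brevity.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.manifold import MDS, LocallyLinearEmbedding
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def lbp_histogram(img, P=8, R=1.0, bins=59):
    # Texture descriptor: histogram of non-rotation-invariant uniform LBP codes.
    codes = local_binary_pattern(img, P, R, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return hist

def lbp_mds_ann_accuracy(images, labels, reducer="mds", n_components=10):
    X = np.array([lbp_histogram(im) for im in images])
    dr = (MDS(n_components=n_components) if reducer == "mds"
          else LocallyLinearEmbedding(n_components=n_components))
    Z = dr.fit_transform(X)                         # dimensionality reduction of the LBP features
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    return cross_val_score(clf, Z, labels, cv=5).mean()
```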
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of in the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data presents a challenge to current techniques for analyzing the data. Conventional classification methods may not be useful without dimension reduction pre-processing, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition could be useful because it preserves the distinction among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields better class separation and better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, although Daubechies decomposition takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
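The spectral reduction step can be sketched as below, assuming a hyperspectral cube of shape (rows, cols, bands); each pixel spectrum is replaced by its wavelet approximation coefficients, and switching the wavelet name between 'haar' and 'db4' reproduces the comparison discussed above (the decomposition level is an illustrative choice).

```python
import numpy as np
import pywt

def wavelet_reduce(cube, wavelet="db4", level=3):
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands)
    # Keep only the approximation coefficients at the chosen level, so each pixel's
    # spectrum is compressed to a much shorter vector before classification.
    reduced = np.array([pywt.wavedec(s, wavelet, level=level)[0] for s in spectra])
    return reduced.reshape(rows, cols, -1)
```

The reduced cube can then be fed, pixel by pixel, to any conventional classifier.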
Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhbardeh, Alireza; Jacobs, Michael A.; Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
2012-04-15
Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.
Spiwok, Vojtěch; Králová, Blanka
2011-12-14
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map their 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and the corresponding transition structures inaccessible by an unbiased simulation. This scheme allows the use of essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics.
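The Isomap mapping and an out-of-sample extension can be sketched with scikit-learn, assuming an array confs of shape (n_frames, 72) of flattened Cartesian coordinates and an array new_frames of later conformations; the neighbourhood size is illustrative. The metadynamics bias itself would then act on the three returned coordinates inside the simulation engine.

```python
import numpy as np
from sklearn.manifold import Isomap

iso = Isomap(n_neighbors=12, n_components=3).fit(confs)   # 72D -> 3D embedding
cv_train = iso.transform(confs)        # collective variables of the training conformations
cv_new = iso.transform(new_frames)     # out-of-sample mapping for conformations seen later
```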
Musser, Jonathan W.
2008-01-01
Potential flow characteristics of future flooding along a 4.8-mile reach of the Flint River in Albany, Georgia, were simulated using recent digital-elevation-model data and the U.S. Geological Survey finite-element surface-water modeling system for two-dimensional flow in the horizontal plane (FESWMS-2DH). The model was run at four water-surface altitudes at the Flint River at Albany streamgage (02352500): 181.5-foot (ft) altitude with a flow of 61,100 cubic feet per second (ft3/s), 184.5-ft altitude with a flow of 75,400 ft3/s, 187.5-ft altitude with a flow of 91,700 ft3/s, and 192.5-ft altitude with a flow of 123,000 ft3/s. The model was run to measure changes in inundated areas and water-surface altitudes for eight scenarios of possible modifications to the 4.8-mile reach on the Flint River. The eight scenarios include removing a human-made peninsula located downstream from Oglethorpe Boulevard, increasing the opening under the Oakridge Drive bridge, adding culverts to the east Oakridge Drive bridge approach, adding culverts to the east and west Oakridge Drive bridge approaches, adding an overflow across the oxbow north of Oakridge Drive, making the overflow into a channel, removing the Oakridge Drive bridge, and adding a combination of an oxbow overflow and culverts on both Oakridge Drive bridge approaches. The modeled inundation and water-surface altitude changes were mapped for use in evaluating the river modifications. The most effective scenario at reducing inundated area was the combination scenario. At the 187.5-ft altitude, the inundated area decreased from 4.24 square miles to 4.00 square miles. The remove-peninsula scenario was the least effective with a reduction in inundated area of less than 0.01 square miles. In all scenarios, the inundated area reduction increased with water-surface altitude, peaking at the 187.5-ft altitude. The inundated area reduction then decreased at the gage altitude of 192.5 ft.
Higher-dimensional Bianchi type-VIh cosmologies
NASA Astrophysics Data System (ADS)
Lorenz-Petzold, D.
1985-09-01
The higher-dimensional perfect fluid equations of a generalization of the (1 + 3)-dimensional Bianchi type-VIh space-time are discussed. Bianchi type-V and Bianchi type-III space-times are also included as special cases. It is shown that the Chodos-Detweiler (1980) mechanism of cosmological dimensional-reduction is possible in these cases.
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of this feature set significantly speeds up the prediction task. Feature selection and feature transformation methods are well known preprocessing steps in the field of bioinformatics, and several prediction tools are available based on these techniques. Studies show that a well tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well tuned KPCA and a Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
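A minimal sketch of a KPCA-plus-classifier pipeline with data-driven bandwidth selection is shown below; cross-validated AUC over a grid of gamma values is a generic stand-in for the least-squares-cross-validation criterion proposed in the paper, an ordinary SVM stands in for the LS-SVM, and X_train, y_train denote assumed (binary-labelled) microarray data.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([
    ("kpca", KernelPCA(kernel="rbf", n_components=20)),   # dimensionality reduction step
    ("svm", SVC(kernel="linear")),                        # stand-in for the LS-SVM classifier
])
grid = GridSearchCV(pipe, {"kpca__gamma": np.logspace(-4, 1, 12)},
                    scoring="roc_auc", cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```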
An object-oriented data reduction system in Fortran
NASA Technical Reports Server (NTRS)
Bailey, J.
1992-01-01
A data reduction system for the AAO two-degree field project is being developed using an object-oriented approach. Rather than use an object-oriented language (such as C++), the system is written in Fortran and makes extensive use of existing subroutine libraries provided by the UK Starlink project. Objects are created using the extensible N-dimensional Data Format (NDF), which itself is based on the Hierarchical Data System (HDS). The software consists of a class library, with each class corresponding to a Fortran subroutine with a standard calling sequence. The methods of the classes provide operations on NDF objects at a similar level of functionality to the applications of conventional data reduction systems. However, because they are provided as callable subroutines, they can be used as building blocks for more specialist applications. The class library is not dependent on a particular software environment, though it can be used effectively in ADAM applications. It can also be used from standalone Fortran programs. It is intended to develop a graphical user interface for use with the class library to form the 2dF data reduction system.
Rauh, Virginia A.; Margolis, Amy
2016-01-01
Background: Environmental exposures play a critical role in the genesis of some child mental health problems. Methods: We open with a discussion of children's vulnerability to neurotoxic substances, changes in the distribution of toxic exposures, and co-occurrence of social and physical exposures. We address trends in prevalence of mental health disorders, and approaches to the definition of disorders that are sensitive to the subtle effects of toxic exposures. We suggest broadening outcomes to include dimensional measures of autism spectrum disorders, attention deficit hyperactivity disorder, and child learning capacity, as well as direct assessment of brain function. Findings: We consider the impact of two important exposures on children's mental health: lead and pesticides. We argue that longitudinal research designs may capture the cascading effects of exposures across biological systems and the full range of neuropsychological endpoints. Neuroimaging is a valuable tool for observing brain maturation under varying environmental conditions. A dimensional approach to measurement may be sensitive to subtle sub-clinical toxic effects, permitting the development of exposure-related profiles and testing of complex functional relationships between brain and behavior. Questions about the neurotoxic effects of chemicals become more pressing when viewed through the lens of environmental justice. Conclusions: Reduction in the burden of child mental health disorders will require longitudinal study of neurotoxic exposures, incorporating dimensional approaches to outcome assessment and measures of brain function. Research that seeks to identify links between toxic exposures and mental health outcomes has enormous public health and societal value. PMID:26987761
Rauh, Virginia A; Margolis, Amy E
2016-07-01
Environmental exposures play a critical role in the genesis of some child mental health problems. We open with a discussion of children's vulnerability to neurotoxic substances, changes in the distribution of toxic exposures, and cooccurrence of social and physical exposures. We address trends in prevalence of mental health disorders, and approaches to the definition of disorders that are sensitive to the subtle effects of toxic exposures. We suggest broadening outcomes to include dimensional measures of autism spectrum disorders, attention-deficit hyperactivity disorder, and child learning capacity, as well as direct assessment of brain function. We consider the impact of two important exposures on children's mental health: lead and pesticides. We argue that longitudinal research designs may capture the cascading effects of exposures across biological systems and the full-range of neuropsychological endpoints. Neuroimaging is a valuable tool for observing brain maturation under varying environmental conditions. A dimensional approach to measurement may be sensitive to subtle subclinical toxic effects, permitting the development of exposure-related profiles and testing of complex functional relationships between brain and behavior. Questions about the neurotoxic effects of chemicals become more pressing when viewed through the lens of environmental justice. Reduction in the burden of child mental health disorders will require longitudinal study of neurotoxic exposures, incorporating dimensional approaches to outcome assessment, and measures of brain function. Research that seeks to identify links between toxic exposures and mental health outcomes has enormous public health and societal value. © 2016 Association for Child and Adolescent Mental Health.
Econo-ESA in semantic text similarity.
Rahutomo, Faisal; Aritsugi, Masayoshi
2014-01-01
Explicit semantic analysis (ESA) utilizes an immense Wikipedia index matrix in its interpreter part. This part of the analysis multiplies a large matrix by a term vector to produce a high-dimensional concept vector. A similarity measurement between two texts is then performed between two concept vectors with numerous dimensions. The cost is high in both the interpretation and the similarity measurement steps. This paper proposes an economic scheme of ESA, named econo-ESA. We investigate two aspects of this proposal: dimensional reduction and experiments with various data. We use eight recycled test collections in semantic text similarity. The experimental results show that both the dimensional reduction and the test collection characteristics can influence the results. They also show that an appropriate concept reduction in econo-ESA can decrease the cost with only minor differences in the results from the original ESA.
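The interpreter and similarity steps, together with a simple concept pruning, can be sketched as below; W is an assumed sparse term-by-concept matrix built offline from Wikipedia, vec is a TfidfVectorizer fitted on the same vocabulary, and the pruning criterion (keeping the densest concept columns) is a stand-in for the reduction scheme studied in the paper.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def esa_similarity(text_a, text_b, vec, W, keep_concepts=None):
    ta = vec.transform([text_a])            # 1 x n_terms TF-IDF term vector
    tb = vec.transform([text_b])
    ca, cb = ta @ W, tb @ W                 # interpreter: term vector -> concept vector
    if keep_concepts is not None:           # econo-ESA idea: work in a reduced concept space
        cols = np.asarray(abs(W).sum(axis=0)).ravel().argsort()[::-1][:keep_concepts]
        ca, cb = ca[:, cols], cb[:, cols]
    return cosine_similarity(ca, cb)[0, 0]  # similarity measured between concept vectors
```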
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
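A minimal sketch of this kind of variational-autoencoder parameterization is given below in PyTorch; the fully connected architecture, layer sizes and latent dimension are illustrative stand-ins (the paper's network is more elaborate), and n_pix denotes the number of grid cells of a flattened binary training image.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_pix, n_latent=20, n_hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pix, n_hidden), nn.ReLU())
        self.mu = nn.Linear(n_hidden, n_latent)
        self.logvar = nn.Linear(n_hidden, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.ReLU(),
                                 nn.Linear(n_hidden, n_pix), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# After training, inversion operates in the low-dimensional latent space: any draw from a
# standard normal decodes to a geologically plausible realization, e.g.
# model = VAE(n_pix); realization = model.dec(torch.randn(1, 20))
```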
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lens, Eelco, E-mail: e.lens@amc.uva.nl; Horst, Astrid van der; Versteijne, Eva
2015-07-01
Purpose: The midventilation (midV) approach can be used to take respiratory-induced pancreatic tumor motion into account during radiation therapy. In this study, the dosimetric consequences for organs at risk and tumor coverage of using a midV approach compared with using an internal target volume (ITV) were investigated. Methods and Materials: For each of the 18 patients, 2 treatment plans (25 × 2.0 Gy) were created, 1 using an ITV and 1 using a midV approach. The midV dose distribution was blurred using the respiratory-induced motion from 4-dimensional computed tomography. The resulting planning target volume (PTV) coverage for this blurred dose distribution was analyzed; PTV coverage was required to be at least V95% > 98%. In addition, the change in PTV size and the changes in V10Gy, V20Gy, V30Gy, V40Gy, Dmean, and D2cc for the stomach and for the duodenum were analyzed; differences were tested for significance using the Wilcoxon signed-rank test. Results: Using a midV approach resulted in sufficient target coverage. A highly significant PTV size reduction of 13.9% (P<.001) was observed. Also, all dose parameters for the stomach and duodenum, except the D2cc of the duodenum, improved significantly (P≤.002). Conclusions: By using the midV approach to account for respiratory-induced tumor motion, a significant PTV reduction and significant dose reductions to the stomach and to the duodenum can be achieved when irradiating pancreatic tumors.
Integrand Reduction Reloaded: Algebraic Geometry and Finite Fields
NASA Astrophysics Data System (ADS)
Sameshima, Ray D.; Ferroglia, Andrea; Ossola, Giovanni
2017-01-01
The evaluation of scattering amplitudes in quantum field theory allows us to compare the phenomenological prediction of particle theory with the measurement at collider experiments. The study of scattering amplitudes, in terms of their symmetries and analytic properties, provides a theoretical framework to develop techniques and efficient algorithms for the evaluation of physical cross sections and differential distributions. Tree-level calculations have been known for a long time. Loop amplitudes, which are needed to reduce the theoretical uncertainty, are more challenging since they involve a large number of Feynman diagrams, expressed as integrals of rational functions. At one-loop, the problem has been solved thanks to the combined effect of integrand reduction, such as the OPP method, and unitarity. However, plenty of work is still needed at higher orders, starting with the two-loop case. Recently, integrand reduction has been revisited using algebraic geometry. In this presentation, we review the salient features of integrand reduction for dimensionally regulated Feynman integrals, and describe an interesting technique for their reduction based on multivariate polynomial division. We also show a novel approach to improve its efficiency by introducing finite fields. Supported in part by the National Science Foundation under Grant PHY-1417354.
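The algebraic core of the reduction, multivariate polynomial division, can be illustrated with a toy example; the polynomials below are arbitrary placeholders, and in the approach described above the rational coefficients would additionally be replaced by arithmetic modulo a large prime (the finite-field step) to avoid intermediate expression swell.

```python
from sympy import symbols, reduced

x, y = symbols("x y")
# Divide a 'numerator' polynomial by an ideal basis: f = q1*g1 + q2*g2 + r.
f = x**3 * y + x * y**2 + y + 1
g = [x * y - 1, y**2 - x]
quotients, remainder = reduced(f, g, x, y)
print(quotients, remainder)
```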
Predicting structured metadata from unstructured metadata.
Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier
2016-01-01
Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority-vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall, this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. © The Author(s) 2016. Published by Oxford University Press.
Predicting structured metadata from unstructured metadata
Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier
2016-01-01
Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority-vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall, this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. Database URL: http://www.yeastgenome.org/ PMID:28637268
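The two featurization routes described in these records can be sketched as scikit-learn pipelines, assuming a list of unstructured metadata strings (texts) and one structured target field (labels); the vocabulary threshold and topic count are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Route 1: raw TF-IDF features fed directly to a classifier.
tfidf_clf = Pipeline([("tfidf", TfidfVectorizer(min_df=2)),
                      ("clf", LogisticRegression(max_iter=1000))])

# Route 2: LDA topics as a dimensionality reduction of the unstructured text.
lda_clf = Pipeline([("counts", CountVectorizer(min_df=2)),
                    ("lda", LatentDirichletAllocation(n_components=50)),
                    ("clf", LogisticRegression(max_iter=1000))])

tfidf_clf.fit(texts, labels)
lda_clf.fit(texts, labels)
```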
NASA Astrophysics Data System (ADS)
Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul
2015-01-01
We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.
A data-driven approach for modeling post-fire debris-flow volumes and their uncertainty
Friedel, Michael J.
2011-01-01
This study demonstrates the novel application of genetic programming to evolve nonlinear post-fire debris-flow volume equations from variables associated with a data-driven conceptual model of the western United States. The search space is constrained using a multi-component objective function that simultaneously minimizes root-mean-squared and unit errors for the evolution of the fittest equations. An optimization technique is then used to estimate the limits of nonlinear prediction uncertainty associated with the debris-flow equations. In contrast to a published multiple linear regression three-variable equation, linking basin area with slopes greater than or equal to 30 percent, burn severity characterized as area burned moderate plus high, and total storm rainfall, the data-driven approach discovers many nonlinear and several dimensionally consistent equations that are unbiased and have less prediction uncertainty. Of the nonlinear equations, the best performance (lowest prediction uncertainty) is achieved when using three variables: average basin slope, total burned area, and total storm rainfall. Further reduction in uncertainty is possible for the nonlinear equations when dimensional consistency is not a priority and by subsequently applying a gradient solver to the fittest solutions. The data-driven modeling approach can be applied to nonlinear multivariate problems in all fields of study.
Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo
2014-06-01
In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals in which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
Prakash, Bhaskaran David; Esuvaranathan, Kesavan; Ho, Paul C; Pasikanti, Kishore Kumar; Chan, Eric Chun Yong; Yap, Chun Wei
2013-05-21
A fully automated and computationally efficient Pearson's correlation change classification (APC3) approach is proposed and shown to have overall comparable performance, with both an average accuracy and an average AUC of 0.89 ± 0.08, while being 3.9 to 7 times faster, easier to use, and less susceptible to outliers than other dimensional reduction and classification combinations, using only the total ion chromatogram (TIC) intensities of GC/MS data. The use of only the TIC permits the possible application of APC3 to other metabonomic data such as LC/MS TICs or NMR spectra. A RapidMiner implementation is available for download at http://padel.nus.edu.sg/software/padelapc3.
Wake Management Strategies for Reduction of Turbomachinery Fan Noise
NASA Technical Reports Server (NTRS)
Waitz, Ian A.
1998-01-01
The primary objective of our work was to evaluate and test several wake management schemes for the reduction of turbomachinery fan noise. Throughout the course of this work we relied on several tools. These include 1) Two-dimensional steady boundary-layer and wake analyses using MISES (a thin-shear layer Navier-Stokes code), 2) Two-dimensional unsteady wake-stator interaction simulations using UNSFLO, 3) Three-dimensional, steady Navier-Stokes rotor simulations using NEWT, 4) Internal blade passage design using quasi-one-dimensional passage flow models developed at MIT, 5) Acoustic modeling using LINSUB, 6) Acoustic modeling using VO72, 7) Experiments in a low-speed cascade wind-tunnel, and 8) ADP fan rig tests in the MIT Blowdown Compressor.
Principal polynomial analysis.
Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus
2014-11-01
This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it makes it possible to understand the identified features in the input domain where the data has physical meaning. Moreover, it allows one to evaluate the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, in both synthetic and real datasets from the UCI repository.
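One step of the idea (a straight direction of maximal variance plus a polynomial correction of the remaining coordinates) can be sketched as below, assuming a centred data matrix X; the polynomial degree is illustrative and this is only a rough reading of the construction, not the paper's exact algorithm.

```python
import numpy as np

def ppa_step(X, degree=3):
    # Leading principal direction of the centred data.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    v = Vt[0]
    t = X @ v                                # 1D projection: the new coordinate along the curve
    R = X - np.outer(t, v)                   # residual orthogonal to the leading direction
    # Univariate polynomial regression of the residual as a function of t.
    B = np.vander(t, degree + 1)             # polynomial basis in t
    coef, *_ = np.linalg.lstsq(B, R, rcond=None)
    return t, R - B @ coef                   # coordinate plus deflated residual for the next step
```

Applying ppa_step repeatedly to the returned residual yields further coordinates, mirroring the sequence of principal polynomials described above.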
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030
2011-08-20
Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficient are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to both normal and fat-accumulated liver tissue. Copyright 2010 Elsevier Inc. All rights reserved.
Simplifying the representation of complex free-energy landscapes using sketch-map
Ceriotti, Michele; Tribello, Gareth A.; Parrinello, Michele
2011-01-01
A new scheme, sketch-map, for obtaining a low-dimensional representation of the region of phase space explored during an enhanced dynamics simulation is proposed. We show evidence, from an examination of the distribution of pairwise distances between frames, that some features of the free-energy surface are inherently high-dimensional. This makes dimensionality reduction problematic because the data does not satisfy the assumptions made in conventional manifold learning algorithms. We therefore propose that when dimensionality reduction is performed on trajectory data one should think of the resultant embedding as a quickly sketched set of directions rather than a road map. In other words, the embedding tells one about the connectivity between states but does not provide the vectors that correspond to the slow degrees of freedom. This realization informs the development of sketch-map, which endeavors to reproduce the proximity information from the high-dimensional description in a space of lower dimensionality even when a faithful embedding is not possible. PMID:21730167
Approaches for Achieving Superlubricity in Two-Dimensional Materials.
Berman, Diana; Erdemir, Ali; Sumant, Anirudha V
2018-03-27
Controlling friction and reducing wear of moving mechanical systems is important in many applications, from nanoscale electromechanical systems to large-scale car engines and wind turbines. Accordingly, multiple efforts are dedicated to designing materials and surfaces for efficient friction and wear manipulation. Recent advances in two-dimensional (2D) materials, such as graphene, hexagonal boron nitride, and molybdenum disulfide, have opened an era of conformal, atomically thin solid lubricants. However, the process of effectively incorporating 2D films requires a fundamental understanding of the atomistic origins of friction. In this review, we outline basic mechanisms for frictional energy dissipation during sliding of two surfaces against each other, and the procedures for manipulating friction and wear by introducing 2D materials at the tribological interface. Finally, we highlight recent progress in implementing 2D materials for friction reduction to near-zero values (superlubricity) across scales from nanoscale up to macroscale contacts.
NASA Astrophysics Data System (ADS)
Parsons, Todd L.; Rogers, Tim
2017-10-01
Systems composed of large numbers of interacting agents often admit an effective coarse-grained description in terms of a multidimensional stochastic dynamical system, driven by small-amplitude intrinsic noise. In applications to biological, ecological, chemical and social dynamics it is common for these models to possess quantities that are approximately conserved on short timescales, in which case system trajectories are observed to remain close to some lower-dimensional subspace. Here, we derive explicit and general formulae for a reduced-dimension description of such processes that is exact in the limit of small noise and well-separated slow and fast dynamics. The Michaelis-Menten law of enzyme-catalysed reactions, and the link between the Lotka-Volterra and Wright-Fisher processes, are explored as simple worked examples. Extensions of the method are presented for infinite dimensional systems and processes coupled to non-Gaussian noise sources.
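As a reminder of the kind of reduction meant, the familiar Michaelis-Menten law arises when the enzyme-substrate complex is treated as the fast variable; this is the standard quasi-steady-state sketch, shown here for orientation rather than as the paper's general formula:

```latex
% E + S <-> C -> E + P, with total enzyme conserved; C is fast, S is slow.
\begin{align}
  \dot S &= -k_1 E S + k_{-1} C, &
  \dot C &= k_1 E S - (k_{-1} + k_2) C, &
  E &= E_{\mathrm{tot}} - C, \\
  \dot C &\approx 0 \;\Rightarrow\; C \approx \frac{E_{\mathrm{tot}}\, S}{K_M + S}, &
  \dot S &\approx -\frac{k_2 E_{\mathrm{tot}}\, S}{K_M + S}, &
  K_M &= \frac{k_{-1} + k_2}{k_1}.
\end{align}
```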
Quasi-one-dimensional arrangement of silver nanoparticles templated by cellulose microfibrils.
Wu, Min; Kuga, Shigenori; Huang, Yong
2008-09-16
We demonstrate a simple, facile approach to the deposition of silver nanoparticles on the surface of cellulose microfibrils with a quasi-one-dimensional arrangement. The process involves the generation of aldehyde groups by oxidizing the surface of cellulose microfibrils and then the assembly of silver nanoparticles on the surface by means of the silver mirror reaction. The linear nature of the microfibrils and the relatively uniform surface chemical modification result in a uniform linear distribution of silver particles along the microfibrils. The effects of various reaction parameters, such as the reaction time for the reduction process and the employed starting materials, have been investigated by transmission electron microscopy (TEM) and ultraviolet-visible spectroscopy. Additionally, the products were examined for their electric current-voltage characteristics, the results showing that these materials had an electric conductivity of approximately 5 S/cm, differing from either the oxidized cellulose or bulk silver materials by many orders of magnitude.
Restoration of dimensional reduction in the random-field Ising model at five dimensions
NASA Astrophysics Data System (ADS)
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D -2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D =5 . We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤D <6 to their values in the pure Ising model at D -2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
Restoration of dimensional reduction in the random-field Ising model at five dimensions.
Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
Dimensional reduction for a SIR type model
NASA Astrophysics Data System (ADS)
Cahyono, Edi; Soeharyadi, Yudi; Mukhsar
2018-03-01
Epidemic phenomena are often modeled in the form of dynamical systems. Such models have also been used to describe the spread of rumors, the spread of extreme ideology, and the dissemination of knowledge. Among the simplest is the SIR (susceptible, infected and recovered) model, which consists of three compartments and hence three variables. The variables are functions of time representing the sizes of the subpopulations, namely the susceptible, infected and recovered. The sum of the three is assumed to be constant. Hence, the model is actually two dimensional, sitting in a three-dimensional ambient space. This paper deals with the reduction of a SIR type model to two variables in a two-dimensional ambient space in order to better understand the geometry and dynamics. The dynamics is studied, and the phase portrait is presented. The two-dimensional model preserves the equilibria and their stability. The model has been applied to knowledge dissemination, which has been of interest in knowledge management.
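As a minimal sketch of this kind of reduction (not the authors' code; the parameter values below are illustrative), the conserved total population lets the third compartment be eliminated and the remaining two equations integrated directly:

```python
import numpy as np
from scipy.integrate import solve_ivp

def reduced_sir(t, y, beta, gamma, N):
    """Two-dimensional SIR dynamics; R = N - S - I is recovered afterwards."""
    S, I = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    return [dS, dI]

N, beta, gamma = 1000.0, 0.3, 0.1               # illustrative parameters
sol = solve_ivp(reduced_sir, (0, 160), [999.0, 1.0], args=(beta, gamma, N))
S, I = sol.y
R = N - S - I                                   # third compartment from the constraint
print(f"final sizes: S={S[-1]:.1f}, I={I[-1]:.1f}, R={R[-1]:.1f}")
```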
Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam
2012-01-15
Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as in datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions in regards to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution as well as its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior predictive classification performance over that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
NASA Astrophysics Data System (ADS)
Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina
2014-03-01
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUC of the K-SVD based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information in a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
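As an illustration of the sparse-coding idea with K=4 atoms and L=2 nonzero coefficients, here is a sketch using a generic dictionary-learning routine (not K-SVD itself) with placeholder data and labels:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X = np.random.rand(56, 30)        # placeholder: cases x multiparametric features
y = np.random.randint(0, 2, 56)   # placeholder: high vs low recurrence-risk labels

# 4 dictionary atoms (K=4), at most 2 non-zero coefficients per case (L=2)
dl = DictionaryLearning(n_components=4, transform_algorithm='omp',
                        transform_n_nonzero_coefs=2, random_state=0)
codes = dl.fit_transform(X)       # sparse codes act as the reduced feature set

acc = cross_val_score(LogisticRegression(max_iter=1000), codes, y,
                      cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy on the sparse codes: {acc:.2f}")
```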
Reduced nonlinear prognostic model construction from high-dimensional data
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander
2017-04-01
Construction of a data-driven model of evolution operator using universal approximating functions can only be statistically justified when the dimension of its phase space is small enough, especially in the case of short time series. At the same time in many applications real-measured data is high-dimensional, e.g. it is space-distributed and multivariate in climate science. Therefore it is necessary to use efficient dimensionality reduction methods which are also able to capture key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to an evolution operator construction which incorporates two key reduction steps. First, the data is decomposed into a set of certain empirical modes, such as standard empirical orthogonal functions or recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of corresponding principal components (PCs) is obtained. Then, the model of evolution operator for PCs is constructed which maps a number of states in the past to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separately linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one allows us to more easily interpret degree of nonlinearity as well as to deal better with smooth PCs which can naturally occur in the decompositions like NDM, as they provide a time scale separation. Results of application of the proposed method to climate data are demonstrated and discussed. The study is supported by Government of Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
Calculation of the rotor induced download on airfoils
NASA Technical Reports Server (NTRS)
Lee, C. S.
1989-01-01
Interactions between the rotors and wing of a rotary wing aircraft in hover have a significant detrimental effect on its payload performance. The reduction of payload results from the wake of the lifting rotors impinging on the wing, which is at a 90 deg angle of attack in hover. This vertical drag, often referred to as download, can be as large as 15 percent of the total rotor thrust in hover. The rotor wake is a three-dimensional, unsteady flow with concentrated tip vortices. With the rotor tip vortices impinging on the upper surface of the wing, the flow over the wing is not only three-dimensional and unsteady, but also separated from the leading and trailing edges. A simplified two-dimensional model was developed to demonstrate the stability of the methodology. The flow model combines a panel method to represent the rotor and the wing, and a vortex method to track the wing wake. A parametric study of the download on a 20 percent thick elliptical airfoil below a rotor disk of uniform inflow was performed. Comparisons with experimental data are made where the data are available. This approach is now being extended to three-dimensional flows. Preliminary results for a wing at a 90 deg angle of attack in a free stream are presented.
Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S
2012-12-01
In this work, sub-manifold projections based semi-supervised dimensionality reduction (DR) problem learning from partial constrained data is discussed. Two semi-supervised DR algorithms termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP) are proposed. MS³MP in the singular case is also discussed. We also present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC guided methods, exploring and selecting the informative constraints is challenging and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select the informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms are evaluated by benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.
Design and analysis of compound flexible skin based on deformable honeycomb
NASA Astrophysics Data System (ADS)
Zou, Tingting; Zhou, Li
2017-04-01
NASA Astrophysics Data System (ADS)
Mustapha, S.; Braytee, A.; Ye, L.
2017-04-01
In this study, we focused on the development and verification of a robust framework for surface crack detection in steel pipes using measured vibration responses, in the presence of multiple progressive damage occurring at different locations within the structure. Feature selection, dimensionality reduction, and a multi-class support vector machine were employed for this purpose. Nine damage cases, at different locations, orientations and lengths, were introduced into the pipe structure. The pipe was impacted 300 times using an impact hammer; after each damage case, the vibration data were collected using 3 PZT wafers installed on the outer surface of the pipe. First, damage-sensitive features were extracted using the frequency response function approach, followed by recursive feature elimination for dimensionality reduction. Then, a multi-class support vector machine learning algorithm was employed to train the data and generate a statistical model. Once the model was established, decision values and distances from the hyper-plane were generated for newly collected data using the trained model. This process was repeated on the data collected from each sensor. Overall, using a single sensor for training and testing led to a very high accuracy, reaching 98% in the assessment of the 9 damage cases used in this study.
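A compact sketch of the described feature-selection-plus-classification chain, built from generic library components with placeholder data (feature counts and SVM settings are illustrative, not the study's):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.random.rand(300, 500)          # placeholder: impacts x FRF-derived features
y = np.random.randint(0, 9, 300)      # placeholder: nine damage-case labels

# recursive feature elimination with a linear-SVM ranking, then a multi-class SVM
clf = make_pipeline(
    RFE(SVC(kernel='linear'), n_features_to_select=50, step=0.1),
    SVC(kernel='rbf', decision_function_shape='ovr'),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```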
NASA Astrophysics Data System (ADS)
Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.
2016-10-01
X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
Empirical modeling ENSO dynamics with complex-valued artificial neural networks
NASA Astrophysics Data System (ADS)
Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry
2016-04-01
The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation, ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, to efficiently reduce the observational data sets we use complex-valued (Hilbert) empirical orthogonal functions which are, by their nature, appropriate for describing propagating structures, unlike traditional empirical orthogonal functions. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the behavior of the Jin-Neelin-Ghil ENSO model [1] and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
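A small sketch of the Hilbert (complex) EOF reduction step on placeholder data; the real input would be the observed anomaly fields, and the number of retained modes is arbitrary here:

```python
import numpy as np
from scipy.signal import hilbert

# placeholder: time x space anomaly field standing in for SST anomalies
X = np.random.randn(600, 200)

# the analytic signal along the time axis gives a complex field whose
# phase carries information about propagating structures
Xc = hilbert(X, axis=0)
Xc -= Xc.mean(axis=0)

# complex (Hilbert) EOFs from an SVD of the complex data matrix
U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
pcs  = U[:, :3] * s[:3]          # leading complex principal components
eofs = Vh[:3]                    # leading complex EOF patterns
explained = (s[:3] ** 2) / np.sum(s ** 2)
print("variance fractions of the leading Hilbert EOFs:", np.round(explained, 3))
```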
Nonlinear structures: Cnoidal, soliton, and periodical waves in quantum semiconductor plasma
NASA Astrophysics Data System (ADS)
Tolba, R. E.; El-Bedwehy, N. A.; Moslem, W. M.; El-Labany, S. K.; Yahia, M. E.
2016-01-01
Properties and emerging conditions of various nonlinear acoustic waves in a three dimensional quantum semiconductor plasma are explored. A plasma fluid model characterized by degenerate pressures, exchange correlation, and quantum recoil forces is established and solved. Our analysis approach is based on the reductive perturbation theory for deriving the Kadomtsev-Petviashvili equation from the fluid model and solving it by using Painlevé analysis to come up with different nonlinear solutions that describe different pulse profiles such as cnoidal, soliton, and periodical pulses. The model is then employed to recognize the possible perturbations in GaN semiconductor.
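For reference, the Kadomtsev-Petviashvili equation obtained by such reductive perturbation analyses has the generic form below (the coefficients A, B, C are determined by the plasma parameters; this is the standard form, not necessarily the authors' normalization):

```latex
% Generic (2+1)-dimensional Kadomtsev-Petviashvili form from reductive
% perturbation theory; A, B, C are fixed by the plasma parameters.
\begin{equation}
  \frac{\partial}{\partial \xi}\!\left(
      \frac{\partial u}{\partial \tau}
    + A\, u \frac{\partial u}{\partial \xi}
    + B\, \frac{\partial^{3} u}{\partial \xi^{3}}
  \right)
  + C\, \frac{\partial^{2} u}{\partial \eta^{2}} = 0
\end{equation}
```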
Nonlinear structures: Cnoidal, soliton, and periodical waves in quantum semiconductor plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolba, R. E., E-mail: tolba-math@yahoo.com; El-Bedwehy, N. A., E-mail: nab-elbedwehy@yahoo.com; Moslem, W. M., E-mail: wmmoslem@hotmail.com
2016-01-15
Properties and emerging conditions of various nonlinear acoustic waves in a three dimensional quantum semiconductor plasma are explored. A plasma fluid model characterized by degenerate pressures, exchange correlation, and quantum recoil forces is established and solved. Our analysis approach is based on the reductive perturbation theory for deriving the Kadomtsev-Petviashvili equation from the fluid model and solving it by using Painlevé analysis to come up with different nonlinear solutions that describe different pulse profiles such as cnoidal, soliton, and periodical pulses. The model is then employed to recognize the possible perturbations in GaN semiconductor.
Coordinate metrology using scanning probe microscopes
NASA Astrophysics Data System (ADS)
Marinello, F.; Savio, E.; Bariani, P.; Carmignato, S.
2009-08-01
New positioning, probing and measuring strategies in coordinate metrology are needed to accomplish true three-dimensional characterization of microstructures with uncertainties in the nanometre range. In the present work, the implementation of scanning probe microscopes (SPMs) as systems for coordinate metrology is discussed. A new non-raster measurement approach is proposed, in which the probe is moved to sense points along free paths on the sample surface, with no loss of accuracy with respect to traditional raster scanning and with a reduction in scan time. Furthermore, new probes featuring long tips with innovative geometries suitable for coordinate metrology through SPMs are examined and reported.
A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems
NASA Astrophysics Data System (ADS)
Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix
2018-03-01
We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.
Impact of embedded voids on thin-films with high thermal expansion coefficients mismatch
NASA Astrophysics Data System (ADS)
Khafagy, Khaled H.; Hatem, Tarek M.; Bedair, Salah M.
2018-01-01
Using technology to reduce defects at heterogeneous interfaces of thin films is a high priority for modern semiconductors. The current work utilizes a three-dimensional multiple-slip crystal-plasticity model and specialized finite-element formulations to study the impact of the embedded void approach (EVA) on reducing defects in thin films deposited on a substrate with a highly mismatched thermal expansion coefficient, in particular the growth of an InGaN thin film on a Si substrate, where EVA has shown a remarkable reduction in stresses on the side of the embedded voids.
Reduced Dynamics of the Non-holonomic Whipple Bicycle
NASA Astrophysics Data System (ADS)
Boyer, Frédéric; Porez, Mathieu; Mauny, Johan
2018-06-01
Though the bicycle is a familiar object of everyday life, modeling its full nonlinear three-dimensional dynamics in a closed symbolic form is a difficult issue for classical mechanics. In this article, we address this issue without resorting to the usual simplifications of the bicycle kinematics or its dynamics. To derive this model, we use a general reduction-based approach in the principal fiber bundle of configurations of the three-dimensional bicycle. This includes a geometrically exact model of the contacts between the wheels and the ground, the explicit calculation of the kernel of constraints, along with the dynamics of the system free of any external forces, and its projection onto the kernel of admissible velocities. The approach takes advantage of the intrinsic formulation of geometric mechanics. Along the path toward the final equations, we show that the exact model of the bicycle dynamics requires coping with a set of non-symmetric constraints with respect to the structural group of its configuration fiber bundle. The final reduced dynamics are simulated on several examples representative of the bicycle. As expected, the constraints imposed by the ground contacts, as well as the energy conservation, are satisfied, while the dynamics can be numerically integrated in real time.
Two approaches for introduction of wheat straw lignin into rigid polyurethane foams
NASA Astrophysics Data System (ADS)
Arshanitsa, A.; Paberza, A.; Vevere, L.; Cabulis, U.; Telysheva, G.
2014-05-01
In the present work, BIOLIGNIN™ lignin obtained from organosolv processing of wheat straw at the CIMV pilot plant (France) was investigated as a component of rigid polyurethane (PUR) foam systems. Two separate approaches for introducing lignin into the PUR foam system were studied: as a filler without chemical preprocessing, and as a liquid lignopolyol obtained by lignin oxypropylation under alkaline conditions. Incorporating increasing amounts of lignin as filler into the reference PUR foam system, based on a mixture of the commercial polyethers Lupranol 3300 and Lupranol 3422, steadily decreased the compression characteristics of the foams, their dimensional stability and their hydrophobicity. Complete substitution of Lupranol 3300 by the lignopolyol increases the uniformity of the cell structure and the dimensional stability, and does not reduce the physical-mechanical properties of the foam. In both cases, the incorporation of lignin into the PUR foam lowers the maximum thermal degradation rates. The lignin filler can be introduced into the lignopolyol-based PUR foam in higher quantity than into the reference Lupranol-based PUR without reducing the compression characteristics of the material. In this work, the optimal total lignin content in the end-product PUR foam, as both polyol and filler, is 16%.
New multigrid approach for three-dimensional unstructured, adaptive grids
NASA Technical Reports Server (NTRS)
Parthasarathy, Vijayan; Kallinderis, Y.
1994-01-01
A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in a reduction in the number of iterations by a factor of 5.
Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics
NASA Astrophysics Data System (ADS)
Wehmeyer, Christoph; Noé, Frank
2018-06-01
Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
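A minimal time-lagged autoencoder sketch on toy trajectory data with an illustrative lag; the network sizes and training schedule are placeholders, not those of the paper:

```python
import torch
from torch import nn

# toy high-dimensional trajectory standing in for molecular dynamics features
x = torch.randn(5000, 50)
tau = 10                                       # lag time (in frames), illustrative
x_t, x_tau = x[:-tau], x[tau:]

# small time-lagged autoencoder: encode x(t), decode to predict x(t + tau)
encoder = nn.Sequential(nn.Linear(50, 32), nn.ReLU(), nn.Linear(32, 2))
decoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 50))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(decoder(encoder(x_t)), x_tau)   # reconstruct the future frame
    loss.backward()
    opt.step()

slow_cvs = encoder(x).detach()                 # low-dimensional embedding of the dynamics
print(slow_cvs.shape)
```

The two-dimensional bottleneck plays the role of the slow collective variables; in practice the features would be whitened and the data split into training and validation sets.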
NASA Astrophysics Data System (ADS)
Xu, Wentao; Lee, Yeongjun; Min, Sung-Yong; Park, Cheolmin; Lee, Tae-Woo
2016-09-01
Resistive random-access memory (RRAM) is a candidate next generation nonvolatile memory due to its high access speed, high density and ease of fabrication. Especially, cross-point-access allows cross-bar arrays that lead to high-density cells in a two-dimensional planar structure. Use of such designs could be compatible with the aggressive scaling down of memory devices, but existing methods such as optical or e-beam lithographic approaches are too complicated. One-dimensional inorganic nanowires (i-NWs) are regarded as ideal components of nanoelectronics to circumvent the limitations of conventional lithographic approaches. However, post-growth alignment of these i-NWs precisely on a large area with individual control is still a difficult challenge. Here, we report a simple, inexpensive, and rapid method to fabricate two-dimensional arrays of perpendicularly-aligned, individually-conductive Cu-NWs with a nanometer-scale CuxO layer sandwiched at each cross point, by using an inorganic-nanowire-digital-alignment technique (INDAT) and a one-step reduction process. In this approach, the oxide layer is self-formed and patterned, so conventional deposition and lithography are not necessary. INDAT eliminates the difficulties of alignment and scalable fabrication that are encountered when using currently-available techniques that use inorganic nanowires. This simple process facilitates fabrication of cross-point nonvolatile memristor arrays. Fabricated arrays had reproducible resistive switching behavior, a high on/off current ratio (Ion/Ioff) of 10^6 and extensive cycling endurance. This is the first report of memristors with the resistive switching oxide layer self-formed, self-patterned and self-positioned; we envision that the new features of the technique will provide great opportunities for future nano-electronic circuits.
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can also serve as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction.
Program summary:
Program title: MSSMdreg2dred.mod
Catalogue identifier: AEKR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: LGPL License [1]
No. of lines in distributed program, including test data, etc.: 7600
No. of bytes in distributed program, including test data, etc.: 197 629
Distribution format: tar.gz
Programming language: Mathematica, FeynArts
Computer: Any, capable of running Mathematica and FeynArts
Operating system: Any, with a running Mathematica and FeynArts installation
Classification: 4.4, 5, 11.1
Subprograms used: FeynArts (Cat. Id. ADOW_v1_0), CPC 140 (2001) 418
Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular, the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
Running time: A few seconds to generate typical Feynman graphs with FeynArts.
Three-dimensional Monte Carlo calculation of atmospheric thermal heating rates
NASA Astrophysics Data System (ADS)
Klinger, Carolin; Mayer, Bernhard
2014-09-01
We present a fast Monte Carlo method for thermal heating and cooling rates in three-dimensional atmospheres. These heating/cooling rates are relevant particularly in broken cloud fields. We compare forward and backward photon tracing methods and present new variance reduction methods to speed up the calculations. For this application it turns out that backward tracing is in most cases superior to forward tracing. Since heating rates may be either calculated as the difference between emitted and absorbed power per volume or alternatively from the divergence of the net flux, both approaches have been tested. We found that the absorption/emission method is superior (with respect to computational time for a given uncertainty) if the optical thickness of the grid box under consideration is smaller than about 5 while the net flux divergence may be considerably faster for larger optical thickness. In particular, we describe the following three backward tracing methods: the first and most simple method (EMABS) is based on a random emission of photons in the grid box of interest and a simple backward tracing. Since only those photons which cross the grid box boundaries contribute to the heating rate, this approach behaves poorly for large optical thicknesses which are common in the thermal spectral range. For this reason, the second method (EMABS_OPT) uses a variance reduction technique to improve the distribution of the photons in a way that more photons are started close to the grid box edges and thus contribute to the result which reduces the uncertainty. The third method (DENET) uses the flux divergence approach where - in backward Monte Carlo - all photons contribute to the result, but in particular for small optical thickness the noise becomes large. The three methods have been implemented in MYSTIC (Monte Carlo code for the phYSically correct Tracing of photons In Cloudy atmospheres). All methods are shown to agree within the photon noise with each other and with a discrete ordinate code for a one-dimensional case. Finally a hybrid method is built using a combination of EMABS_OPT and DENET, and application examples are shown. It should be noted that for this application, only little improvement is gained by EMABS_OPT compared to EMABS.
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Hongjian; Huang, Hongwei; Xu, Kang
2017-09-26
Monolayered photocatalytic materials have attracted huge research interest owing to their large specific surface area and ample active sites. Sillén-structured layered BiOX (X = Cl, Br, I) holds great promise owing to its strong photo-oxidation ability and high stability. Fabrication of monolayered BiOX by a facile, low-cost, and scalable approach is highly challenging and much anticipated. Herein, we describe the large-scale preparation of monolayered BiOBr nanosheets with a thickness of ~0.85 nm via a readily achievable liquid-phase exfoliation strategy with the assistance of formamide under ambient conditions. The as-obtained monolayered BiOBr nanosheets offer several advantages, such as an enhanced specific surface area, a favorable band structure, and strengthened charge separation. Profiting from these benefits, the BiOBr monolayers not only show excellent adsorption and photodegradation performance for treating contaminants, but also demonstrate a greatly enhanced photocatalytic activity for CO2 reduction into CO and CH4. Additionally, monolayered BiOI nanosheets have also been obtained by the same synthetic approach. Our work offers a mild and general approach for the preparation of monolayered BiOX, and may have huge potential to be extended to the synthesis of other single-layer two-dimensional materials.
Information extraction from dynamic PS-InSAR time series using machine learning
NASA Astrophysics Data System (ADS)
van de Kerkhof, B.; Pankratius, V.; Chang, L.; van Swol, R.; Hanssen, R. F.
2017-12-01
Due to the increasing number of SAR satellites, with shorter repeat intervals and higher resolutions, SAR data volumes are exploding. Time series analyses of SAR data, i.e. Persistent Scatterer (PS) InSAR, enable deformation monitoring of the built environment at an unprecedented scale, with hundreds of scatterers per km2, updated weekly. Potential hazards, e.g. due to failure of aging infrastructure, can be detected at an early stage. Yet, this requires the operational processing of billions of measurement points over hundreds of epochs, updating this data set dynamically as new data come in, and testing whether points (start to) behave in an anomalous way. Moreover, the quality of PS-InSAR measurements is ambiguous and heterogeneous, which will yield false positives and false negatives. Such analyses are numerically challenging. Here we extract relevant information from PS-InSAR time series using machine learning algorithms. We cluster (group together) time series with similar behaviour, even though they may not be spatially close, such that the results can be used for further analysis. First we reduce the dimensionality of the dataset in order to be able to cluster the data, since applying clustering techniques to high-dimensional datasets often yields unsatisfactory results. Our approach is to apply t-distributed Stochastic Neighbor Embedding (t-SNE), a machine learning algorithm for dimensionality reduction of high-dimensional data to a 2D or 3D map, and to cluster this result using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The results show that we are able to detect and cluster time series with similar behaviour, which is the starting point for more extensive analysis into the underlying driving mechanisms. The results of the methods are compared to conventional hypothesis testing as well as to a Self-Organising Map (SOM) approach. Hypothesis testing is robust and takes the stochastic nature of the observations into account, but is time consuming. Therefore, we successively apply our machine learning approach with the hypothesis testing approach in order to benefit both from the reduced computation time of the machine learning approach and from the robust quality metrics of hypothesis testing. We acknowledge support from NASA AISTNNX15AG84G (PI V. Pankratius)
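A minimal sketch of the described t-SNE-plus-DBSCAN pipeline on placeholder data (the real input would be the per-scatterer displacement time series, and the perplexity and density thresholds would need tuning):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# placeholder: PS-InSAR displacement time series (points x epochs)
series = np.random.randn(2000, 150)

# reduce each time series to a 2D map position, then cluster by map density
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(series)
labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(embedding)

print("clusters found (label -1 = noise):", np.unique(labels))
```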
Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast-enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high-dimensional feature spaces to more human-interpretable lower-dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced-dimension mapped feature output as input into both linear and nonlinear classifiers: a Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. Results: In the large U.S. data set, sample high-performance results include AUC_0.632+ = 0.88 with 95% empirical bootstrap interval [0.787; 0.895] for 13 ARD-selected features and AUC_0.632+ = 0.87 with interval [0.817; 0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC_0.632+ = 0.90 with interval [0.847; 0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate the capability of the new methods to match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower-dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497
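For orientation, a Laplacian-eigenmaps-style mapping of a feature matrix can be obtained with off-the-shelf tools; the array sizes and neighborhood size below are placeholders, not the study's settings:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# placeholder: computer-extracted lesion features (cases x features)
features = np.random.rand(1126, 81)

# Laplacian-eigenmaps-style nonlinear DR to a 4D space, usable as classifier input
mapped = SpectralEmbedding(n_components=4, n_neighbors=15).fit_transform(features)
print(mapped.shape)   # (1126, 4)
```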
Analytical and phenomenological studies of rotating turbulence
NASA Technical Reports Server (NTRS)
Mahalov, Alex; Zhou, YE
1995-01-01
A framework, which combines mathematical analysis, closure theory, and phenomenological treatment, is developed to study the spectral transfer process and reduction of dimensionality in turbulent flows that are subject to rotation. First, we outline a mathematical procedure that is particularly appropriate for problems with two disparate time scales. The approach, which is based on Green's method, leads to the Poincare velocity variables and the Poincare transformation when applied to rotating turbulence. The effects of the rotation are now reflected in the modifications to the convolution of a nonlinear term. The Poincare transformed equations are used to obtain a time-dependent analog of the Taylor-Proudman theorem valid in the asymptotic limit when the non-dimensional parameter μ ≡ Ωt approaches infinity (Ω is the rotation rate and t is the time). The 'split' of the energy transfer in both direct and inverse directions is established. Secondly, we apply the Eddy-Damped-Quasinormal-Markovian (EDQNM) closure to the Poincare transformed Euler/Navier-Stokes equations. This closure leads to expressions for the spectral energy transfer. In particular, a unique triple velocity decorrelation time is derived with an explicit dependence on the rotation rate. This provides an important input for applying the phenomenological treatment of Zhou. In order to characterize the relative strength of rotation, another non-dimensional number, a spectral Rossby number, which is defined as the ratio of rotation and turbulence time scales, is introduced. Finally, the energy spectrum and the spectral eddy viscosity are deduced.
A Fast Approach to Automatic Detection of Brain Lesions
Koley, Subhranil; Chakraborty, Chandan; Mainero, Caterina; Fischl, Bruce; Aganj, Iman
2017-01-01
Template matching is a popular approach to computer-aided detection of brain lesions from magnetic resonance (MR) images. The outcomes are often sufficient for localizing lesions and assisting clinicians in diagnosis. However, processing large MR volumes with three-dimensional (3D) templates is demanding in terms of computational resources, hence the importance of the reduction of computational complexity of template matching, particularly in situations in which time is crucial (e.g. emergent stroke). In view of this, we make use of 3D Gaussian templates with varying radii and propose a new method to compute the normalized cross-correlation coefficient as a similarity metric between the MR volume and the template to detect brain lesions. Contrary to the conventional fast Fourier transform (FFT) based approach, whose runtime grows as O(N log N) with the number of voxels, the proposed method computes the cross-correlation in O(N). We show through our experiments that the proposed method outperforms the FFT approach in terms of computational time, and retains comparable accuracy. PMID:29082383
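One way a linear-time computation can be organized for Gaussian templates is sketched below (under assumptions, not the authors' exact algorithm): the windowed sums needed by normalized cross-correlation come from box filters, and the correlation with the Gaussian blob itself is just a Gaussian filtering, so the cost grows roughly with the number of voxels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def ncc_gaussian_template(vol, sigma, k=3):
    """Approximate NCC map of a 3D volume against a Gaussian blob template,
    using only separable filters (no per-voxel window loops)."""
    r = int(k * sigma)
    size = 2 * r + 1
    M = float(size ** 3)                       # voxels in the template window

    # template statistics on the discrete window
    ax = np.arange(-r, r + 1, dtype=float)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
    t = np.exp(-(X**2 + Y**2 + Z**2) / (2.0 * sigma**2))
    t_sum = t.sum()
    t_var = (t**2).sum() - t_sum**2 / M

    # local image sums via box filters; correlation with the Gaussian template
    # via Gaussian filtering (normalized kernel rescaled by the template sum)
    I_sum  = uniform_filter(vol, size) * M
    I2_sum = uniform_filter(vol**2, size) * M
    corr   = gaussian_filter(vol, sigma, truncate=float(k)) * t_sum

    num = corr - I_sum * t_sum / M
    den = np.sqrt(np.clip((I2_sum - I_sum**2 / M) * t_var, 1e-12, None))
    return num / den

vol = np.random.rand(64, 64, 64)               # placeholder MR volume
score = ncc_gaussian_template(vol, sigma=2.0)
print(score.shape, float(score.max()))
```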
BELM: Bayesian extreme learning machine.
Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J
2011-03-01
The theory of extreme learning machine (ELM) has become very popular over the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; obtains the confidence intervals (CIs) without the need to apply computationally intensive methods, e.g., bootstrap; and presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The achieved results show that the proposed approach produces a competitive accuracy with some additional advantages, namely, automatic production of CIs, a reduced probability of model overfitting, and use of a priori knowledge.
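A rough sketch of the idea (not the authors' formulation): the hidden layer is random and fixed, and a Bayesian linear model on the hidden activations supplies both predictions and uncertainty. All data and parameter values below are placeholders.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                     # placeholder inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # placeholder targets

# ELM-style random hidden layer: weights drawn once and never trained
n_hidden = 100
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                            # hidden-layer activations

# Bayesian linear regression on the output weights gives predictive uncertainty
out = BayesianRidge().fit(H, y)
mean, std = out.predict(np.tanh(X @ W + b), return_std=True)
print(f"mean predictive std: {std.mean():.3f}")
```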
Upon Generating (2+1)-dimensional Dynamical Systems
NASA Astrophysics Data System (ADS)
Zhang, Yufeng; Bai, Yang; Wu, Lixin
2016-06-01
Under the framework of the Adler-Gel'fand-Dikii (AGD) scheme, we first propose two Hamiltonian operator pairs over a noncommutative ring, with which we construct a new dynamical system in 2+1 dimensions, and then obtain a generalized special Novikov-Veselov (NV) equation via the Manakov triple. Then, with the aid of a special symmetric Lie algebra of a reductive homogeneous group G, we adopt the Tu-Andrushkiw-Huang (TAH) scheme to generate a new integrable (2+1)-dimensional dynamical system and its Hamiltonian structure, which can be reduced to the well-known (2+1)-dimensional Davey-Stewartson (DS) hierarchy. Finally, we extend the binormial residue representation (BRR for short) scheme to super higher-dimensional integrable hierarchies with the help of a super subalgebra of the super Lie algebra sl(2/1), which is also a kind of symmetric Lie algebra of the reductive homogeneous group G. As applications, we obtain a super (2+1)-dimensional MKdV hierarchy which can be reduced to a super (2+1)-dimensional generalized AKNS equation. We conclude by comparing the advantages and shortcomings of the three schemes for generating integrable dynamical systems.
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for simplifying the structure of the probabilistic neural network (PNN). Three algorithms are introduced. The first applies LSA to reduce the PNN input layer by selecting significant features of the input patterns. The second utilizes LSA to remove redundant pattern neurons from the network. The third combines the two and shows how they can work together. A PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is calculated separately for each dimension by means of the plug-in method. The classification qualities of the reduced and full-structure PNNs are compared. Furthermore, we evaluate the performance of PNNs to which global sensitivity analysis (GSA) and common reduction methods are applied, both in the input layer and in the pattern layer. The models are tested on classification problems from eight repository data sets. A 10-fold cross-validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that LSA can be used as an alternative PNN reduction approach.
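A minimal sketch of a PNN with a product of one-dimensional Cauchy kernels, assuming placeholder data and fixed per-dimension smoothing parameters (the paper obtains these with the plug-in method); the LSA-based pruning itself is not shown.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, h):
    """Probabilistic neural network with a product of one-dimensional
    Cauchy kernels; h holds one smoothing parameter per dimension."""
    classes = np.unique(y_train)
    scores = np.zeros((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        P = X_train[y_train == c]                     # pattern neurons of class c
        u = (X_test[:, None, :] - P[None, :, :]) / h  # (n_test, n_patterns, n_dims)
        k = np.prod(1.0 / (np.pi * h * (1.0 + u**2)), axis=2)
        scores[:, j] = k.mean(axis=1)                 # summation layer
    return classes[np.argmax(scores, axis=1)]         # decision layer

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4))
y = rng.integers(0, 3, 150)                           # placeholder 3-class labels
h = np.full(4, 0.5)                                   # per-dimension smoothing (illustrative)
print(pnn_predict(X, y, X[:10], h))
```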
NASA Technical Reports Server (NTRS)
Chevallier, J. P.; Vaucheret, X.
1986-01-01
A synthesis of current trends in the reduction and computation of wall effects is presented. Some of the points discussed include: (1) for the two-dimensional, transonic tests, various control techniques of boundary conditions are used with adaptive walls offering high precision in determining reference conditions and residual corrections. A reduction in the boundary layer effects of the lateral walls is obtained at T2; (2) for the three-dimensional tests, the methods for the reduction of wall effects are still seldom applied due to a lesser need and to their complexity; (3) the supports holding the model of the probes have to be taken into account in the estimation of perturbatory effects.
Wideband radar cross section reduction using two-dimensional phase gradient metasurfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yongfeng; Qu, Shaobo; Wang, Jiafu
2014-06-02
Phase gradient metasurfaces (PGMs) are artificial surfaces that can provide pre-defined in-plane wave-vectors to manipulate the directions of refracted/reflected waves. In this Letter, we propose to achieve wideband radar cross section (RCS) reduction using two-dimensional (2D) PGMs. A 2D PGM was designed using a square combination of 49 split-ring sub-unit cells. The PGM can provide additional wave-vectors along the two in-plane directions simultaneously, leading to surface wave conversion, deflected reflection, or diffuse reflection. Both the simulation and experimental results verified the wideband, polarization-independent, high-efficiency RCS reduction induced by the 2D PGM.
Symmetry reduction and exact solutions of two higher-dimensional nonlinear evolution equations.
Gu, Yongyi; Qi, Jianming
2017-01-01
In this paper, symmetries and symmetry reduction of two higher-dimensional nonlinear evolution equations (NLEEs) are obtained by Lie group method. These NLEEs play an important role in nonlinear sciences. We derive exact solutions to these NLEEs via the [Formula: see text]-expansion method and complex method. Five types of explicit function solutions are constructed, which are rational, exponential, trigonometric, hyperbolic and elliptic function solutions of the variables in the considered equations.
Graph embedding and extensions: a general framework for dimensionality reduction.
Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen
2007-01-01
Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
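As a rough illustration of the direct (linear) graph embedding step, assuming toy data and random intrinsic/penalty adjacency matrices: the projection directions follow from a generalized eigenproblem between the two graph Laplacians, in the spirit of MFA but not reproducing its exact graph construction.

```python
import numpy as np
from scipy.linalg import eigh

def graph_embedding_projection(X, W_intrinsic, W_penalty, n_dims=2):
    """Linear graph embedding: minimise the intrinsic-graph scatter relative to
    the penalty-graph scatter via a generalised eigenproblem."""
    def laplacian(W):
        return np.diag(W.sum(axis=1)) - W
    A = X.T @ laplacian(W_intrinsic) @ X
    B = X.T @ laplacian(W_penalty) @ X + 1e-6 * np.eye(X.shape[1])  # regularised
    vals, vecs = eigh(A, B)              # generalised symmetric eigenproblem
    return vecs[:, :n_dims]              # directions with the smallest ratio

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))                                  # placeholder samples
Wi = (rng.random((100, 100)) > 0.9).astype(float)               # intrinsic graph
Wp = (rng.random((100, 100)) > 0.9).astype(float)               # penalty graph
Wi = np.maximum(Wi, Wi.T); np.fill_diagonal(Wi, 0)
Wp = np.maximum(Wp, Wp.T); np.fill_diagonal(Wp, 0)
P = graph_embedding_projection(X, Wi, Wp)
print((X @ P).shape)   # (100, 2) embedded coordinates
```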
NASA Astrophysics Data System (ADS)
Kusratmoko, Eko; Wibowo, Adi; Cholid, Sofyan; Pin, Tjiong Giok
2017-07-01
This paper presents the results of applying the participatory three-dimensional mapping (P3DM) method to facilitate the people of Cibanteng village in compiling a landslide disaster risk reduction program. Physical factors such as high rainfall, topography, geology and land use, coupled with demographic and socio-economic conditions, make the Cibanteng region highly susceptible to landslides. During 2013-2014, two landslides occurred, causing economic losses as a result of damage to homes and farmland. Participatory mapping is one part of community-based disaster risk reduction (CBDRR) activities, because the involvement of local communities is a prerequisite for sustainable disaster risk reduction. In this activity, participatory mapping was carried out in two ways, namely participatory two-dimensional mapping (P2DM), focusing on mapping the disaster areas, and participatory three-dimensional mapping (P3DM), focusing on the entire territory of the village. Based on the results of P3DM, the ability of the communities to understand the village environment spatially was well tested and honed, which facilitates the preparation of CBDRR programs. Furthermore, the P3DM method can be applied to other disaster areas, as it becomes a medium of effective dialogue between all levels of the involved communities.
A three-dimensional quality-guided phase unwrapping method for MR elastography
NASA Astrophysics Data System (ADS)
Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.
2011-07-01
Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
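A sketch of the one-dimensional step along the phase-offset direction only (the two- and three-dimensional quality-guided parts are the paper's contribution); the array shape and number of offsets below are placeholders.

```python
import numpy as np

# placeholder: MRE phase block, shape (rows, cols, n_offsets) with 8 phase offsets
phase = np.random.uniform(-np.pi, np.pi, size=(128, 128, 8))

# one-dimensional unwrapping along the phase-offset axis: each voxel's sampled
# harmonic motion is made continuous before extracting the displacement
unwrapped = np.unwrap(phase, axis=2)

# the harmonic amplitude and phase then follow from a temporal Fourier fit
motion = np.fft.fft(unwrapped, axis=2)[:, :, 1]   # first harmonic across offsets
print(motion.shape)
```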
Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A
2007-01-01
The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
Ding, Jiarui; Condon, Anne; Shah, Sohrab P
2018-05-21
Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.
NASA Astrophysics Data System (ADS)
Zhang, Yong-Xing; Jia, Yong
2016-12-01
Three-dimensional Fe-ethylene glycol (Fe-EG) complex microspheres were synthesized by a facile hydrothermal method and characterized by field emission scanning electron microscopy and transmission electron microscopy. The adsorption and reduction properties of the obtained Fe-EG complex microspheres towards Cr(VI) ions were studied. The adsorption kinetic and isotherm data were fitted using a nonlinear regression approach. Under neutral conditions, the maximum adsorption capacity was 49.78 mg g-1 at room temperature and increased with increasing temperature. Thermodynamic parameters, including the Gibbs free energy, standard enthalpy, and standard entropy, revealed that adsorption of Cr(VI) was a feasible, spontaneous, and endothermic process. Spectroscopic analysis showed that the adsorption of Cr(VI) was a physical adsorption process. The adsorbed CrO4^2- ions were partly reduced to Cr(OH)3 by Fe(II) ions and the organic groups in the Fe-EG complex.
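As an illustration of the nonlinear regression step mentioned above, the following sketch fits a Langmuir isotherm with scipy's curve_fit; the concentration and uptake values are placeholders, not the study's measurements.

```python
# Illustrative nonlinear fit of a Langmuir isotherm with scipy, in the spirit of the
# "nonlinear regression approach" mentioned above. The data below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Equilibrium uptake qe (mg/g) vs equilibrium concentration Ce (mg/L)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([5., 10., 20., 40., 80., 160.])    # hypothetical Cr(VI) concentrations
qe = np.array([12., 21., 31., 40., 46., 49.])    # hypothetical uptakes

popt, pcov = curve_fit(langmuir, Ce, qe, p0=[50., 0.05])
qmax_fit, KL_fit = popt
print(f"q_max = {qmax_fit:.2f} mg/g, K_L = {KL_fit:.4f} L/mg")
```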
Internal Kinematics of the Tongue Following Volume Reduction
SHCHERBATYY, VOLODYMYR; PERKINS, JONATHAN A.; LIU, ZI-JUN
2008-01-01
This study was undertaken to determine the functional consequences of tongue volume reduction on tongue internal kinematics during mastication and neuromuscular stimulation in a pig model. Six ultrasonic crystals were implanted into the tongue body in a wedge-shaped configuration that allowed recording of distance changes in the bilateral length (LENG) and posterior thickness (THICK), as well as the anterior (AW), posterior dorsal (PDW), and ventral (PVW) widths, in 12 Yucatan minipigs. Six animals received a uniform mid-sagittal tongue volume reduction surgery (reduction), and the other six had identical incisions without tissue removal (sham). The initial distances among the crystal pairs were recorded before and immediately after surgery to calculate the dimensional losses. Relative to the initial distances, the reduction and sham surgeries produced tongue dimensional losses of 3-66% and 1-4%, respectively. The largest deformation in sham animals during mastication was in AW, significantly larger than in LENG, PDW, PVW, and THICK (P < 0.01-0.001). In reduction animals, however, these deformational changes were significantly diminished in the anterior tongue and enhanced in the posterior tongue (P < 0.05-0.001). In both groups, neuromuscular stimulation produced deformational ranges that were 2-4 times smaller than those that occurred during chewing. Furthermore, reduction animals showed significantly decreased ranges of deformation in PVW, LENG, and THICK (P < 0.05-0.01). These results indicate that tongue volume reduction alters the tongue internal kinematics, and that the dimensional losses in the anterior tongue caused by volume reduction can be compensated by increased deformations in the posterior tongue during mastication. This compensatory effect, however, diminishes during stimulation of the hypoglossal nerve and individual tongue muscles. PMID:18484603
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Ensemble based on static classifier selection for automated diagnosis of Mild Cognitive Impairment.
Nanni, Loris; Lumini, Alessandra; Zaffonato, Nicolò
2018-05-15
Alzheimer's disease (AD) is the most common cause of neurodegenerative dementia in the elderly population. Scientific research is very active in the challenge of designing automated approaches to achieve an early and certain diagnosis. Recently an international competition among AD predictors has been organized: "A Machine learning neuroimaging challenge for automated diagnosis of Mild Cognitive Impairment" (MLNeCh). This competition is based on pre-processed sets of T1-weighted Magnetic Resonance Images (MRI) to be classified in four categories: stable AD, individuals with MCI who converted to AD, individuals with MCI who did not convert to AD, and healthy controls. In this work, we propose a method to perform early diagnosis of AD, which is evaluated on the MLNeCh dataset. Since the automatic classification of AD is based on the use of feature vectors of high dimensionality, different techniques of feature selection/reduction are compared in order to avoid the curse-of-dimensionality problem; the classification method is then obtained as the combination of Support Vector Machines trained using different clusters of data extracted from the whole training set. The multi-classifier approach proposed in this work outperforms all the stand-alone methods tested in our experiments. The final ensemble is based on a set of classifiers, each trained on a different cluster of the training data. The proposed ensemble has the great advantage of performing well using a very reduced version of the data (the reduction factor is more than 90%). The MATLAB code for the ensemble of classifiers will be made publicly available to other researchers for future comparisons. Copyright © 2017 Elsevier B.V. All rights reserved.
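A minimal sketch of the cluster-wise ensemble idea with scikit-learn is shown below; the cluster count, SVM kernel, fusion rule, and synthetic data are assumptions for illustration and do not reproduce the paper's feature reduction or tuning.

```python
# One classifier per training-data cluster, fused by averaging class probabilities.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
classes = np.unique(y)

n_clusters = 5
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

members = []
for c in range(n_clusters):
    idx = km.labels_ == c
    if len(np.unique(y[idx])) > 1:                 # skip degenerate clusters
        members.append(SVC(kernel="rbf", probability=True).fit(X[idx], y[idx]))

def ensemble_predict(Xnew):
    # Average class probabilities of all members, aligning each member's class list.
    acc = np.zeros((len(Xnew), len(classes)))
    for m in members:
        p = m.predict_proba(Xnew)
        for j, c in enumerate(m.classes_):
            acc[:, np.searchsorted(classes, c)] += p[:, j]
    return classes[np.argmax(acc, axis=1)]

print("training accuracy:", (ensemble_predict(X) == y).mean())
```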
In vivo self-gated 23Na MRI at 7 T using an oval-shaped body resonator.
Platt, Tanja; Umathum, Reiner; Fiedler, Thomas M; Nagel, Armin M; Bitz, Andreas K; Maier, Florian; Bachert, Peter; Ladd, Mark E; Wielpütz, Mark O; Kauczor, Hans-Ulrich; Behl, Nicolas G R
2018-02-09
This work addresses three challenges of sodium (23Na) torso MRI on the way to quantitative 23Na MRI: development of a 23Na radiofrequency transmit and receive coil covering a large part of the human body in width and length for 23Na MRI at 7 T; reduction of blurring due to respiration in free-breathing 23Na MRI using a self-gating approach; and reduction of image noise using a compressed-sensing reconstruction. An oval-shaped birdcage resonator with a large field of view of (400 mm)^3 and a homogeneous transmit and receive field distribution was designed, simulated, and implemented on a 7T MR system. In free-breathing 3-dimensional radial 23Na MRI (acquisition time ≈ 30 minutes), retrospective respiratory self-gating was applied, which sorts the acquired projections into two respiratory states based on the intrinsic respiration-dependent signal changes. Furthermore, a 3-dimensional dictionary-learning compressed-sensing reconstruction was applied. The developed body coil provided homogeneous radiofrequency excitation (flip angle error of 4.9% in a central region of interest of 23 × 13 × 10 cm^3) and homogeneous signal reception. The self-gating approach allowed for separation of the full data set into two subsets associated with different respiratory states (inhaled and exhaled), and thereby reduced blurring due to respiration in the separated images. Image noise was markedly reduced by the compressed-sensing algorithm. The presented body coil enables full-body-width 23Na MRI with long z-axis coverage at 7 T for the first time. Additionally, the retrospective respiratory self-gating performance is demonstrated for free-breathing lung and abdominal 23Na MRI in 3 subjects. © 2018 International Society for Magnetic Resonance in Medicine.
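A toy sketch of the retrospective self-gating step, assuming the respiration-dependent modulation can be read from the magnitude of each projection's k-space-centre sample and split at its median; the signal model and window length are assumptions, not the published reconstruction.

```python
# Toy retrospective respiratory self-gating: each radial projection is assigned to an
# "inhaled" or "exhaled" bin based on its (smoothed) k-space-centre magnitude.
import numpy as np

def self_gate(projections, dc_index=0, window=15):
    """projections: complex array, shape (n_proj, n_samples); returns two index arrays."""
    dc = np.abs(projections[:, dc_index])              # per-projection DC magnitude
    kernel = np.ones(window) / window
    dc_smooth = np.convolve(dc, kernel, mode="same")   # suppress noise, keep respiration
    exhaled = np.where(dc_smooth >= np.median(dc_smooth))[0]
    inhaled = np.where(dc_smooth < np.median(dc_smooth))[0]
    return inhaled, exhaled

rng = np.random.default_rng(1)
resp = 1.0 + 0.1 * np.sin(2 * np.pi * np.arange(4000) / 400)     # synthetic respiration
proj = resp[:, None] * (rng.normal(size=(4000, 64)) + 1j * rng.normal(size=(4000, 64)))
proj[:, 0] = resp * 10 + rng.normal(scale=0.2, size=4000)        # strong DC sample
inh, exh = self_gate(proj)
```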
Integration of Tidal Prism Model and HSPF for simulating indicator bacteria in coastal watersheds
NASA Astrophysics Data System (ADS)
Sobel, Rose S.; Rifai, Hanadi S.; Petersen, Christina M.
2017-09-01
Coastal water quality is strongly influenced by tidal fluctuations and water chemistry. There is a need for rigorous models that are not computationally or economically prohibitive, but still allow simulation of the hydrodynamics and bacteria sources for coastal, tidally influenced streams and bayous. This paper presents a modeling approach that links a Tidal Prism Model (TPM) implemented in an Excel-based modeling environment with a watershed runoff model (Hydrologic Simulation Program FORTRAN, HSPF) for such watersheds. The TPM is a one-dimensional mass balance approach that accounts for loading from tidal exchange, runoff, point sources, and bacteria die-off at an hourly time-step resolution. The novel use of equal high-resolution time steps in this study allowed seamless integration of the TPM and HSPF. The linked model was calibrated to flow and E. coli data (for HSPF), and salinity and enterococci data (for the TPM) for a coastal stream in Texas. Sensitivity analyses showed the TPM to be most influenced by changes in net decay rates, followed by tidal and runoff loads, respectively. Management scenarios were evaluated with the developed linked model to assess the impact of runoff load reductions and improved wastewater treatment plant quality, and to determine the areas of critical need for such reductions. Achieving water quality standards for bacteria required load reductions that ranged from zero to 90% for the modeled coastal stream.
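A schematic hourly mass-balance update in the spirit of a tidal prism model is sketched below; the exchange formulation, coefficients, and inputs are illustrative placeholders rather than the calibrated Texas model.

```python
# Schematic hourly mass balance for bacteria in a tidal segment: runoff loading,
# tidal exchange with boundary water, and first-order die-off. Values are illustrative.
import numpy as np

def tpm_step(C, V, Q_runoff, C_runoff, Q_tide, C_boundary, k_decay, dt=3600.0):
    """One explicit step for concentration C (cfu/m^3) in a segment of volume V (m^3).
    Q_runoff, Q_tide in m^3/s; k_decay in 1/day; dt in seconds."""
    load_in = Q_runoff * C_runoff * dt                 # runoff loading (cfu)
    exchange = Q_tide * (C_boundary - C) * dt          # tidal exchange (cfu)
    decay = k_decay / 86400.0 * C * V * dt             # first-order die-off (cfu)
    mass = C * V + load_in + exchange - decay
    return max(mass / V, 0.0)

C, series = 1.0e4, []
for t in range(72):                                    # 72 hourly steps
    Q_tide = 5.0 * np.cos(2 * np.pi * t / 12.42)       # semidiurnal modulation (illustrative)
    C = tpm_step(C, V=2.0e5, Q_runoff=0.5, C_runoff=5.0e5,
                 Q_tide=abs(Q_tide), C_boundary=2.0e3, k_decay=0.7)
    series.append(C)
```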
Three New (2+1)-dimensional Integrable Systems and Some Related Darboux Transformations
NASA Astrophysics Data System (ADS)
Guo, Xiu-Rong
2016-06-01
We introduce two operator commutators by using different-degree loop algebras of the Lie algebra A1, then under the framework of zero curvature equations we generate two (2+1)-dimensional integrable hierarchies, including the (2+1)-dimensional shallow water wave (SWW) hierarchy and the (2+1)-dimensional Kaup-Newell (KN) hierarchy. Through reduction of the (2+1)-dimensional hierarchies, we get a (2+1)-dimensional SWW equation and a (2+1)-dimensional KN equation. Furthermore, we obtain two Darboux transformations of the (2+1)-dimensional SWW equation. Similarly, the Darboux transformations of the (2+1)-dimensional KN equation could be deduced. Finally, with the help of the spatial spectral matrix of SWW hierarchy, we generate a (2+1) heat equation and a (2+1) nonlinear generalized SWW system containing inverse operators with respect to the variables x and y by using a reduction spectral problem from the self-dual Yang-Mills equations. Supported by the National Natural Science Foundation of China under Grant No. 11371361, the Shandong Provincial Natural Science Foundation of China under Grant Nos. ZR2012AQ011, ZR2013AL016, ZR2015EM042, National Social Science Foundation of China under Grant No. 13BJY026, the Development of Science and Technology Project under Grant No. 2015NS1048 and A Project of Shandong Province Higher Educational Science and Technology Program under Grant No. J14LI58
NASA Astrophysics Data System (ADS)
Jones, T.; Detwiler, R. L.
2017-12-01
Fractures act as dominant pathways for fluid flow in low-permeability rocks. However, in many subsurface environments, fluid rock reactions can lead to mineral precipitation, which alters fracture surface geometry and reduces fracture permeability. In natural fractures, surface mineralogy and roughness are often heterogeneous, leading to variations in both velocity and reactive surface area. The combined effects of surface roughness and mineral heterogeneity can lead to large disparities in local precipitation rates that are difficult to predict due to the strong coupling between dissolved mineral transport and reactions at the fracture surface. Recent experimental observations suggest that mineral precipitation in a heterogeneous fracture may promote preferential flow and focus large dissolved ion concentrations into regions with limited reactive surface area. Here, we build on these observations using reactive transport simulations. Reactive transport is simulated with a quasi-steady-state 2D model that uses a depth-averaged mass-transfer relationship to describe dissolved mineral transport across the fracture aperture and local precipitation reactions. Mineral precipitation-induced changes to fracture surface geometry are accounted for using two different approaches: (1) by only allowing reactive minerals to grow vertically, and (2) by allowing three-dimensional mineral growth at reaction sites. Preliminary results from simulations using (1) suggest that precipitation-induced aperture reduction focuses flow into thin flow paths. This flow focusing causes a reduction in the fracture-scale precipitation rate, and precipitation ceases when the reaction zone extends the entire length of the fracture. This approach reproduces experimental observations at early time reasonably well, but as precipitation proceeds, reaction sites can grow laterally along the fracture surfaces, which is not predicted by (1). To account for three-dimensional mineral growth (2), we have incorporated a level-set-method based approach for tracking the mineral interfaces in three dimensions. This provides a mechanistic approach for simulating the dynamics of the formation, and eventual closing, of preferential flow paths by precipitation-induced aperture alteration, that do not occur using (1).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Gaurav; Raju, Mandhapati P.; Sung, Chih-Jen
2010-07-15
In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in predicting the first-stage ignition delays, although quantitative discrepancy in the prediction of the total ignition delays and pressure rise during first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations reduces. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation. (author)
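A small sketch of the 'adiabatic volume expansion' surrogate, assuming an isentropic ideal gas with constant specific-heat ratio: the nonreactive pressure trace is converted into an effective volume and adiabatic-core temperature. The constant gamma and the synthetic pressure decay are simplifying assumptions for illustration.

```python
# "Adiabatic volume expansion" surrogate for RCM heat loss: the measured nonreactive
# pressure trace defines an effective volume and adiabatic-core temperature under an
# isentropic ideal-gas assumption with constant gamma (a simplification).
import numpy as np

def adiabatic_core(p, p0, T0, V0, gamma=1.35):
    """Effective volume and core temperature from a pressure trace p(t)."""
    V_eff = V0 * (p0 / p) ** (1.0 / gamma)
    T_core = T0 * (p / p0) ** ((gamma - 1.0) / gamma)
    return V_eff, T_core

t = np.linspace(0.0, 0.1, 500)                 # 100 ms after end of compression
p = 20.0e5 * np.exp(-t / 0.25)                 # slowly decaying pressure trace (Pa), synthetic
V_eff, T_core = adiabatic_core(p, p0=p[0], T0=750.0, V0=5.0e-4)
```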
Reduction of Large Dynamical Systems by Minimization of Evolution Rate
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.
1999-01-01
Reduction of a large system of equations to a lower-dimensional system of similar dynamics is investigated. For dynamical systems with disparate timescales, a criterion for determining redundant dimensions and a general reduction method based on the minimization of evolution rate are proposed.
Charged black holes in compactified spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karlovini, Max; Unge, Rikard von
2005-11-15
We construct and investigate a compactified version of the four-dimensional Reissner-Nordstroem-Taub-NUT solution, generalizing the compactified Schwarzschild black hole that has been previously studied by several workers. Our approach to compactification is based on dimensional reduction with respect to the stationary Killing vector, resulting in three-dimensional gravity coupled to a nonlinear sigma model. Knowing that the original noncompactified solution corresponds to a target space geodesic, the problem can be linearized much in the same way as in the case of no electric or Taub-NUT charge. An interesting feature of the solution family is that, for nonzero electric charge but vanishing Taub-NUT charge, the solution has a curvature singularity on a torus that surrounds the event horizon, but this singularity is removed when the Taub-NUT charge is switched on. We also treat the Schwarzschild case in a more complete way than has been done previously. In particular, the asymptotic solution (the Levi-Civita solution with the height coordinate made periodic) has, to our knowledge, only been calculated up to a determination of the mass parameter. The periodic Levi-Civita solution contains three essential parameters, however, and the remaining two are explicitly calculated here.
Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.
Bloom, David J; Lee, Soo-Yeun
2016-09-01
Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
Nonlinear dimensionality reduction of data lying on the multicluster manifold.
Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben
2008-08-01
A new method, which is called decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster by the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations by the composition procedure. Different from other NLDR methods for multicluster data, which consider associatively the intracluster and intercluster information, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on the synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically analyzed and experimentally demonstrated. Related strategies for automatic parameter selection are also examined.
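A toy decomposition-composition sketch follows: clusters are embedded independently with Isomap and then placed in a common frame. The composition rule used here (translating each cluster embedding to the PCA projection of its centroid) is a simplification of the paper's use of intercluster connections, which also fixes cluster orientations.

```python
# Toy decomposition-composition: embed clusters independently, then compose by translation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# Three well-separated Gaussian clusters in 10-D.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 10)) for c in (0.0, 4.0, 8.0)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
anchors = PCA(n_components=2).fit(X).transform(
    np.vstack([X[labels == c].mean(axis=0) for c in range(3)]))

embedding = np.zeros((len(X), 2))
for c in range(3):
    idx = labels == c
    local = Isomap(n_neighbors=10, n_components=2).fit_transform(X[idx])   # decomposition
    embedding[idx] = local - local.mean(axis=0) + anchors[c]               # composition (translation only)
```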
Akram, M Nadeem; Tong, Zhaomin; Ouyang, Guangmin; Chen, Xuyuan; Kartashov, Vladimir
2010-06-10
We utilize spatial and angular diversity to achieve speckle reduction in laser illumination. Both free-space and imaging geometry configurations are considered. A fast two-dimensional scanning micromirror is employed to steer the laser beam. A simple experimental setup is built to demonstrate the application of our technique in a two-dimensional laser picture projection. Experimental results show that the speckle contrast factor can be reduced down to 5% within the integration time of the detector.
Argyres–Douglas theories, S^1 reductions, and topological symmetries
Buican, Matthew; Nishinaka, Takahiro
2015-12-21
In a recent paper, we proposed closed-form expressions for the superconformal indices of the (A(1), A(2n-3)) and (A(1), D(2n)) Argyres-Douglas (AD) superconformal field theories (SCFTs) in the Schur limit. Following up on our results, we turn our attention to the small S^1 regime of these indices. As expected on general grounds, our study reproduces the S^3 partition functions of the resulting dimensionally reduced theories. However, we show that in all cases, with the exception of the reduction of the (A(1), D(4)) SCFT, certain imaginary partners of real mass terms are turned on in the corresponding mirror theories. We interpret these deformations as R symmetry mixing with the topological symmetries of the direct S^1 reductions. Moreover, we argue that these shifts occur in any of our theories whose four-dimensional N = 2 superconformal U(1)(R) symmetry does not obey an SU(2) quantization condition. We then use our R symmetry map to find the four-dimensional ancestors of certain three-dimensional operators. Somewhat surprisingly, this picture turns out to imply that the scaling dimensions of many of the chiral operators of the four-dimensional theory are encoded in accidental symmetries of the three-dimensional theory. We also comment on the implications of our work for the space of general N = 2 SCFTs.
Wang, Guang-Ye; Huang, Wen-Jun; Song, Qi; Qin, Yun-Tian; Liang, Jin-Feng
2016-12-01
Acetabular fractures have always been very challenging for orthopedic surgeons; therefore, appropriate preoperative evaluation and planning are particularly important. This study aimed to explore the application methods and clinical value of preoperative computer simulation (PCS) in treating pelvic and acetabular fractures. Spiral computed tomography (CT) was performed on 13 patients with pelvic and acetabular fractures, and Digital Imaging and Communications in Medicine (DICOM) data were then input into Mimics software to reconstruct three-dimensional (3D) models of actual pelvic and acetabular fractures for preoperative simulative reduction and fixation, and to simulate each surgical procedure. The times needed for virtual surgical modeling and reduction and fixation were also recorded. The average fracture-modeling time was 45 min (30-70 min), and the average time for bone reduction and fixation was 28 min (16-45 min). Among the surgical approaches planned for these 13 patients, 12 were finally adopted; 12 cases used the simulated surgical fixation, and only 1 case used a partial planned fixation method. PCS can provide accurate surgical plans and data support for actual surgeries.
Green reduction of graphene oxide by ascorbic acid
NASA Astrophysics Data System (ADS)
Khosroshahi, Zahra; Kharaziha, Mahshid; Karimzadeh, Fathallah; Allafchian, Alireza
2018-01-01
Graphene, a single layer of sp2-hybridized carbon atoms in a hexagonal (two-dimensional honeycomb) lattice, has attracted strong scientific and technological interest due to its novel and excellent optical, chemical, electrical, mechanical, and thermal properties. The solution-processable chemical reduction of graphene oxide (GO) is considered the most favorable method for mass production of graphene. Generally, the reduction of GO is carried out by chemical approaches using different reductants such as hydrazine and sodium borohydride. These components are corrosive, combustible, and highly toxic, which may be dangerous for personnel health and the environment. Hence, these reducing agents are not a promising choice for the reduction of GO. As a consequence, further development and optimization of eco-friendly, natural reducing agents is necessary for clean and effective reduction of GO. Ascorbic acid is an eco-friendly, natural reducing agent with a mild reducing ability and nontoxic properties. The aim of this research was therefore the green reduction of GO with ascorbic acid. For this purpose, the required amounts of NaOH and ascorbic acid were added to a GO solution (0.5 mg/ml) and heated at 95 °C for 1 hour. According to the X-ray powder diffraction (XRD), scanning electron microscopy (SEM), and electrochemical results, GO was reduced by ascorbic acid as effectively as by hydrazine, with better electrochemical properties, and ascorbic acid is an ideal substitute for hydrazine in the graphene oxide reduction process.
NASA Astrophysics Data System (ADS)
So, Hongyun; Senesky, Debbie G.
2016-01-01
In this letter, three-dimensional gateless AlGaN/GaN high electron mobility transistors (HEMTs) were demonstrated with 54% reduction in electrical resistance and 73% increase in surface area compared with conventional gateless HEMTs on planar substrates. Inverted pyramidal AlGaN/GaN surfaces were microfabricated using potassium hydroxide etched silicon with exposed (111) surfaces and metal-organic chemical vapor deposition of coherent AlGaN/GaN thin films. In addition, electrical characterization of the devices showed that a combination of series and parallel connections of the highly conductive two-dimensional electron gas along the pyramidal geometry resulted in a significant reduction in electrical resistance at both room and high temperatures (up to 300 °C). This three-dimensional HEMT architecture can be leveraged to realize low-power and reliable power electronics, as well as harsh environment sensors with increased surface area.
NASA Astrophysics Data System (ADS)
Zhang, Zhaoguo; Huang, Zhengfeng; Cheng, Xudong; Wang, Qingli; Chen, Yi; Dong, Peimei; Zhang, Xiwen
2015-11-01
The influence of the nitrogen source on the photocatalytic properties of nitrogen-doped titanium dioxide is herein investigated for the first time from the perspective of the chemical bond form of the nitrogen element in the nitrogen source. The definitive role of groups such as N-N from the nitrogen source on the surface of the as-prepared samples in the selectivity of the dominant product of photocatalytic reduction is demonstrated. Well-crystallized one-dimensional N-TiO2 nanorod arrays with a preferred orientation of the rutile (3 1 0) facet were manufactured via a hydrothermal treatment using hydrazine and ammonia variously as the source of nitrogen. Significant selectivity of the dominant reduced products is exhibited by N-TiO2 prepared from the different nitrogen sources in carbon dioxide photocatalytic reduction under visible light illumination. CH4 is the main product with N2H4-doped N-TiO2, while CO is the main product with NH3-doped N-TiO2, which can be attributed to the existence of reducing N-N groups on the N2H4-doped N-TiO2 surfaces after the hydrothermal treatment. Compared with previously reported approaches, the facile one-step route utilized here simultaneously accomplishes the fabrication of N-TiO2 possessing visible-light activity and the attainment of selectivity of the dominant photocatalytic reduction product by choosing a nitrogen source with the appropriate chemical bond form, which provides a completely new approach to understanding the effects of doping treatment on photocatalytic properties.
The quantum n-body problem in dimension d ⩾ n – 1: ground state
NASA Astrophysics Data System (ADS)
Miller, Willard, Jr.; Turbiner, Alexander V.; Escobar-Ruiz, M. A.
2018-05-01
We employ generalized Euler coordinates for the n-body system in d-dimensional space, which consist of the centre-of-mass vector, relative (mutual) mass-independent distances r_ij, and angles as the remaining coordinates. We prove that for d ⩾ n − 1 the kinetic energy of the quantum n-body problem can be written as the sum of three terms: (i) the kinetic energy of the centre-of-mass, (ii) a second-order differential operator which depends on the relative distances alone, and (iii) a differential operator which annihilates any angle-independent function. The second operator has a large reflection symmetry group and, in suitable variables, is an algebraic operator which can be written in terms of the generators of a hidden algebra. Thus, it can be interpreted as the Hamiltonian of a quantum Euler–Arnold top in a constant magnetic field. It is conjectured that for any n the similarity-transformed operator is the Laplace–Beltrami operator plus an (effective) potential; thus, it describes a quantum particle in a curved space. This was verified in particular cases. After de-quantization, the similarity-transformed operator becomes the Hamiltonian of a classical top with a variable tensor of inertia in an external potential. This approach allows a reduction of the dn-dimensional spectral problem to a lower-dimensional spectral problem in the relative distances if the eigenfunctions depend only on the relative distances. We prove that the ground state function of the n-body problem depends on the relative distances alone.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
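The sketch below is a scalarized stand-in, not the paper's method: it minimizes a single weighted sum of reconstruction and classification losses by gradient descent in PyTorch, whereas the paper searches for Pareto-optimal trade-offs with a non-dominated sorting genetic algorithm. Network sizes, the weight alpha, and the random data are assumptions.

```python
# Scalarized stand-in for a multi-objective auto-encoder: one weighted sum of
# reconstruction error (MRE) and classification error (MCE) is minimized directly.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 100)                       # synthetic "feature vectors"
y = torch.randint(0, 12, (512,))                # 12 classes, as in the mammogram study

class SupervisedAE(nn.Module):
    def __init__(self, d_in=100, d_code=16, n_classes=12):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, 64), nn.ReLU(), nn.Linear(64, d_in))
        self.clf = nn.Linear(d_code, n_classes)
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.clf(z)

model = SupervisedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, ce, alpha = nn.MSELoss(), nn.CrossEntropyLoss(), 0.5

for epoch in range(200):
    opt.zero_grad()
    recon, logits = model(X)
    loss = alpha * mse(recon, X) + (1 - alpha) * ce(logits, y)   # scalarized objective
    loss.backward()
    opt.step()
```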
A data reduction package for multiple object spectroscopy
NASA Technical Reports Server (NTRS)
Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.
1986-01-01
Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects that must in turn be matched by data reduction capability increases. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.
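A toy version of the spectrum-location step: collapse a frame along the dispersion axis and detect peaks in the cross-dispersion profile with scipy. The synthetic frame and thresholds are assumptions; this is not the Medusa package's actual ridge-finding algorithm.

```python
# Toy "ridge finding": locate fibre spectra in a 2D frame via peaks of the collapsed profile.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
ny, nx, n_fibres = 512, 1024, 44
frame = rng.normal(0.0, 1.0, size=(ny, nx))
centres = np.linspace(20, ny - 20, n_fibres)
rows = np.arange(ny)[:, None]
for c in centres:                                   # add Gaussian fibre profiles
    frame += 50.0 * np.exp(-0.5 * ((rows - c) / 2.0) ** 2)

profile = frame.mean(axis=1)                        # collapse along dispersion axis
peaks, _ = find_peaks(profile, height=10.0, distance=5)
print(f"found {len(peaks)} spectra (expected {n_fibres})")
```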
Hwang, Ju-Ae; Yang, Heung-Mo; Hong, Doo-Pyo; Joo, Sung-Yeon; Choi, Yoon-La; Park, Joo-Hung; Lazar, Alexander J; Pollock, Raphael E; Lev, Dina; Kim, Sung Joo
2014-10-15
Liposarcoma is one of the most common histologic types of soft tissue sarcoma and is frequently an aggressive cancer with poor outcome. Hence, alternative approaches other than surgical excision are necessary to improve treatment of well-differentiated/dedifferentiated liposarcoma (WDLPS/DDLPS). For this reason, we performed a two-dimensional gel electrophoresis (2-DE) and matrix-assisted laser desorption/ionization-time of flight mass spectrometry/mass spectrometry (MALDI-TOF/MS) analysis to identify new factors for WDLPS and DDLPS. Among the selected candidate proteins, gankyrin, known to be an oncoprotein, showed a significantly high expression level and inversely low expression of p53/p21 in WDLPS and DDLPS tissues, suggesting possible utility as a new predictive factor. Moreover, inhibition of gankyrin not only led to reduction of in vitro cell growth ability, including cell proliferation, colony formation, and migration, but also to a reduction of in vivo DDLPS cell tumorigenesis, perhaps via downregulation of the p53 tumor suppressor gene and its p21 target and also reduction of AKT/mTOR signal activation. This study identifies gankyrin, for the first time, as a new potential predictive and oncogenic factor in WDLPS and DDLPS, suggesting potential for service as a future LPS therapeutic approach.
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
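A bare-bones sketch of the spectral-regression idea: orthogonalized class-indicator responses are regressed on the features with a ridge penalty instead of eigen-decomposing dense scatter matrices. The response construction and penalty value are simplified assumptions, not the exact SRDA formulation.

```python
# Spectral-regression-style discriminant embedding via ridge regression on class indicators.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Ridge

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Class-indicator responses, with the constant component removed, then orthogonalized.
Y = np.stack([(y == c).astype(float) for c in classes], axis=1)
Y = Y - Y.mean(axis=0)
Q, _ = np.linalg.qr(Y)
responses = Q[:, :len(classes) - 1]          # c-1 orthogonal response vectors

# One regularized regression per response vector gives the projection directions.
W = np.column_stack([Ridge(alpha=1.0, fit_intercept=False).fit(X, r).coef_
                     for r in responses.T])
Z = X @ W                                    # (c-1)-dimensional discriminant embedding
print(Z.shape)
```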
Synthetic dimensions for cold atoms from shaking a harmonic trap
NASA Astrophysics Data System (ADS)
Price, Hannah M.; Ozawa, Tomoki; Goldman, Nathan
2017-02-01
We introduce a simple scheme to implement synthetic dimensions in ultracold atomic gases, which only requires two basic and ubiquitous ingredients: the harmonic trap, which confines the atoms, combined with a periodic shaking. In our approach, standard harmonic oscillator eigenstates are reinterpreted as lattice sites along a synthetic dimension, while the coupling between these lattice sites is controlled by the applied time modulation. The phase of this modulation enters as a complex hopping phase, leading straightforwardly to an artificial magnetic field upon adding a second dimension. We show that this artificial gauge field has important consequences, such as the counterintuitive reduction of average energy under resonant driving, or the realization of quantum Hall physics. Our approach offers significant advantages over previous implementations of synthetic dimensions, providing an intriguing route towards higher-dimensional topological physics and strongly-correlated states.
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.
1981-01-01
Present approaches to solving the stationary Navier-Stokes equations are of limited value; however, there exists an equivalent representation of the problem that has significant potential for solving such problems. This is because the equivalent representation consists of a sequence of Fredholm integral equations of the second kind, and methods for solving this type of problem are very well developed. For the problem in this form, there is also an excellent chance of determining explicit error estimates, since bounded, rather than unbounded, linear operators are involved.
Dynamical behavior of susceptible-infected-recovered-susceptible epidemic model on weighted networks
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Zhang, Fei
2018-02-01
We study susceptible-infected-recovered-susceptible epidemic model in weighted, regular, and random complex networks. We institute a pairwise-type mathematical model with a general transmission rate to evaluate the influence of the link-weight distribution on the spreading process. Furthermore, we develop a dimensionality reduction approach to derive the condition for the contagion outbreak. Finally, we analyze the influence of the heterogeneity of weight distribution on the outbreak condition for the scenario with a linear transmission rate. Our theoretical analysis is in agreement with stochastic simulations, showing that the heterogeneity of link-weight distribution can have a significant effect on the epidemic dynamics.
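A discrete-time stochastic SIRS simulation on a weighted random regular graph, with a per-contact infection probability proportional to the link weight (a linear transmission rate), gives a feel for the setup; the network size, weight distribution, and rates below are illustrative assumptions, not the paper's pairwise model.

```python
# Discrete-time stochastic SIRS on a weighted random regular graph (illustrative parameters).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.random_regular_graph(6, 500, seed=0)
for u, v in G.edges:                          # heterogeneous link weights
    G[u][v]["w"] = rng.exponential(1.0)

beta, gamma, xi = 0.05, 0.2, 0.05             # transmission, recovery, loss of immunity
state = np.zeros(G.number_of_nodes(), dtype=int)   # 0=S, 1=I, 2=R
state[rng.choice(len(state), 5, replace=False)] = 1

for step in range(200):
    new = state.copy()
    for u, v, d in G.edges(data=True):
        for a, b in ((u, v), (v, u)):         # infection along each directed contact
            if state[a] == 1 and state[b] == 0 and rng.random() < beta * d["w"]:
                new[b] = 1
    recover = (state == 1) & (rng.random(len(state)) < gamma)
    wane = (state == 2) & (rng.random(len(state)) < xi)
    new[recover], new[wane] = 2, 0
    state = new

print("final infected fraction:", (state == 1).mean())
```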
ERIC Educational Resources Information Center
Hoko, J. Aaron; LeBlanc, Judith M.
1988-01-01
Because disabled learners may profit from procedures using gradual stimulus change, this study utilized a microcomputer to investigate the effectiveness of stimulus equalization, an error reduction procedure involving an abrupt but temporary reduction of dimensional complexity. The procedure was found to be generally effective and implications for…
A two-dimensional lattice equation as an extension of the Heideman-Hogan recurrence
NASA Astrophysics Data System (ADS)
Kamiya, Ryo; Kanki, Masataka; Mase, Takafumi; Tokihiro, Tetsuji
2018-03-01
We consider a two-dimensional extension of the so-called linearizable mappings. In particular, we start from the Heideman-Hogan recurrence, which is known as one of the linearizable Somos-like recurrences, and introduce one of its two-dimensional extensions. The two-dimensional lattice equation we present is linearizable in both directions, and has the Laurent and the coprimeness properties. Moreover, its reduction produces a generalized family of the Heideman-Hogan recurrence. Higher order examples of two-dimensional linearizable lattice equations related to the Dana Scott recurrence are also discussed.
Development of an interactive anatomical three-dimensional eye model.
Allen, Lauren K; Bhattacharyya, Siddhartha; Wilson, Timothy D
2015-01-01
The discrete anatomy of the eye's intricate oculomotor system is conceptually difficult for novice students to grasp. This is problematic given that this group of muscles represents one of the most common sites of clinical intervention in the treatment of ocular motility disorders and other eye disorders. This project was designed to develop a digital, interactive, three-dimensional (3D) model of the muscles and cranial nerves of the oculomotor system. Development of the 3D model utilized data from the Visible Human Project (VHP) dataset that was refined using multiple forms of 3D software. The model was then paired with a virtual user interface in order to create a novel 3D learning tool for the human oculomotor system. Development of the virtual eye model was done while attempting to adhere to the principles of cognitive load theory (CLT) and the reduction of extraneous load in particular. The detailed approach, digital tools employed, and the CLT guidelines are described herein. © 2014 American Association of Anatomists.
Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.
Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz
2014-04-21
We introduce a fast, simple, adaptive, and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our innovative data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period), but a precise phase displacement between the two frames is not required. Upon frame subtraction, an input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on empirical mode decomposition for object spatial frequency selection (noise reduction and bias term removal). The second stage consists in calculating the high-contrast image using the two-dimensional spiral Hilbert transform. The effectiveness of our algorithm is compared with results calculated for the same input data using structured-illumination (SIM) and HiLo microscopy methods. The input data were collected while studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with the SIM and HiLo techniques.
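A minimal two-shot demodulation sketch, assuming synthetic frames: subtracting the two π-shifted grid images rejects the background, and a Fourier-domain spiral (vortex) filter supplies the quadrature term for envelope detection. The EMD-based noise and bias removal described above is omitted here.

```python
# Two-shot sectioning sketch: frame subtraction followed by spiral-Hilbert envelope detection.
import numpy as np

def sectioned_image(diff):
    F = np.fft.fftshift(np.fft.fft2(diff))
    ny, nx = diff.shape
    v, u = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2, indexing="ij")
    spiral = np.exp(1j * np.arctan2(v, u))               # spiral phase (vortex) filter
    spiral[ny // 2, nx // 2] = 0.0
    quad = np.fft.ifft2(np.fft.ifftshift(F * spiral))    # quadrature component
    return np.sqrt(diff ** 2 + np.abs(quad) ** 2)        # fringe envelope

# Synthetic in-focus object modulated by a grid, plus a uniform out-of-focus background.
ny = nx = 256
yy, xx = np.mgrid[0:ny, 0:nx]
obj = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40.0 ** 2))
grid = lambda phase: 0.5 * (1 + np.cos(2 * np.pi * xx / 8 + phase))
frame0, frame_pi = obj * grid(0.0) + 0.3, obj * grid(np.pi) + 0.3
sectioned = sectioned_image(frame0 - frame_pi)
```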
Lopes, Bianca Rebelo; Cassiano, Neila Maria; Carvalho, Daniela Miarelli; Moisés, Elaine Christine Dantas; Cass, Quezia Bezerra
2018-02-20
A two-dimensional liquid chromatography system coupled to a triple quadrupole tandem mass spectrometer (2D LC-MS/MS) was employed for the determination of fluoxetine (FLU) and norfluoxetine (N-FLU) in colostrum and mature milk by direct sample injection. With a run time of 12 min, representing a gain in analysis throughput, the validated methods furnished selectivity, extraction efficiency, accuracy, and precision in accordance with the criteria preconized by the European Medicines Agency guidelines. With a linear range of 3.00-150 ng/mL for FLU and 4.00-200 ng/mL for N-FLU, they were applied to the analysis of colostrum and mature milk samples from nursing mothers. The paper discusses the differences and similarities of sample preparation for these two sample matrices. The methods reported herein are an advance in sample preparation procedures, providing waste reduction and a sustainable approach. Copyright © 2017 Elsevier B.V. All rights reserved.
On the Impact of Wind Farms on a Convective Atmospheric Boundary Layer
NASA Astrophysics Data System (ADS)
Lu, Hao; Porté-Agel, Fernando
2015-10-01
With the rapid growth in the number of wind turbines installed worldwide, a demand exists for a clear understanding of how wind farms modify land-atmosphere exchanges. Here, we conduct three-dimensional large-eddy simulations to investigate the impact of wind farms on a convective atmospheric boundary layer. Surface temperature and heat flux are determined using a surface thermal energy balance approach, coupled with the solution of a three-dimensional heat equation in the soil. We study several cases of aligned and staggered wind farms with different streamwise and spanwise spacings. The farms consist of Siemens SWT-2.3-93 wind turbines. Results reveal that, in the presence of wind turbines, the stability of the atmospheric boundary layer is modified, the boundary-layer height is increased, and the magnitude of the surface heat flux is slightly reduced. Results also show an increase in land-surface temperature, a slight reduction in the vertically-integrated temperature, and a heterogeneous spatial distribution of the surface heat flux.
Performance and analysis of a three-dimensional nonorthogonal laser Doppler anemometer
NASA Technical Reports Server (NTRS)
Snyder, P. K.; Orloff, K. L.; Aoyagi, K.
1981-01-01
A three dimensional laser Doppler anemometer with a nonorthogonal third axis coupled by 14 deg was designed and tested. A highly three dimensional flow field of a jet in a crossflow was surveyed to test the three dimensional capability of the instrument. Sample data are presented demonstrating the ability of the 3D LDA to resolve three orthogonal velocity components. Modifications to the optics, signal processing electronics, and data reduction methods are suggested.
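Recovering three orthogonal velocity components from three nonorthogonal channel readings amounts to a 3×3 linear solve built from the channels' sensitivity directions, as sketched below; the direction vectors (third axis tilted 14 deg) are an assumed example geometry, not the instrument's actual optical layout.

```python
# Resolve orthogonal velocity components (u, v, w) from three nonorthogonal LDA channels.
import numpy as np

deg = np.deg2rad(14.0)
directions = np.array([
    [1.0, 0.0, 0.0],                          # channel 1: x (assumed)
    [0.0, 1.0, 0.0],                          # channel 2: y (assumed)
    [0.0, np.sin(deg), np.cos(deg)],          # channel 3: tilted 14 deg from z (assumed)
])

true_velocity = np.array([3.0, -1.5, 0.8])
measured = directions @ true_velocity         # channel readings are projections onto directions

recovered = np.linalg.solve(directions, measured)
print(recovered)                              # -> [ 3.  -1.5  0.8]
```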
Generation Algorithm of Discrete Line in Multi-Dimensional Grids
NASA Astrophysics Data System (ADS)
Du, L.; Ben, J.; Li, Y.; Wang, R.
2017-09-01
Discrete Global Grid Systems (DGGS) are a kind of digital multi-resolution earth reference model; in terms of structure, they are conducive to the integration and mining of geospatial big data. Vector data are one of the important types of spatial data; only after discretization can they be processed and analysed in a grid system. Based on some constraint conditions, this paper puts forward a strict definition of discrete lines and builds a mathematical model of the discrete lines by a base-vector combination method. The problem of mesh discrete lines in n-dimensional grids is transformed, using a hyperplane, into the problem of an optimal deviated path in n−1 dimensions, thereby realizing a dimension reduction in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for the dimension reduction and generation of the discrete lines. The experimental results show that our algorithm can be applied not only in the two-dimensional rectangular grid, but also in the two-dimensional hexagonal grid and the three-dimensional cubic grid. Meanwhile, when our algorithm is applied in the two-dimensional rectangular grid, it produces a discrete line that is closer to the corresponding line in Euclidean space.
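For orientation, the standard Bresenham algorithm below shows what a discrete line looks like in the simplest case of a two-dimensional rectangular grid; it is a textbook illustration, not the base-vector/hyperplane algorithm proposed in the paper.

```python
# Textbook Bresenham line on a 2D rectangular grid (illustration only).
def bresenham(x0, y0, x1, y1):
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

print(bresenham(0, 0, 7, 3))
```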
Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering
NASA Astrophysics Data System (ADS)
Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech
2015-03-01
We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, tracking its progression over time, and testing the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis, or through dimensionality reduction algorithms followed by a classification algorithm, e.g., Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality-preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images, and a combination of PCA and VMF. LE combined with the VMF algorithm performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than as individual images.
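A toy sketch of the LE-plus-matched-filtering pipeline, assuming per-pixel features of intensity and scaled coordinates: scikit-learn's SpectralEmbedding supplies eigenimages and a Gaussian template is correlated with each of them. The feature choice, template, and data are assumptions, not the NIH pipeline.

```python
# Laplacian-Eigenmaps eigenimages followed by a simple matched filter summed across them.
import numpy as np
from scipy.signal import fftconvolve
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
n = 48
img = rng.normal(0, 0.1, (n, n))
img[20:24, 30:34] += 1.0                          # a small bright "anomaly"

# Per-pixel features: intensity plus scaled coordinates, so the embedding respects locality.
yy, xx = np.mgrid[0:n, 0:n]
features = np.column_stack([img.ravel(), 0.02 * yy.ravel(), 0.02 * xx.ravel()])
emb = SpectralEmbedding(n_components=4, n_neighbors=10).fit_transform(features)
eigenimages = [emb[:, k].reshape(n, n) for k in range(4)]

# Matched filter: correlate a zero-mean Gaussian blob template with each eigenimage.
t = np.arange(-3, 4)
template = np.exp(-(t[:, None] ** 2 + t[None, :] ** 2) / 4.0)
template -= template.mean()
score = sum(np.abs(fftconvolve(e - e.mean(), template[::-1, ::-1], mode="same"))
            for e in eigenimages)
print("peak score location:", np.unravel_index(score.argmax(), score.shape))
```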
Three-Dimensional FIB/EBSD Characterization of Irradiated HfAl3-Al Composite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hua, Zilong; Guillen, Donna Post; Harris, William
2016-09-01
A thermal neutron absorbing material, comprised of 28.4 vol% HfAl3 in an Al matrix, was developed to serve as a conductively cooled thermal neutron filter to enable fast-flux materials and fuels testing in a pressurized water reactor. In order to observe the microstructural change of the HfAl3-Al composite due to neutron irradiation, an EBSD-FIB characterization approach is developed and presented in this paper. Using the focused ion beam (FIB), the sample was fabricated to 25 µm × 25 µm × 20 µm and mounted on the grid. A series of operations were carried out repetitively on the sample top surface to prepare it for scanning electron microscopy (SEM). First, a ~100-nm layer was removed by high-voltage FIB milling. Then, several cleaning passes were performed on the newly exposed surface using low-voltage FIB milling to improve the SEM image quality. Last, the surface was scanned by electron backscatter diffraction (EBSD) to obtain a two-dimensional image. After 50 to 100 two-dimensional images were collected, the images were stacked to reconstruct a three-dimensional model using the DREAM.3D software. Two such reconstructed three-dimensional models were obtained from samples of the original and post-irradiation HfAl3-Al composite, respectively, from which the most significant microstructural change caused by neutron irradiation is apparently the size reduction of both the HfAl3 and Al grains. The possible reason is the thermal expansion and related thermal strain from thermal neutron absorption. This technique can be applied to three-dimensional microstructure characterization of irradiated materials.
Krueger, Robert F; Skodol, Andrew E; Livesley, W John; Shrout, Patrick E; Huang, Yueqin
2007-01-01
Personality disorder researchers have long considered the utility of dimensional approaches to diagnosis, signaling the need to consider a dimensional approach for personality disorders in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V). Nevertheless, a dimensional approach to personality disorders in DSM-V is more likely to succeed if it represents an orderly and logical progression from the categorical system in DSM-IV. With these considerations and opportunities in mind, the authors sought to delineate ways of synthesizing categorical and dimensional approaches to personality disorders that could inform the construction of DSM-V. This discussion resulted in (1) the idea of having a set of core descriptive elements of personality for DSM-V, (2) an approach to rating those elements for specific patients, (3) a way of combining those elements into personality disorder prototypes, and (4) a revised conception of personality disorder as a construct separate from personality traits. Copyright (c) 2007 John Wiley & Sons, Ltd.
Gauged supergravities from M-theory reductions
NASA Astrophysics Data System (ADS)
Katmadas, Stefanos; Tomasiello, Alessandro
2018-04-01
In supergravity compactifications, there is in general no clear prescription on how to select a finite-dimensional family of metrics on the internal space, and a family of forms on which to expand the various potentials, such that the lower-dimensional effective theory is supersymmetric. We propose a finite-dimensional family of deformations for regular Sasaki-Einstein seven-manifolds M7, relevant for M-theory compactifications down to four dimensions. It consists of integrable Cauchy-Riemann structures, corresponding to complex deformations of the Calabi-Yau cone M8 over M7. The non-harmonic forms we propose are the ones contained in one of the Kohn-Rossi cohomology groups, which is finite-dimensional and naturally controls the deformations of Cauchy-Riemann structures. The same family of deformations can be also described in terms of twisted cohomology of the base M6, or in terms of Milnor cycles arising in deformations of M8. Using existing results on SU(3) structure compactifications, we briefly discuss the reduction of M-theory on our class of deformed Sasaki-Einstein manifolds to four-dimensional gauged supergravity.
Cocco, Simona; Monasson, Remi; Weigt, Martin
2013-01-01
Various approaches have explored the covariation of residues in multiple-sequence alignments of homologous proteins to extract functional and structural information. Among those are principal component analysis (PCA), which identifies the most correlated groups of residues, and direct coupling analysis (DCA), a global inference method based on the maximum entropy principle, which aims at predicting residue-residue contacts. In this paper, inspired by the statistical physics of disordered systems, we introduce the Hopfield-Potts model to naturally interpolate between these two approaches. The Hopfield-Potts model allows us to identify relevant ‘patterns’ of residues from the knowledge of the eigenmodes and eigenvalues of the residue-residue correlation matrix. We show how the computation of such statistical patterns makes it possible to accurately predict residue-residue contacts with a much smaller number of parameters than DCA. This dimensional reduction allows us to avoid overfitting and to extract contact information from multiple-sequence alignments of reduced size. In addition, we show that low-eigenvalue correlation modes, discarded by PCA, are important to recover structural information: the corresponding patterns are highly localized, that is, they are concentrated in few sites, which we find to be in close contact in the three-dimensional protein fold. PMID:23990764
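A bare-bones numpy illustration of the front end described above, assuming a toy alignment: one-hot encode the sequences, eigendecompose the correlation matrix, and measure how localized the extreme eigenmodes are with an inverse participation ratio. The Hopfield-Potts inference itself is not reproduced here.

```python
# Eigenmodes of a toy MSA correlation matrix and their localization (IPR).
import numpy as np

rng = np.random.default_rng(0)
n_seq, n_pos, q = 300, 40, 4                       # toy alignment: 300 sequences, 40 sites
msa = rng.integers(0, q, size=(n_seq, n_pos))
msa[:, 5] = msa[:, 17]                             # plant a perfectly covarying site pair

onehot = np.zeros((n_seq, n_pos * q))
rows = np.repeat(np.arange(n_seq), n_pos)
onehot[rows, (np.arange(n_pos) * q + msa).ravel()] = 1.0

C = np.corrcoef(onehot, rowvar=False)              # residue-residue correlation matrix
C = np.nan_to_num(C)                               # guard against constant columns
eigval, eigvec = np.linalg.eigh(C)

def ipr(v):                                        # inverse participation ratio: high = localized
    p = v ** 2 / np.sum(v ** 2)
    return np.sum(p ** 2)

print("largest-eigenvalue mode IPR :", ipr(eigvec[:, -1]))
print("smallest-eigenvalue mode IPR:", ipr(eigvec[:, 0]))
```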
Leclerc, Arnaud; Carrington, Tucker
2014-05-07
We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP), and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.
The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Gaffney, Richard L., Jr.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
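As a concrete illustration of why different one-dimensionalization choices disagree, the sketch below contrasts two common averaging definitions, area-weighted and mass-flux-weighted, on a toy nonuniform profile. It is a generic example in Python, not one of the specific procedures compared in the paper.

```python
import numpy as np

def area_average(q, dA):
    """Area-weighted average of a quantity q over a cross-sectional plane."""
    return np.sum(q * dA) / np.sum(dA)

def mass_flux_average(q, rho, u, dA):
    """Mass-flux-weighted average; rho*u*dA is the local streamwise mass flow."""
    mdot = rho * u * dA
    return np.sum(q * mdot) / np.sum(mdot)

# Toy nonuniform profile on a cross-section: the two averages generally differ,
# which is one source of the method-to-method spread discussed above.
n = 100
dA = np.full(n, 1.0 / n)               # equal-area cells
u = np.linspace(0.2, 1.0, n)           # velocity profile across the section
rho = np.full(n, 1.2)                  # density (uniform here)
T = 300.0 + 50.0 * (1.0 - u)           # hotter where the flow is slower
print(area_average(T, dA), mass_flux_average(T, rho, u, dA))
```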
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
Advances in reduction techniques for tire contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1995-01-01
Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.
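The partition-and-reduce idea can be illustrated with classical static (Guyan) condensation, which eliminates the degrees of freedom of the region away from the contact zone while exactly reproducing the retained solution for a linear system. This is only a generic sketch of the concept; the reduction reviewed above operates on a mixed two-field formulation and uses a different reduced basis.

```python
import numpy as np

def static_condensation(K, f, keep, drop):
    """Condense out the 'drop' DOFs of K x = f, keeping only the 'keep' DOFs."""
    Kkk = K[np.ix_(keep, keep)]
    Kkd = K[np.ix_(keep, drop)]
    Kdk = K[np.ix_(drop, keep)]
    Kdd = K[np.ix_(drop, drop)]
    Kdd_inv = np.linalg.inv(Kdd)
    K_red = Kkk - Kkd @ Kdd_inv @ Kdk
    f_red = f[keep] - Kkd @ Kdd_inv @ f[drop]
    return K_red, f_red

# Small symmetric positive-definite example.
K = np.array([[4., -1., 0., 0.],
              [-1., 4., -1., 0.],
              [0., -1., 4., -1.],
              [0., 0., -1., 4.]])
f = np.array([1., 0., 0., 2.])
keep, drop = [0, 1], [2, 3]          # e.g. contact-region DOFs vs remaining DOFs
K_red, f_red = static_condensation(K, f, keep, drop)
x_keep = np.linalg.solve(K_red, f_red)
print(x_keep)   # equals the 'keep' entries of the full solve of K x = f
```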
Extra-dimensional models on the lattice
Knechtli, Francesco; Rinaldi, Enrico
2016-08-05
In this paper we summarize the ongoing effort to study extra-dimensional gauge theories with lattice simulations. In these models the Higgs field is identified with extra-dimensional components of the gauge field. The Higgs potential is generated by quantum corrections and is protected from divergences by the higher dimensional gauge symmetry. Dimensional reduction to four dimensions can occur through compactification or localization. Gauge-Higgs unification models are often studied using perturbation theory. Numerical lattice simulations are used to go beyond these perturbative expectations and to include nonperturbative effects. We describe the known perturbative predictions and their fate in the strongly-coupled regime for various extra-dimensional models.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
3D-Hydrogel Based Polymeric Nanoreactors for Silver Nano-Antimicrobial Composites Generation.
Soto-Quintero, Albanelly; Romo-Uribe, Ángel; Bermúdez-Morales, Víctor H; Quijada-Garrido, Isabel; Guarrotxena, Nekane
2017-08-01
This study underscores the development of Ag hydrogel nanocomposites, as smart substrates for antibacterial uses, via innovative in situ reactive and reduction pathways. To this end, two different synthetic strategies were used. Firstly, thiol-acrylate (PSA)-based hydrogels were obtained via thiol-ene and radical polymerization of polyethylene glycol (PEG) and polycaprolactone (PCL). As a second approach, polyurethane (PU)-based hydrogels were obtained by condensation polymerization of diisocyanates with PCL and PEG diols. These syntheses rendered active three-dimensional (3D) hydrogel matrices, which were used as nanoreactors for the in situ reduction of AgNO₃ to silver nanoparticles. The redox chemistry of the stannous catalyst in the PU hydrogel yielded the formation of spherical AgNPs, even at 4 °C in the absence of an external reductant, and an appropriately thiol-functionalized polymeric network promoted spherical AgNPs well dispersed throughout the PSA hydrogel network after heating the swollen hydrogel at 103 °C in the presence of a citrate reductant. The optical and swelling behaviors of both series of hydrogel nanocomposites were investigated as key factors involved in their antimicrobial efficacy over time. Lastly, the in vitro antibacterial activity of Ag-loaded hydrogels exposed to Pseudomonas aeruginosa and Escherichia coli strains indicated a noticeable sustained inhibitory effect, especially for the Ag-PU hydrogel nanocomposites, with bacterial growth inhibition capabilities for up to 120 h of cultivation.
NASA Technical Reports Server (NTRS)
Musick, John A.; Patterson, Mark R.; Dowd, Wesley W.
2002-01-01
Previous engineering research and development has documented the plausibility of applying biomimetic approaches to aerospace engineering. Past cooperation between the Virginia Institute of Marine Science (VIMS) and NASA focused on the drag reduction qualities of the microscale dermal denticles of shark skin. This technology has subsequently been applied to submarines and aircraft. The present study aims to identify and document the three-dimensional geometry of additional macroscale morphologies that potentially confer drag reducing hydrodynamic qualities upon marine animals and which could be applied to enhance the range and endurance of Uninhabited Aerial Vehicles (UAVs). Such morphologies have evolved over eons to maximize organismal energetic efficiency by reducing the energetic input required to maintain cruising speeds in the viscous marine environment. These drag reduction qualities are manifested in several groups of active marine animals commonly encountered by ongoing VIMS research programs: namely sharks, bony fishes such as tunas, and sea turtles. Through spatial data acquired by molding and digital imagery analysis of marine specimens provided by VIMS, NASA aims to construct scale models of these features and to test these potential drag reduction morphologies for application to aircraft design. This report addresses the efforts of VIMS and NASA personnel on this project between January and November 2001.
Reduction technique for tire contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1995-01-01
A reduction technique and a computational procedure are presented for predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of the reduction technique, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface.
Three-dimensional collimation of in-plane-propagating light using silicon micromachined mirror
NASA Astrophysics Data System (ADS)
Sabry, Yasser M.; Khalil, Diaa; Saadany, Bassam; Bourouina, Tarik
2014-03-01
We demonstrate light collimation of single-mode optical fibers using a deeply etched three-dimensional curved micromirror on a silicon chip. The three-dimensional curvature of the mirror is controlled by a process combining deep reactive ion etching and isotropic etching of silicon. The produced surface is astigmatic, with an out-of-plane radius of curvature that is about one half of the in-plane radius of curvature. For a 300-μm in-plane radius and an incident beam inclined in-plane at an angle of 45 degrees with respect to the principal axis, the reflected beam remains stigmatic, with about a 4.25-fold reduction in the beam expansion angle in free space and about a 12-dB reduction in propagation losses when received by a limited-aperture detector.
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation. Therefore, reduction of speckle noise is an essential task for improving the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used for the design of two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks. In this method, fan-shaped, diamond-shaped, and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and the stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and the PR condition is then expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with these linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity, and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, showing improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
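A one-dimensional sketch of the eigenfilter idea may help: the quadratic passband/stopband error is a quadratic form in the filter coefficients, so the optimal coefficient vector is the eigenvector of a small matrix associated with its minimum eigenvalue. The cutoff frequencies, length, and weighting below are illustrative only, and the 2-D fan-, diamond-, and checkerboard-shaped designs of the paper additionally impose the linear PR constraints mentioned above.

```python
import numpy as np

def eigenfilter_lowpass(M, wp, ws, alpha=0.5, ngrid=2000):
    """Length-(2M+1) linear-phase lowpass filter via the 1-D eigenfilter method.

    Amplitude response A(w) = c(w)^T b with c(w) = [1, cos w, ..., cos(M w)].
    Minimize alpha*(passband deviation) + (1-alpha)*(stopband energy), ||b|| = 1.
    """
    def cvec(w):
        return np.cos(np.outer(w, np.arange(M + 1)))        # shape (len(w), M+1)

    wpass = np.linspace(0.0, wp, ngrid)
    wstop = np.linspace(ws, np.pi, ngrid)
    c0 = cvec(np.array([0.0]))[0]                            # response vector at w = 0

    dpass = c0[None, :] - cvec(wpass)                        # deviation from A(0) in passband
    Ppass = dpass.T @ dpass * (wp / ngrid)
    cstop = cvec(wstop)
    Pstop = cstop.T @ cstop * ((np.pi - ws) / ngrid)

    P = alpha * Ppass + (1 - alpha) * Pstop
    evals, evecs = np.linalg.eigh(P)
    b = evecs[:, 0]                                          # smallest-eigenvalue eigenvector
    if b @ c0 < 0:
        b = -b                                               # fix sign so A(0) > 0

    # Convert b to the symmetric impulse response h[0..2M].
    h = np.zeros(2 * M + 1)
    h[M] = b[0]
    h[M + 1:] = b[1:] / 2.0
    h[:M] = h[M + 1:][::-1]
    return h

h = eigenfilter_lowpass(M=15, wp=0.3 * np.pi, ws=0.5 * np.pi)
print(h.shape)  # (31,)
```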
A complex noise reduction method for improving visualization of SD-OCT skin biomedical images
NASA Astrophysics Data System (ADS)
Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Khramov, Alexander G.
2014-05-01
In this paper we present an original method for noise reduction that improves the visualization quality of SD-OCT biomedical images of skin and tumors. The principal advantages of OCT are high resolution and the possibility of in vivo analysis. We propose a two-stage algorithm: 1) processing of the raw one-dimensional SD-OCT A-scans and 2) removal of noise from the resulting B(C)-scans. The standard mathematical processing of SD-OCT is unstable: if the CCD noise is 1.6% of the dynamic range, the resulting distortions already reach 25-40% of the dynamic range. At the first stage we use resampling of the A-scans and simple linear filters to reduce the amount of data and remove the noise of the CCD camera. The efficiency of this approach, its improved productivity, and the conservation of the axial resolution are shown. At the second stage we use effective algorithms based on the Hilbert-Huang transform for more accurate removal of noise peaks. The effectiveness of the proposed approach for visualization of malignant and benign skin tumors (melanoma, BCC, etc.) and a significant improvement of the SNR level for different methods of noise reduction are shown. We also consider modifications of this method depending on the specific hardware and software features of the OCT setup used. The basic version does not require any hardware modification of existing equipment. The effectiveness of the proposed method for 3D visualization of tissues can simplify medical diagnosis in oncology.
REBURNING THERMAL AND CHEMICAL PROCESSES IN A TWO-DIMENSIONAL PILOT-SCALE SYSTEM
The paper describes an experimental investigation of the thermal and chemical processes influencing NOx reduction by natural gas reburning in a two-dimensional pilot-scale combustion system. Reburning effectiveness for initial NOx levels of 50-500 ppm and reburn stoichiometric ra...
NASA Astrophysics Data System (ADS)
Dey, Pinkee; Suslov, Sergey A.
2016-12-01
A finite amplitude instability has been analysed to discover the exact mechanism leading to the appearance of stationary magnetoconvection patterns in a vertical layer of a non-conducting ferrofluid heated from the side and placed in an external magnetic field perpendicular to the walls. The physical results have been obtained using a version of a weakly nonlinear analysis that is based on the disturbance amplitude expansion. It enables a low-dimensional reduction of a full nonlinear problem in supercritical regimes away from a bifurcation point. The details of the reduction are given in comparison with traditional small-parameter expansions. It is also demonstrated that Squire's transformation can be introduced for higher-order nonlinear terms, thus reducing the full three-dimensional problem to its equivalent two-dimensional counterpart and enabling significant computational savings. The full three-dimensional instability patterns are subsequently recovered using the inverse transforms. The analysed stationary thermomagnetic instability is shown to occur as a result of a supercritical pitchfork bifurcation.
Three-dimensional mapping of the lateral ventricles in autism
Vidal, Christine N.; Nicolson, Rob; Boire, Jean-Yves; Barra, Vincent; DeVito, Timothy J.; Hayashi, Kiralee M.; Geaga, Jennifer A.; Drost, Dick J.; Williamson, Peter C.; Rajakumar, Nagalingam; Toga, Arthur W.; Thompson, Paul M.
2009-01-01
In this study, a computational mapping technique was used to examine the three-dimensional profile of the lateral ventricles in autism. T1-weighted three-dimensional magnetic resonance images of the brain were acquired from 20 males with autism (age: 10.1 ± 3.5 years) and 22 male control subjects (age: 10.7 ± 2.5 years). The lateral ventricles were delineated manually and ventricular volumes were compared between the two groups. Ventricular traces were also converted into statistical three-dimensional maps, based on anatomical surface meshes. These maps were used to visualize regional morphological differences in the thickness of the lateral ventricles between patients and controls. Although ventricular volumes measured using traditional methods did not differ significantly between groups, statistical surface maps revealed subtle, highly localized reductions in ventricular size in patients with autism in the left frontal and occipital horns. These localized reductions in the lateral ventricles may result from exaggerated brain growth early in life. PMID:18502618
Augustin, Moritz; Ladenbauer, Josef; Baumann, Fabian; Obermayer, Klaus
2017-06-01
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also reproduce well the stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the different model variants. Therefore we have made implementations available, as open source software, that allow the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation to be integrated numerically in efficient ways for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
Baumann, Fabian; Obermayer, Klaus
2017-01-01
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also reproduce well the stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the different model variants. Therefore we have made implementations available, as open source software, that allow the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation to be integrated numerically in efficient ways for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models. PMID:28644841
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.
The exchange interaction effects on magnetic properties of the nanostructured CoPt particles
NASA Astrophysics Data System (ADS)
Komogortsev, S. V.; Iskhakov, R. S.; Zimin, A. A.; Filatov, E. Yu.; Korenev, S. V.; Shubin, Yu. V.; Chizhik, N. A.; Yurkin, G. Yu.; Eremin, E. V.
2016-03-01
Various manifestations of exchange interaction effects in the magnetization curves of nanostructured CoPt particles are demonstrated and discussed. The inter-grain exchange constant A in the sponge-like agglomerates of crystallites is estimated as A = (7±1) pJ/m from the approach of magnetization to saturation, in good agreement with A = (6.6±0.5) pJ/m obtained from the Bloch T^{3/2} law. The fractal dimensionality of the exchange-coupled crystallite system in the porous medium of the disordered CoPt alloy, d = 2.60±0.18, was estimated from the approach-to-saturation curve. The coercive force decreases with temperature following a T^{3/2} law, which is assumed to be a consequence of the reduction of the magnetic anisotropy energy due to thermal spin-wave excitations in the investigated CoPt particles.
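For reference, the Bloch law invoked above has the standard spin-wave form (prefactor conventions vary between texts):

$$ M_s(T) = M_s(0)\left(1 - B\,T^{3/2}\right), \qquad B \propto \left(\frac{k_B}{D}\right)^{3/2}, $$

where the spin-wave stiffness D is proportional to the exchange constant A, so fitting the low-temperature decrease of M_s(T) provides an estimate of A that is independent of the approach-to-saturation analysis.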
Ji, Xiaonan; Machiraju, Raghu; Ritter, Alan; Yen, Po-Yin
2017-01-01
Systematic Reviews (SRs) of biomedical literature summarize evidence from high-quality studies to inform clinical decisions, but are time and labor intensive due to the large number of article collections. Article similarities established from textual features have been shown to assist in the identification of relevant articles, thus facilitating the article screening process efficiently. In this study, we visualized article similarities to extend its utilization in practical settings for SR researchers, aiming to promote human comprehension of article distributions and hidden patterns. To prompt an effective visualization in an interpretable, intuitive, and scalable way, we implemented a graph-based network visualization with three network sparsification approaches and a distance-based map projection via dimensionality reduction. We evaluated and compared three network sparsification approaches and the visualization types (article network vs. article map). We demonstrated the effectiveness in revealing article distribution and exhibiting clustering patterns of relevant articles with practical meanings for SRs.
Engesgaard, Peter; Kipp, Kenneth L.
1992-01-01
A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification has been observed by oxidation of pyrite. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m yr⁻¹, which agreed with calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.
Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction
NASA Astrophysics Data System (ADS)
Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.
1994-04-01
Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic learning, Kohonen Feature Maps are particularly suitable for the reduction of the high-dimensional data space that is the result of a dynamic gesture, and are thus implemented for this task.
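A minimal self-organizing (Kohonen) map sketch in Python shows how a high-dimensional gesture sample can be reduced to a two-dimensional grid coordinate. The grid size, learning-rate schedule, and random stand-in data are placeholders, not the configuration used in the paper.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny Kohonen map; returns a weight array of shape (gx, gy, dim)."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    dim = data.shape[1]
    W = rng.normal(size=(gx, gy, dim))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            t = step / n_steps
            lr = lr0 * (1.0 - t)                   # decaying learning rate
            sigma = sigma0 * (1.0 - t) + 0.5       # shrinking neighborhood radius
            # Best matching unit (grid cell whose weight vector is closest to x).
            bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (gx, gy))
            # Neighborhood-weighted update of all cells toward the sample.
            d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))[..., None]
            W += lr * h * (x - W)
            step += 1
    return W

# Toy "gesture" samples: 30-dimensional joint-angle snippets.
data = np.random.default_rng(1).normal(size=(200, 30))
W = train_som(data)
bmu = np.unravel_index(np.argmin(((W - data[0]) ** 2).sum(-1)), W.shape[:2])
print("sample 0 maps to grid cell", bmu)
```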
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.
Preventing Data Ambiguity in Infectious Diseases with Four-Dimensional and Personalized Evaluations
Iandiorio, Michelle J.; Fair, Jeanne M.; Chatzipanagiotou, Stylianos; Ioannidis, Anastasios; Trikka-Graphakos, Eleftheria; Charalampaki, Nikoletta; Sereti, Christina; Tegos, George P.; Hoogesteijn, Almira L.; Rivas, Ariel L.
2016-01-01
Background Diagnostic errors can occur, in infectious diseases, when anti-microbial immune responses involve several temporal scales. When responses span from nanosecond to week and larger temporal scales, any pre-selected temporal scale is likely to miss some (faster or slower) responses. Hoping to prevent diagnostic errors, a pilot study was conducted to evaluate a four-dimensional (4D) method that captures the complexity and dynamics of infectious diseases. Methods Leukocyte-microbial-temporal data were explored in canine and human (bacterial and/or viral) infections, with: (i) a non-structured approach, which measures leukocytes or microbes in isolation; and (ii) a structured method that assesses numerous combinations of interacting variables. Four alternatives of the structured method were tested: (i) a noise-reduction oriented version, which generates a single (one data point-wide) line of observations; (ii) a version that measures complex, three-dimensional (3D) data interactions; (iii) a non-numerical version that displays temporal data directionality (arrows that connect pairs of consecutive observations); and (iv) a full 4D (single line-, complexity-, directionality-based) version. Results In all studies, the non-structured approach revealed non-interpretable (ambiguous) data: observations numerically similar expressed different biological conditions, such as recovery and lack of recovery from infections. Ambiguity was also found when the data were structured as single lines. In contrast, two or more data subsets were distinguished and ambiguity was avoided when the data were structured as complex, 3D, single lines and, in addition, temporal data directionality was determined. The 4D method detected, even within one day, changes in immune profiles that occurred after antibiotics were prescribed. Conclusions Infectious disease data may be ambiguous. Four-dimensional methods may prevent ambiguity, providing earlier, in vivo, dynamic, complex, and personalized information that facilitates both diagnostics and selection or evaluation of anti-microbial therapies. PMID:27411058
Similarity solutions of some two-space-dimensional nonlinear wave evolution equations
NASA Technical Reports Server (NTRS)
Redekopp, L. G.
1980-01-01
Similarity reductions of the two-space-dimensional versions of the Korteweg-de Vries, modified Korteweg-de Vries, Benjamin-Davis-Ono, and nonlinear Schroedinger equations are presented, and some solutions of the reduced equations are discussed. Exact dispersive solutions of the two-dimensional Korteweg-de Vries equation are obtained, and the similarity solution of this equation is shown to be reducible to the second Painleve transcendent.
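For orientation, the best-known one-space-dimensional prototype of such a reduction is the scaling similarity solution of the modified Korteweg-de Vries equation, which (up to sign and scaling conventions) collapses the PDE to the second Painlevé equation:

$$ v_t - 6 v^2 v_x + v_{xxx} = 0, \qquad v(x,t) = (3t)^{-1/3}\, w(z), \quad z = x\,(3t)^{-1/3} \;\Longrightarrow\; w'' = 2 w^3 + z\, w + \alpha, $$

with α a constant of integration. The two-space-dimensional reductions discussed above proceed in the same spirit, with an additional similarity variable.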
Hidden symmetries of Eisenhart-Duval lift metrics and the Dirac equation with flux
NASA Astrophysics Data System (ADS)
Cariglia, Marco
2012-10-01
The Eisenhart-Duval lift allows embedding nonrelativistic theories into a Lorentzian geometrical setting. In this paper we study the lift from the point of view of the Dirac equation and its hidden symmetries. We show that dimensional reduction of the Dirac equation for the Eisenhart-Duval metric in general gives rise to the nonrelativistic Lévy-Leblond equation in lower dimension. We study in detail in which specific cases the lower dimensional limit is given by the Dirac equation, with scalar and vector flux, and the relation between lift, reduction, and the hidden symmetries of the Dirac equation. While there is a precise correspondence in the case of the lower dimensional massive Dirac equation with no flux, we find that for generic fluxes it is not possible to lift or reduce all solutions and hidden symmetries. As a by-product of this analysis, we construct new Lorentzian metrics with special tensors by lifting Killing-Yano and closed conformal Killing-Yano tensors and describe the general conformal Killing-Yano tensor of the Eisenhart-Duval lift metrics in terms of lower dimensional forms. Last, we show how, by dimensionally reducing the higher dimensional operators of the massless Dirac equation that are associated with shared hidden symmetries, it is possible to recover hidden symmetry operators for the Dirac equation with flux.
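For context, the Eisenhart-Duval lift of an n-dimensional nonrelativistic system with potential V(x,t) can be written, up to conventions, as the (n+2)-dimensional Lorentzian metric

$$ g = \delta_{ij}\, dx^i dx^j + 2\, dt\, dv - 2\, V(x,t)\, dt^2, $$

with ∂_v a covariantly constant null Killing vector. Null geodesics of g project onto the Newtonian trajectories, and reduction along v is the dimensional reduction by which the higher-dimensional massless Dirac operator is related to the lower-dimensional Lévy-Leblond or massive Dirac operators discussed above.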
Two dimensional thermo-optic beam steering using a silicon photonic optical phased array
NASA Astrophysics Data System (ADS)
Mahon, Rita; Preussner, Marcel W.; Rabinovich, William S.; Goetz, Peter G.; Kozak, Dmitry A.; Ferraro, Mike S.; Murphy, James L.
2016-03-01
Components for free space optical communication terminals such as lasers, amplifiers, and receivers have all seen substantial reduction in both size and power consumption over the past several decades. However, pointing systems, such as fast steering mirrors and gimbals, have remained large, slow and power-hungry. Optical phased arrays provide a possible solution for non-mechanical beam steering devices that can be compact and lower in power. Silicon photonics is a promising technology for phased arrays because it has the potential to scale to many elements and may be compatible with CMOS technology thereby enabling batch fabrication. For most free space optical communication applications, two-dimensional beam steering is needed. To date, silicon photonic phased arrays have achieved two-dimensional steering by combining thermo-optic steering, in-plane, with wavelength tuning by means of an output grating to give angular tuning, out-of-plane. While this architecture might work for certain static communication links, it would be difficult to implement for moving platforms. Other approaches have required N2 controls for an NxN element phased array, which leads to complexity. Hence, in this work we demonstrate steering using the thermo-optic effect for both dimensions with a simplified steering mechanism requiring only two control signals, one for each steering dimension.
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
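The on-the-fly interface sampling at the heart of CLS can be illustrated in one dimension: as a walker advances, the distance to the next material interface is drawn from an exponential chord-length distribution, with no memory of previously generated interfaces, which is exactly the correlation the PBS-type methods aim to restore. The sketch below uses placeholder cross sections and mean chord lengths, purely absorbing materials, and a uniformly sampled initial material for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters for a binary Markov mixture.
mean_chord = {0: 1.0, 1: 0.5}     # mean chord lengths of materials 0 and 1
sigma_t    = {0: 0.3, 1: 1.2}     # total (here purely absorbing) cross sections

def transmit_cls(slab_length, n_particles=20_000):
    """Monte Carlo transmission through a 1-D slab with on-the-fly chord sampling."""
    transmitted = 0
    for _ in range(n_particles):
        x, mat, alive = 0.0, rng.integers(0, 2), True   # initial material chosen uniformly
        while alive:
            d_collision = rng.exponential(1.0 / sigma_t[mat])
            d_interface = rng.exponential(mean_chord[mat])
            step = min(d_collision, d_interface)
            x += step
            if x >= slab_length:
                transmitted += 1
                alive = False
            elif d_interface < d_collision:
                mat = 1 - mat          # crossed a freshly sampled material interface
            else:
                alive = False          # absorbed
    return transmitted / n_particles

print(transmit_cls(5.0))
```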
Simulation of Fluid Flow and Collection Efficiency for an SEA Multi-element Probe
NASA Technical Reports Server (NTRS)
Rigby, David L.; Struk, Peter M.; Bidwell, Colin
2014-01-01
Numerical simulations of fluid flow and collection efficiency for a Science Engineering Associates (SEA) multi-element probe are presented. Simulation of the flow field was produced using the Glenn-HT Navier-Stokes solver. Three-dimensional unsteady results were produced and then time averaged for the collection efficiency results. Three grid densities were investigated to enable an assessment of grid dependence. Collection efficiencies were generated for three spherical particle sizes, 100, 20, and 5 micron in diameter, using the codes LEWICE3D and LEWICE2D. The free stream Mach number was 0.27, representing a velocity of approximately 86 m/s. It was observed that a reduction in velocity of about 15-20 percent occurred as the flow entered the shroud of the probe. Collection efficiency results indicate a reduction in collection efficiency as particle size is reduced. The reduction with particle size is expected; however, the results tended to be lower than previous results generated for isolated two-dimensional elements. The deviation from the two-dimensional results is more pronounced for the smaller particles and is likely due to the effect of the protective shroud.
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Rosen, Mark; Madabhushi, Anant
2008-03-01
Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contribute to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. Quantitative evaluation on 18 1.5 T prostate MR data against corresponding histology obtained from the multi-site ACRIN trials show a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method is successfully able to detect suspicious regions in the prostate.
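A stripped-down sketch of the consensus idea follows: several weak low-dimensional embeddings are generated (here, LLE with different neighborhood sizes), the object adjacencies they induce are averaged, and clustering is performed in the resulting consensus space. The feature extraction, the specific embedding schemes (LLE and Graph Embedding), the parameter values, and the clustering used in the paper all differ from this toy version, which uses random stand-in data.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding, MDS
from sklearn.metrics import pairwise_distances
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                    # stand-in for per-voxel texture features

# Several weak low-dimensional representations of the same objects.
adjacency_sum = np.zeros((len(X), len(X)))
embeddings = [5, 10, 20, 40]                      # illustrative neighborhood sizes
for k in embeddings:
    Y = LocallyLinearEmbedding(n_neighbors=k, n_components=3).fit_transform(X)
    D = pairwise_distances(Y)
    adjacency_sum += D / D.max()                  # normalized adjacencies from this embedding

consensus = adjacency_sum / len(embeddings)       # averaged object adjacencies

# Embed the consensus distances and partition the objects into candidate classes.
Y_cons = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(consensus)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Y_cons)
print(np.bincount(labels))
```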
Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle
NASA Astrophysics Data System (ADS)
Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.
2017-12-01
Terrestrial ecosystems absorb approximately 30% of the anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly: combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models they make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. Specifically, the terrestrial carbon cycle has been commonly represented by a series of carbon balance equations to track carbon influxes into and effluxes out of individual pools in earth system models. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes and mechanisms. We have developed matrix equations of several global land C cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE. Indeed, the matrix equation is generic and can be applied to other land carbon models. This matrix approach offers a suite of new diagnostic tools, such as the 3-dimensional (3-D) parameter space, traceability analysis, and variance decomposition, for uncertainty analysis. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how much model predictions are different. The latter can be traced to its source components by decomposing model predictions to a hierarchy of traceable components. Then, variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data as the matrix equation makes data assimilation computationally possible. We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
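A toy version of the matrix (pool-based) carbon balance equation described above is sketched below. The pools, allocation vector, transfer coefficients, turnover rates, and environmental scalar are illustrative placeholders, not values from any of the named models; one common way to write the equation is dX/dt = B u(t) − ξ(t) A K X(t), which is the form integrated here.

```python
import numpy as np

# Toy 3-pool model: foliage, litter, soil.
B = np.array([1.0, 0.0, 0.0])              # allocation of carbon input to pools
K = np.diag([0.5, 0.1, 0.01])              # baseline turnover rates (1/yr)
A = np.array([[ 1.0,  0.0, 0.0],           # transfers between pools:
              [-0.7,  1.0, 0.0],           # 70% of foliage loss enters litter
              [ 0.0, -0.3, 1.0]])          # 30% of litter loss enters soil

def step(X, u, xi, dt=0.05):
    """One explicit Euler step of the matrix carbon balance equation."""
    return X + dt * (B * u - xi * (A @ K @ X))

X = np.zeros(3)
for _ in range(20000):                      # spin up (~1000 yr) toward steady state
    X = step(X, u=10.0, xi=1.0)

# Diagnostics of the kind the matrix form makes explicit:
residence_time = np.linalg.inv(A @ K) @ B   # mean residence time per unit input
print("steady-state pools:", X.round(2))
print("carbon storage capacity (input x residence time):", (residence_time * 10.0).round(2))
```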
Progress in multi-dimensional upwind differencing
NASA Technical Reports Server (NTRS)
Vanleer, Bram
1992-01-01
Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
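The one-dimensional building block referred to above, the first-order upwind scheme for the linear convection equation u_t + a u_x = 0 with a > 0, can be written in a few lines; the grid, CFL number, and initial condition below are illustrative.

```python
import numpy as np

def upwind_convection(u0, a, dx, dt, nsteps):
    """First-order upwind scheme for u_t + a u_x = 0 with a > 0 (periodic domain)."""
    u = u0.copy()
    c = a * dt / dx                      # CFL number, must satisfy 0 <= c <= 1
    assert 0.0 <= c <= 1.0
    for _ in range(nsteps):
        u = u - c * (u - np.roll(u, 1))  # backward (upwind) difference
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)     # Gaussian pulse
u = upwind_convection(u0, a=1.0, dx=x[1] - x[0], dt=0.0025, nsteps=200)
print(u.max())   # amplitude decays: the scheme's first-order numerical diffusion
```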
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
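One standard dense-matrix technique for reducing the order of a stable linear state-space model (A, B, C) without greatly changing its response is balanced truncation; a sketch on a small synthetic system follows. It is meant only to illustrate the kind of model order reduction discussed above; the CFD-based linear models in question are far larger, sparse, and ill-conditioned, and, as noted, require more specialized approaches.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by balanced truncation."""
    # Controllability and observability Gramians: A Wc + Wc A^T + B B^T = 0, etc.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                  # s holds the Hankel singular values
    S = np.diag(1.0 / np.sqrt(s[:r]))
    T = Lc @ Vt[:r].T @ S                      # projection onto the balanced subspace
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# Synthetic stable test system of order 20 reduced to order 6.
rng = np.random.default_rng(0)
n = 20
A = np.diag(-np.linspace(0.1, 5.0, n)) + np.tril(rng.normal(size=(n, n)), -1)
B = rng.normal(size=(n, 2))
C = rng.normal(size=(2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=6)
print(hsv[:8].round(4))   # rapidly decaying Hankel singular values justify truncation
```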
Faster and less phototoxic 3D fluorescence microscopy using a versatile compressed sensing scheme
Woringer, Maxime; Darzacq, Xavier; Zimmer, Christophe
2017-01-01
Three-dimensional fluorescence microscopy based on Nyquist sampling of focal planes faces harsh trade-offs between acquisition time, light exposure, and signal-to-noise. We propose a 3D compressed sensing approach that uses temporal modulation of the excitation intensity during axial stage sweeping and can be adapted to fluorescence microscopes without hardware modification. We describe implementations on a lattice light sheet microscope and an epifluorescence microscope, and show that images of beads and biological samples can be reconstructed with a 5-10 fold reduction of light exposure and acquisition time. Our scheme opens a new door towards faster and less damaging 3D fluorescence microscopy. PMID:28788909
de la Vega de León, Antonio; Bajorath, Jürgen
2016-09-01
The concept of chemical space is of fundamental relevance for medicinal chemistry and chemical informatics. Multidimensional chemical space representations are coordinate-based. Chemical space networks (CSNs) have been introduced as a coordinate-free representation. A computational approach is presented for the transformation of multidimensional chemical space into CSNs. The design of transformation CSNs (TRANS-CSNs) is based upon a similarity function that directly reflects distance relationships in original multidimensional space. TRANS-CSNs provide an immediate visualization of coordinate-based chemical space and do not require the use of dimensionality reduction techniques. At low network density, TRANS-CSNs are readily interpretable and make it possible to evaluate structure-activity relationship information originating from multidimensional chemical space.
High dose bystander effects in spatially fractionated radiation therapy
Asur, Rajalakshmi; Butterworth, Karl T.; Penagaricano, Jose A.; Prise, Kevin M.; Griffin, Robert J.
2014-01-01
Traditional radiotherapy of bulky tumors has certain limitations. Spatially fractionated radiation therapy (GRID) and intensity modulated radiotherapy (IMRT) are examples of advanced modulated beam therapies that help in significant reductions in normal tissue damage. GRID refers to the delivery of a single high dose of radiation to a large treatment area that is divided into several smaller fields, while IMRT allows improved dose conformity to the tumor target compared to conventional three-dimensional conformal radiotherapy. In this review, we consider spatially fractionated radiotherapy approaches focusing on GRID and IMRT, and present complementary evidence from different studies which support the role of radiation induced signaling effects in the overall radiobiological rationale for these treatments. PMID:24246848
Li, Y L; Xu, D L; Fu, Y M; Zhou, J X
2011-09-01
This paper presents a systematic study on the stability of a two-dimensional vibration isolation floating raft system with time-delayed feedback control. Based on the generalized Sturm criterion, the critical control gain for the delay-independent stability region and the critical time delays for the stability switches are derived. The critical conditions can provide theoretical guidance for chaotification design for line spectra reduction. Numerical simulations verify the correctness of the approach. Bifurcation analyses reveal that chaotification is more likely to occur in the unstable region defined by these critical conditions, and that the stiffness of the floating raft and the mass ratio are the sensitive parameters for reducing the critical control gain.
Efficient numerical simulation of an electrothermal de-icer pad
NASA Technical Reports Server (NTRS)
Roelke, R. J.; Keith, T. G., Jr.; De Witt, K. J.; Wright, W. B.
1987-01-01
In this paper, a new approach to calculating the transient thermal behavior of an iced electrothermal de-icer pad was developed. The method of splines was used to obtain the temperature distribution within the layered pad. Splines were used in order to create a tridiagonal system of equations that could be directly solved by Gauss elimination. The Stefan problem was solved using the enthalpy method along with a recent implicit technique. Only one to three iterations were needed to locate the melt front during any time step. Computational times were shown to be greatly reduced over those of an existing one-dimensional procedure without any reduction in accuracy; the current technique was more than 10 times faster.
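Tridiagonal systems of the kind produced by the spline discretization can be solved in O(n) operations with the Thomas algorithm, the specialization of Gauss elimination mentioned above; the system in the sketch below is an illustrative stand-in, not the de-icer discretization itself.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                         # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):                # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a diagonally dominant system like those from a 1-D implicit heat step.
n = 6
a = np.full(n, -1.0); a[0] = 0.0     # a[0] unused
c = np.full(n, -1.0); c[-1] = 0.0    # c[-1] unused
b = np.full(n, 4.0)
d = np.arange(1.0, n + 1)
print(thomas(a, b, c, d))
```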
Tissue Cartography: Compressing Bio-Image Data by Dimensional Reduction
Heemskerk, Idse; Streichan, Sebastian J
2017-01-01
High data volumes produced by state-of-the-art optical microscopes encumber research. Taking advantage of the laminar structure of many biological specimens we developed a method that reduces data size and processing time by orders of magnitude, while disentangling signal. The Image Surface Analysis Environment that we implemented automatically constructs an atlas of 2D images for arbitrary shaped, dynamic, and possibly multi-layered “Surfaces of Interest”. Built-in correction for cartographic distortion assures no information on the surface is lost, making it suitable for quantitative analysis. We demonstrate our approach by application to 4D imaging of the D. melanogaster embryo and D. rerio beating heart. PMID:26524242
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Pandey, Sachin; Karra, Satish; Vesselinov, Velimir V.
2017-12-01
Groundwater contamination by heavy metals is a critical environmental problem for which in situ remediation is frequently the only viable treatment option. For such interventions, a multi-dimensional reactive transport model of relevant biogeochemical processes is invaluable. To this end, we developed a model, chrotran, for in situ treatment, which includes full dynamics for five species: a heavy metal to be remediated, an electron donor, biomass, a nontoxic conservative bio-inhibitor, and a biocide. Direct abiotic reduction by donor-metal interaction as well as donor-driven biomass growth and bio-reduction are modeled, along with crucial processes such as donor sorption, bio-fouling, and biomass death. Our software implementation handles heterogeneous flow fields, as well as arbitrarily many chemical species and amendment injection points, and features full coupling between flow and reactive transport. We describe installation and usage and present two example simulations demonstrating its unique capabilities. One simulation suggests an unorthodox approach to remediation of Cr(VI) contamination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott; Pandey, Sachin; Karra, Satish
Groundwater contamination by heavy metals is a critical environmental problem for which in situ remediation is frequently the only viable treatment option. For such interventions, a three-dimensional reactive transport model of relevant biogeochemical processes is invaluable. To this end, we developed a model, CHROTRAN, for in situ treatment, which includes full dynamics for five species: a heavy metal to be remediated, an electron donor, biomass, a nontoxic conservative bio-inhibitor, and a biocide. Direct abiotic reduction by donor-metal interaction as well as donor-driven biomass growth and bio-reduction are modeled, along with crucial processes such as donor sorption, bio-fouling and biomass death. Our software implementation handles heterogeneous flow fields, arbitrarily many chemical species and amendment injection points, and features full coupling between flow and reactive transport. We describe installation and usage and present two example simulations demonstrating its unique capabilities. One simulation suggests an unorthodox approach to remediation of Cr(VI) contamination.
Hansen, Scott; Pandey, Sachin; Karra, Satish; ...
2017-04-25
Groundwater contamination by heavy metals is a critical environmental problem for which in situ remediation is frequently the only viable treatment option. For such interventions, a three-dimensional reactive transport model of relevant biogeochemical processes is invaluable. To this end, we developed a model, CHROTRAN, for in situ treatment, which includes full dynamics for five species: a heavy metal to be remediated, an electron donor, biomass, a nontoxic conservative bio-inhibitor, and a biocide. Direct abiotic reduction by donor-metal interaction as well as donor-driven biomass growth and bio-reduction are modeled, along with crucial processes such as donor sorption, bio-fouling and biomass death. Our software implementation handles heterogeneous flow fields, arbitrarily many chemical species and amendment injection points, and features full coupling between flow and reactive transport. We describe installation and usage and present two example simulations demonstrating its unique capabilities. One simulation suggests an unorthodox approach to remediation of Cr(VI) contamination.
Replicating Human Hand Synergies Onto Robotic Hands: A Review on Software and Hardware Strategies.
Salvietti, Gionata
2018-01-01
This review reports the principal solutions proposed in the literature to reduce the complexity of the control and of the design of robotic hands, taking inspiration from the organization of the human brain. Several studies in neuroscience concerning the sensorimotor organization of the human hand proved that, despite the complexity of the hand, a few parameters can describe most of the variance in the patterns of configurations and movements. In other words, humans exploit a reduced set of parameters, known in the literature as synergies, to control their hands. In robotics, this dimensionality reduction can be achieved by coupling some of the degrees of freedom (DoFs) of the robotic hand, which results in a reduction of the number of inputs needed. Such coupling can be obtained at the software level, exploiting mapping algorithms to reproduce human hand organization, and at the hardware level, through either rigid or compliant physical couplings between the joints of the robotic hand. This paper reviews the main solutions proposed for both approaches.
Sparse partial least squares regression for simultaneous dimension reduction and variable selection
Chun, Hyonho; Keleş, Sündüz
2010-01-01
Partial least squares regression has been an alternative to ordinary least squares for handling multicollinearity in several areas of scientific research since the 1960s. It has recently gained much attention in the analysis of high-dimensional genomic data. We show that the known asymptotic consistency of the partial least squares estimator for a univariate response does not hold under the very large p and small n paradigm. We derive a similar result for multivariate response regression with partial least squares. We then propose a sparse partial least squares formulation which aims to simultaneously achieve good predictive performance and variable selection by producing sparse linear combinations of the original predictors. We provide an efficient implementation of sparse partial least squares regression and compare it with well-known variable selection and dimension reduction approaches via simulation experiments. We illustrate the practical utility of sparse partial least squares regression in a joint analysis of gene expression and genomewide binding data. PMID:20107611
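As a point of reference for the sparse formulation described above, the following is a minimal sketch of ordinary partial least squares dimension reduction in a large-p, small-n setting using scikit-learn's PLSRegression; the synthetic data and component count are illustrative assumptions, and this is not the authors' sparse estimator, which additionally imposes sparsity on the direction vectors.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 50, 500                          # small n, very large p, as in genomic settings
X = rng.normal(size=(n, p))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=n)   # only 5 informative predictors

# Ordinary PLS: project X onto a few latent components that covary with y.
pls = PLSRegression(n_components=3).fit(X, y)
y_hat = pls.predict(X).ravel()
print(np.corrcoef(y, y_hat)[0, 1])      # in-sample fit; the sparse variant also zeroes loadings
```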
Entropic manifestations of topological order in three dimensions
NASA Astrophysics Data System (ADS)
Bullivant, Alex; Pachos, Jiannis K.
2016-03-01
We evaluate the entanglement entropy of exactly solvable Hamiltonians corresponding to general families of three-dimensional topological models. We show that the modification to the entropic area law due to three-dimensional topological properties is richer than the two-dimensional case. In addition to the reduction of the entropy caused by a nonzero vacuum expectation value of contractible loop operators, a topological invariant emerges that increases the entropy if the model consists of nontrivially braiding anyons. As a result the three-dimensional topological entanglement entropy provides only partial information about the two entropic topological invariants.
Lohner, Svenja T; Becker, Dirk; Mangold, Klaus-Michael; Tiehm, Andreas
2011-08-01
This article for the first time demonstrates successful application of electrochemical processes to stimulate sequential reductive/oxidative microbial degradation of perchloroethene (PCE) in mineral medium and in contaminated groundwater. In a flow-through column system, hydrogen generation at the cathode supported reductive dechlorination of PCE to cis-dichloroethene (cDCE), vinyl chloride (VC), and ethene (ETH). Electrolytically generated oxygen at the anode allowed subsequent oxidative degradation of the lower chlorinated metabolites. Aerobic cometabolic degradation of cDCE proved to be the bottleneck for complete metabolite elimination. Total removal of chloroethenes was demonstrated for a PCE load of approximately 1.5 μmol/d. In mineral medium, long-term operation with stainless steel electrodes was demonstrated for more than 300 days. In contaminated groundwater, corrosion of the stainless steel anode occurred, whereas DSA (dimensionally stable anodes) proved to be stable. Precipitation of calcareous deposits was observed at the cathode, resulting in a higher voltage demand and reduced dechlorination activity. With DSA and groundwater from a contaminated site, complete degradation of chloroethenes in groundwater was obtained for two months thus demonstrating the feasibility of the sequential bioelectro-approach for field application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua
2016-12-28
Rational design and construction of Pt-based porous nanostructures with large mesopores have attracted significant attention because of their high surface area and more efficient mass transport. Hydrochloric acid-induced kinetic reduction of metal precursors in the presence of the soft template F-127 and the hard template of tellurium nanowires has been successfully demonstrated to construct one-dimensional hierarchical porous PtCu alloy nanostructures with large mesopores. Moreover, electrochemical experiments demonstrated that the resultant PtCu hierarchically porous nanostructures with optimized composition exhibit enhanced electrocatalytic performance for the oxygen reduction reaction.
NASA Astrophysics Data System (ADS)
Nadjafikhah, Mehdi; Jafari, Mehdi
2013-12-01
In this paper, the partially invariant solutions (PISs) method is applied in order to obtain new four-dimensional Einstein Walker manifolds. This method is based on subgroup classification for the symmetry group of partial differential equations (PDEs) and can be regarded as a generalization of the similarity reduction method. For this purpose, those cases of PISs which have the defect structure δ=1 and result from two-dimensional subalgebras are considered in the present paper. It is also shown that the obtained PISs are distinct from the invariant solutions obtained by the similarity reduction method.
Ji, Shuiwang
2013-07-11
The structured organization of cells in the brain plays a key role in its functional efficiency. This delicate organization is the consequence of unique molecular identity of each cell gradually established by precise spatiotemporal gene expression control during development. Currently, studies on the molecular-structural association are beginning to reveal how the spatiotemporal gene expression patterns are related to cellular differentiation and structural development. In this article, we aim at a global, data-driven study of the relationship between gene expressions and neuroanatomy in the developing mouse brain. To enable visual explorations of the high-dimensional data, we map the in situ hybridization gene expression data to a two-dimensional space by preserving both the global and the local structures. Our results show that the developing brain anatomy is largely preserved in the reduced gene expression space. To provide a quantitative analysis, we cluster the reduced data into groups and measure the consistency with neuroanatomy at multiple levels. Our results show that the clusters in the low-dimensional space are more consistent with neuroanatomy than those in the original space. Gene expression patterns and developing brain anatomy are closely related. Dimensionality reduction and visual exploration facilitate the study of this relationship.
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang
2017-12-01
Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Complex environmental factors, such as illumination, weather, and noise, cause considerable diversity in the visual characteristics of vehicle color, so vehicle color recognition in complex environments has been a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels, and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over state-of-the-art methods.
NASA Astrophysics Data System (ADS)
de Wit, Bernard; Reys, Valentin
2017-12-01
Supergravity with eight supercharges in a four-dimensional Euclidean space is constructed at the full non-linear level by performing an off-shell time-like reduction of five-dimensional supergravity. The resulting four-dimensional theory is realized off-shell with the Weyl, vector and tensor supermultiplets and a corresponding multiplet calculus. Hypermultiplets are included as well, but they are themselves only realized with on-shell supersymmetry. We also briefly discuss the non-linear supermultiplet. The off-shell reduction leads to a full understanding of the Euclidean theory. A complete multiplet calculus is presented along the lines of the Minkowskian theory. Unlike in Minkowski space, chiral and anti-chiral multiplets are real and supersymmetric actions are generally unbounded from below. Precisely as in the Minkowski case, where one has different formulations of Poincaré supergravity upon introducing different compensating supermultiplets, one can also obtain different versions of Euclidean supergravity.
Three-dimensional boundary layers approaching separation
NASA Technical Reports Server (NTRS)
Williams, J. C., III
1976-01-01
The theory of semi-similar solutions of the laminar boundary layer equations is applied to several flows in which the boundary layer approaches a three-dimensional separation line. The solutions obtained are used to deduce the nature of three-dimensional separation. It is shown that in these cases separation is of the "ordinary" type. A solution is also presented for a case in which a vortex is embedded within the three-dimensional boundary layer.
NASA Astrophysics Data System (ADS)
Chen, Wen; Wang, Fajie
Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require the explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to carry out the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, by using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open an unexplored territory in research.
National Defense Center of Excellence for Industrial Metrology and 3D Imaging
2012-10-18
validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT...of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan...the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the
Scalable Learning for Geostatistics and Speaker Recognition
2011-01-01
of prior knowledge of the model or due to improved robustness requirements). Both these methods have their own advantages and disadvantages. The use...application. If the data is well-correlated and low-dimensional, any prior knowledge available on the data can be used to build a parametric model. In the...absence of prior knowledge , non-parametric methods can be used. If the data is high-dimensional, PCA based dimensionality reduction is often the first
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-01-01
Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such a feature ranking method. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss, relative to the optimal choices, for using the same classifier in SVC feature ranking and final classification. PMID:25177107
Hong, Doo-Pyo; Joo, Sung-Yeon; Choi, Yoon-La; Park, Joo-Hung; Lazar, Alexander J.; Pollock, Raphael E.; Lev, Dina; Kim, Sung Joo
2014-01-01
Liposarcoma is one of the most common histologic types of soft tissue sarcoma and is frequently an aggressive cancer with poor outcome. Hence, alternative approaches other than surgical excision are necessary to improve the treatment of well-differentiated/dedifferentiated liposarcoma (WDLPS/DDLPS). For this reason, we performed a two-dimensional gel electrophoresis (2-DE) and matrix-assisted laser desorption/ionization-time of flight mass spectrometry/mass spectrometry (MALDI-TOF/MS) analysis to identify new factors for WDLPS and DDLPS. Among the selected candidate proteins, gankyrin, a known oncoprotein, showed a significantly elevated expression pattern and inversely low expression of p53/p21 in WDLPS and DDLPS tissues, suggesting possible utility as a new predictive factor. Moreover, inhibition of gankyrin not only led to a reduction of in vitro cell growth abilities, including cell proliferation, colony formation, and migration, but also of in vivo DDLPS cell tumorigenesis, perhaps via downregulation of the p53 tumor suppressor gene and its p21 target and also reduction of AKT/mTOR signal activation. This study identifies gankyrin, for the first time, as a new potential predictive and oncogenic factor of WDLPS and DDLPS, suggesting its potential to serve as a future LPS therapeutic target. PMID:25238053
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection.
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-11-01
Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such a feature ranking method. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss, relative to the optimal choices, for using the same classifier in SVC feature ranking and final classification.
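A minimal sketch of the SVC ranking idea described above, assuming a generic scikit-learn dataset; the Decision Tree base classifier, its depth, and the cross-validation settings are illustrative choices, not the study's exact protocol.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0)

# Score each feature by the cross-validated accuracy of a classifier trained on it alone.
scores = np.array([
    cross_val_score(clf, X[:, [j]], y, cv=5).mean() for j in range(X.shape[1])
])
ranking = np.argsort(scores)[::-1]       # best single features first
print(ranking[:10])
print(scores[ranking[:10]])
```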
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
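As a rough illustration of one of the three reduction routes named above, here is a hedged sketch of PCA followed by a simple classifier; the 256-bin byte-histogram features, the synthetic labels, and the Gaussian naive Bayes classifier are assumptions made for this example, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 256))    # e.g. 256-bin byte histograms of code sections (synthetic)
y = rng.integers(0, 2, 200)   # 1 = section assumed to contain cryptographic code (synthetic)

# Accuracy of a simple classifier as the number of retained dimensions grows.
for k in (2, 8, 32):
    model = make_pipeline(PCA(n_components=k), GaussianNB())
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{k} components: accuracy {acc:.2f}")
```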
Automating X-ray Fluorescence Analysis for Rapid Astrobiology Surveys.
Thompson, David R; Flannery, David T; Lanka, Ravi; Allwood, Abigail C; Bue, Brian D; Clark, Benton C; Elam, W Timothy; Estlin, Tara A; Hodyss, Robert P; Hurowitz, Joel A; Liu, Yang; Wade, Lawrence A
2015-11-01
A new generation of planetary rover instruments, such as PIXL (Planetary Instrument for X-ray Lithochemistry) and SHERLOC (Scanning Habitable Environments with Raman Luminescence for Organics and Chemicals) selected for the Mars 2020 mission rover payload, aims to map mineralogical and elemental composition in situ at microscopic scales. These instruments will produce large spectral cubes with thousands of channels acquired over thousands of spatial locations, a large potential science yield limited mainly by the time required to acquire a measurement after placement. A secondary bottleneck also faces mission planners after downlink; analysts must interpret the complex data products quickly to inform tactical planning for the next command cycle. This study demonstrates operational approaches to overcome these bottlenecks by specialized early-stage science data processing. Onboard, simple real-time systems can perform a basic compositional assessment, recognizing specific features of interest and optimizing sensor integration time to characterize anomalies. On the ground, statistically motivated visualization can make raw uncalibrated data products more interpretable for tactical decision making. Techniques such as manifold dimensionality reduction can help operators comprehend large databases at a glance, identifying trends and anomalies in data. These onboard and ground-side analyses can complement a quantitative interpretation. We evaluate system performance for the case study of PIXL, an X-ray fluorescence spectrometer. Experiments on three representative samples demonstrate improved methods for onboard and ground-side automation and illustrate new astrobiological science capabilities unavailable in previous planetary instruments. Key words: Dimensionality reduction; Planetary science; Visualization.
Computed tomography image-guided surgery in complex acetabular fractures.
Brown, G A; Willis, M C; Firoozbakhsh, K; Barmada, A; Tessman, C L; Montgomery, A
2000-01-01
Eleven complex acetabular fractures in 10 patients were treated by open reduction with internal fixation incorporating computed tomography image guided software intraoperatively. Each of the implants placed under image guidance was found to be accurate and without penetration of the pelvis or joint space. The setup time for the system was minimal. Accuracy in the range of 1 mm was found when registration was precise (eight cases) and was in the range of 3.5 mm when registration was only approximate (three cases). Added benefits included reduced intraoperative fluoroscopic time, less need for more extensive dissection, and obviation of additional surgical approaches in some cases. Compared with a series of similar fractures treated before this image guided series, the reduction in operative time was significant. For patients with complex anterior and posterior combined fractures, the average operation times with and without application of three-dimensional imaging technique were, respectively, 5 hours 15 minutes and 6 hours 14 minutes, revealing 16% less operative time for those who had surgery using image guidance. In the single column fracture group, the operation time for those with three-dimensional imaging application, was 2 hours 58 minutes and for those with traditional surgery, 3 hours 42 minutes, indicating 20% less operative time for those with imaging modality. Intraoperative computed tomography guided imagery was found to be an accurate and suitable method for use in the operative treatment of complex acetabular fractures with substantial displacement.
A novel phase assignment protocol and driving system for a high-density focused ultrasound array.
Caulfield, R Erich; Yin, Xiangtao; Juste, Jose; Hynynen, Kullervo
2007-04-01
Currently, most phased-array systems intended for therapy are one-dimensional (1-D) and use between 5 and 200 elements, with a few two-dimensional (2-D) systems using several hundred elements. The move toward lambda/2 interelement spacing, which provides complete 3-D beam steering, would require a large number of closely spaced elements (0.15 mm to 3 mm). A solution to the resulting problem of cost and cable assembly size, which this study examines, is to quantize the phases available at the array input. By connecting elements with similar phases to a single wire, a significant reduction in the number of incoming lines can be achieved while maintaining focusing and beam steering capability. This study has explored the feasibility of such an approach using computer simulations and experiments with a test circuit driving a 100-element linear array. Simulation results demonstrated that adequate focusing can be obtained with only four phase signals without large increases in the grating lobes or the dimensions of the focus. Experiments showed that the method can be implemented in practice, and adequate focusing can be achieved with four phase signals with a reduction of 20% in the peak pressure amplitude squared when compared with the infinite-phase resolution case. Results indicate that the use of this technique would make it possible to drive more than 10,000 elements with 33 input lines. The implementation of this method could have a large impact on ultrasound therapy and diagnostic devices.
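To make the phase-quantization idea above concrete, here is a hedged sketch that focuses a linear array by conjugate phasing and then snaps each element's phase to one of four discrete values, comparing the focal intensity; the geometry, frequency, element count, and sound speed are illustrative assumptions, not the study's hardware, and the field model is a simple coherent point-source sum.

```python
import numpy as np

c, f = 1500.0, 1.0e6                      # assumed sound speed (m/s) and frequency (Hz)
k = 2 * np.pi * f / c
x = (np.arange(100) - 49.5) * 0.75e-3     # 100 elements on a 0.75 mm pitch (assumed)
focus_depth = 0.03                        # focal point 30 mm in front of the array

r = np.hypot(x, focus_depth)              # element-to-focus distances
phi = (-k * r) % (2 * np.pi)              # ideal (infinite-resolution) conjugate phases
phi_q = (np.round(phi / (np.pi / 2)) * (np.pi / 2)) % (2 * np.pi)   # 4-level quantization

p_ideal = np.abs(np.sum(np.exp(1j * (k * r + phi)) / r))
p_quant = np.abs(np.sum(np.exp(1j * (k * r + phi_q)) / r))
print(p_quant ** 2 / p_ideal ** 2)        # fraction of peak pressure-squared retained
```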
A low-dimensional approach to closed-loop control of a Mach 0.6 jet
NASA Astrophysics Data System (ADS)
Low, Kerwin R.; Berger, Zachary P.; Kostka, Stanislav; ElHadidi, Basman; Gogineni, Sivaram; Glauser, Mark N.
2013-04-01
Simultaneous time-resolved measurements of the near-field hydrodynamic pressure field, 2-component streamwise velocity field, and far-field acoustics are taken for an un-heated, axisymmetric Mach 0.6 jet in co-flow. Synthetic jet actuators placed around the periphery of the nozzle lip provide localized perturbations to the shear layer. The goal of this study was to develop an understanding of how the acoustic nature of the jet responds to unsteady shear layer excitation, and subsequently how this can be used to reduce the far-field noise. Review of the cross-correlations between the most energetic low-order spatial Fourier modes of the pressure and the far-field region reveals that mode 0 has a strong correlation and mode 1 has a weak correlation with the far-field. These modes are emulated with the synthetic jet array and used as drivers of the developing shear layer. In open loop forcing configurations, there is energy transfer among spatial scales, enhanced mixing, a reconfiguration of the low-dimensional spatial structure, and an increase in the overall sound pressure level (OASPL). In the closed loop configuration, changes to these quantities are more subtle but there is a reduction in the overall fluctuating sound pressure level OASPLf by 1.35 dB. It is argued that this reduction is correlated with the closed loop control feeding back the dynamical low-order information measured in the largest noise producing region.
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method for obtaining a linear combination that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and the need for a prudent choice of the smoothing parameter. We apply the proposed AucPR to gene selection and classification using four real microarray datasets and synthetic data. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the non-parametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily implementable linear classifier, AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
The Dimensionality of Cognitive Structure: A MIRT Approach and the Use of Subscores
ERIC Educational Resources Information Center
Cheng, Yi-Ling
2016-01-01
The present study explored the dimensionality of cognitive structure from two approaches. The first approach used a famous relation between Visual Spatial Working Memory (VSWM) and calculation to demonstrate the multidimensional item response analyses when true dimensions are unknown. The second approach explored the detectability of dimensions by…
The University Münster Model Surgery System for Orthognathic Surgery. Part II -- KD-MMS.
Ehmer, Ulrike; Joos, Ulrich; Ziebura, Thomas; Flieger, Stefanie; Wiechmann, Dirk
2013-01-04
Model surgery is an integral part of the planning procedure in orthognathic surgery. Most concepts comprise cutting the dental cast off its socket. The standardized spacer plates of the KD-MMS provide a non-destructive, reversible and reproducible means of maxillary and/or mandibular plaster cast separation. In the course of development of the system, various articulator types were evaluated with regard to their capability to realize the concepts comprised in the KD-MMS. Special attention was dedicated to the ability to perform three-dimensional displacements without cutting of plaster casts. Various utilities were developed to facilitate maxillary displacement in accordance with the planning. Objectives of this development comprised the ability to implement the values established in the course of two-dimensional cephalometric planning. The KD-MMS system comprises a set of hardware components as well as a defined procedure. Essential hardware components are red spacer and blue mounting plates. The blue mounting plates replace the standard yellow SAM mounting elements. The red spacers provide a defined leeway of 8 mm for three-dimensional movements. The non-destructive approach of the KD-MMS makes it possible to conduct different model surgeries with the same plaster casts as well as to restore the initial, pre-surgical situation at any time. Thereby, surgical protocol generation and gnathologic splint construction are facilitated. The KD-MMS hardware components, in conjunction with the defined procedures, are capable of increasing the efficiency and accuracy of model surgery and splint construction. In cases where different surgical approaches need to be evaluated in the course of model surgery, a significant reduction of chair time may be achieved.
Novel gene sets improve set-level classification of prokaryotic gene expression data.
Holec, Matěj; Kuželka, Ondřej; Železný, Filip
2015-10-28
Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will make it possible to learn more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
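To make the set-level idea concrete, here is a minimal sketch in which gene-level expression vectors are collapsed into gene-set features (by averaging each set's genes, one simple choice among several) before training a classifier; the synthetic expression matrix, labels, and the three gene sets are assumptions made for illustration, not the paper's regulatory-interaction sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 300))                     # samples x genes (synthetic expression)
y = rng.integers(0, 2, 80)                         # synthetic phenotype labels
gene_sets = [range(0, 40), range(40, 120), range(120, 300)]   # hypothetical gene sets

# Collapse gene-level features into one feature per gene set (here: the set mean).
X_set = np.column_stack([X[:, list(s)].mean(axis=1) for s in gene_sets])
print(cross_val_score(LogisticRegression(), X_set, y, cv=5).mean())
```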
An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals to sensors and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high-dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high-dimensional systems, and this work leverages these results and applies this approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high-dimensional data. For example, DRL methods have been applied to image processing in autonomous driving, where a 256x256 RGB image has 196608 input values (256*256*3=196608), which is very high dimensional, and deep learning approaches routinely take images like this as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high-dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
Drug-target interaction prediction using ensemble learning and dimensionality reduction.
Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong
2017-10-01
Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
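Below is a hedged sketch of the ensemble recipe outlined above (random feature subspacing, a dimensionality reduction step, and base learners whose scores are averaged), loosely in the spirit of the EnsemKRR variant; the synthetic drug-target features, the use of PCA as the reduction step, the subspace and ensemble sizes, and scoring on the training data are all assumptions made for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.random((300, 500))             # synthetic drug-target pair features
y = rng.integers(0, 2, 300)            # 1 = interacting pair (synthetic labels)

scores = np.zeros(len(y))
n_models = 10
for m in range(n_models):
    cols = rng.choice(X.shape[1], size=100, replace=False)   # random feature subspace
    Z = PCA(n_components=20).fit_transform(X[:, cols])        # reduce the subspace
    model = KernelRidge(kernel="rbf", alpha=1.0).fit(Z, y)    # one base learner
    scores += model.predict(Z)
scores /= n_models                     # averaged ensemble scores (in-sample, for brevity)
```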
Jeon, Sangchoon; Walkup, John T; Woods, Douglas W.; Peterson, Alan; Piacentini, John; Wilhelm, Sabine; Katsovich, Lily; McGuire, Joseph F.; Dziura, James; Scahill, Lawrence
2014-01-01
Objective: To compare three statistical strategies for classifying positive treatment response based on a dimensional measure (Yale Global Tic Severity Scale [YGTSS]) and a categorical measure (Clinical Global Impression-Improvement [CGI-I]). Method: Subjects (N=232; 69.4% male; ages 9-69 years) with Tourette syndrome or chronic tic disorder participated in one of two 10-week, randomized controlled trials comparing behavioral treatment to supportive therapy. The YGTSS and CGI-I were rated by clinicians blind to treatment assignment. We examined the percent reduction in the YGTSS-Total Tic Score (TTS) against Much Improved or Very Much Improved on the CGI-I, computed a signal detection analysis (SDA) and built a mixture model to classify dimensional response based on the change in the YGTSS-TTS. Results: A 25% decrease on the YGTSS-TTS predicted positive response on the CGI-I during the trial. The SDA showed that a 25% reduction in the YGTSS-TTS provided optimal sensitivity (87%) and specificity (84%) for predicting positive response. Using a mixture model without consideration of the CGI-I, the dimensional response was defined by a 23% (or greater) reduction on the YGTSS-TTS. The odds ratio (OR) of positive response (OR=5.68, 95% CI=[2.99, 10.78]) on the CGI-I for the behavioral intervention was greater than that for the dimensional response (OR=2.86, 95% CI=[1.65, 4.99]). Conclusion: A 25% reduction on the YGTSS-TTS is highly predictive of positive response by all three analytic methods. For trained raters, however, tic severity alone does not drive the classification of positive response. PMID:24001701
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
The use of 3D-printed titanium mesh tray in treating complex comminuted mandibular fractures
Ma, Junli; Ma, Limin; Wang, Zhifa; Zhu, Xiongjie; Wang, Weijian
2017-01-01
Abstract Rationale: Precise bony reduction and reconstruction of optimal contour in treating comminuted mandibular fractures are very difficult using traditional techniques and devices. The aim of this report is to introduce our experiences in using virtual surgery and three-dimensional (3D) printing techniques in treating this clinical challenge. Patient concerns: A 26-year-old man presented with severe trauma in the maxillofacial area due to a fall from height. Diagnosis: Computed tomography images revealed middle face fractures and a comminuted mandibular fracture including bilateral condyles. Interventions and outcomes: The computed tomography data were used to construct 3D cranio-maxillofacial models; then the displaced bone fragments were virtually reduced. On the basis of the finalized model, a customized titanium mesh tray was designed and fabricated using selective laser melting technology. During the surgery, a submandibular approach was adopted to repair the mandibular fracture. The reduction and fixation were performed according to the preoperative plan, and the bone defects in the mental area were reconstructed with an iliac bone graft. The 3D-printed mesh tray served as an intraoperative template and carrier of the bone graft. The healing process was uneventful, and the patient was satisfied with the mandible contour. Lessons: Virtual surgical planning combined with 3D printing technology enables the surgeon to visualize the reduction process preoperatively and to guide intraoperative reduction, making the reduction less time consuming and more precise. A 3D-printed titanium mesh tray can provide more satisfactory esthetic outcomes in treating complex comminuted mandibular fractures. PMID:28682875
OBJECTIVE REDUCTION OF THE SPACE-TIME DOMAIN DIMENSIONALITY FOR EVALUATING MODEL PERFORMANCE
In the United States, photochemical air quality models are the principal tools used by governmental agencies to develop emission reduction strategies aimed at achieving National Ambient Air Quality Standards (NAAQS). Before they can be applied with confidence in a regulatory sett...
A fast multi-resolution approach to tomographic PIV
NASA Astrophysics Data System (ADS)
Discetti, Stefano; Astarita, Tommaso
2012-03-01
Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional, non-intrusive anemometric measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, which are well suited to handle the problem of a limited number of views but are computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a relevant reduction in the memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performance in terms of accuracy.
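For readers unfamiliar with the MART update at the core of the reconstruction step discussed above, here is a minimal sketch on a tiny synthetic system W f = p; the random weighting matrix, relaxation factor, and iteration count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_vox = 40, 100
W = rng.random((n_rays, n_vox))            # line-of-sight weights (synthetic)
f_true = rng.random(n_vox)                 # "true" intensity distribution
p = W @ f_true                             # recorded projections

f = np.ones(n_vox)                         # MART needs a positive first guess
mu = 0.5                                   # relaxation factor
for _ in range(20):
    for i in range(n_rays):
        ratio = p[i] / max(W[i] @ f, 1e-12)
        f *= ratio ** (mu * W[i])          # multiplicative, entry-wise update
print(np.linalg.norm(f - f_true) / np.linalg.norm(f_true))   # relative reconstruction error
```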
Global Interior Robot Localisation by a Colour Content Image Retrieval System
NASA Astrophysics Data System (ADS)
Chaari, A.; Lelandais, S.; Montagne, C.; Ahmed, M. Ben
2007-12-01
We propose a new global localisation approach to determine a coarse position of a mobile robot in structured indoor space using colour-based image retrieval techniques. We use an original method of colour quantisation based on the baker's transformation to extract a two-dimensional colour palette combining space and vicinity-related information with the colourimetric content of the original image. We conceive several retrieval approaches built around a specific similarity measure that integrates the spatial organisation of colours in the palette. The baker's transformation provides a quantisation of the image into a space where colours that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image, whereas the similarity measure provides partial invariance to translation, small changes in viewpoint, and scale factor. In addition to this study, we developed a hierarchical search module based on the logical classification of images by room. This hierarchical module reduces the indoor search space and improves our system's performance. Results are then compared with those obtained using colour histograms with several similarity measures. In this paper, we focus on colour-based features to describe indoor images. A finalised system must obviously integrate other types of signature, such as shape and texture.
Simplex volume analysis for finding endmembers in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.
2015-05-01
Using maximal simplex volume as an optimality criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is calculated. It turns out that the issue of calculating simplex volume is much more complicated and involved than one may think. This paper investigates this issue from two different aspects, geometric structure and eigen-analysis. The geometric structure approach derives the volume from the simplex structure itself, multiplying its base by its height. On the other hand, eigen-analysis takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is required is rank-deficient. To deal with this problem two methods are generally considered. One is to perform data dimensionality reduction to make the matrix full rank. The drawback of this method is that the original volume is shrunk, so the volume found for a dimensionality-reduced simplex is not the true original simplex volume. Another is to use singular value decomposition (SVD) to find singular values for calculating the simplex volume. The dilemma of this method is its instability in numerical calculations. This paper explores all three of these methods for simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
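As a concrete reference for the two calculation routes contrasted above, here is a hedged sketch that computes the volume of a simplex both from its edge vectors (a Gram-determinant form of the base-times-height route) and from pairwise distances alone via the Cayley-Menger determinant; the right-triangle test case is an illustrative assumption.

```python
import numpy as np
from math import factorial

def volume_gram(vertices):
    """Volume of the simplex spanned by (n+1) vertices, from its edge vectors."""
    V = np.asarray(vertices, dtype=float)
    E = V[1:] - V[0]                                   # n edge vectors
    n = E.shape[0]
    return np.sqrt(max(np.linalg.det(E @ E.T), 0.0)) / factorial(n)

def volume_cayley_menger(vertices):
    """Same volume from pairwise distances only (Cayley-Menger determinant)."""
    V = np.asarray(vertices, dtype=float)
    n = V.shape[0] - 1
    D2 = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=-1)   # squared distances
    B = np.ones((n + 2, n + 2))
    B[0, 0] = 0.0
    B[1:, 1:] = D2
    cm = (-1) ** (n + 1) / (2 ** n * factorial(n) ** 2) * np.linalg.det(B)
    return np.sqrt(max(cm, 0.0))

tri = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]             # right triangle with area 0.5
print(volume_gram(tri), volume_cayley_menger(tri))     # both should print 0.5
```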
Dimensionality Analysis of "CBAL"™ Writing Tests. Research Report. ETS RR-13-10
ERIC Educational Resources Information Center
Fu, Jianbin; Chung, Seunghee; Wise, Maxwell
2013-01-01
The Cognitively Based Assessment of, for, and as Learning ("CBAL"™) research initiative is aimed at developing an innovative approach to K-12 assessment based on cognitive competency models. Because the choice of scoring and equating approaches depends on test dimensionality, the dimensional structure of CBAL tests must be understood.…
NASA Astrophysics Data System (ADS)
Baier, B. C.; Brune, W. H.; Miller, D. O.; Lefer, B. L.
2015-12-01
Tropospheric ozone (O3) is a secondary pollutant that has harmful effects on human and plant life. The climate and urban emissions in Houston, TX and Denver, CO can be conducive to significant ozone production and thus high ozone events. Tighter government strategies for ozone mitigation have been proposed, which involve reducing the current EPA eight-hour ozone standard from 75 ppb to 65-70 ppb. These strategies rely on the reduction of ozone precursors in order to decrease the ozone production rate, P(O3). The changes in the ozone concentration at a certain location depend upon P(O3), so decreasing P(O3) can decrease ozone levels provided that ozone has not been transported from other areas. Air quality models test reduction strategies before they are implemented, locate ozone sources, and predict ozone episodes. Traditionally, P(O3) has been calculated by models. However, large uncertainties in model emissions inventories, chemical mechanisms, and meteorology can reduce confidence in this approach. A new instrument, the Measurement of Ozone Production Sensor (MOPS), directly measures P(O3) and can provide an alternate approach to determining P(O3). An updated version of the Penn State MOPS (MOPSv2.0) was deployed to Houston, TX and Denver, CO as a part of NASA's DISCOVER-AQ field campaign in the summers of 2013 and 2014, respectively. We present MOPS directly measured P(O3) rates from these areas, as well as comparisons to zero-dimensional and three-dimensional modeled P(O3) using the RACM2 and MCMv2.2 mechanisms. These comparisons demonstrate the potential of the MOPS to test and evaluate model-derived P(O3), to advance the understanding of model chemical mechanisms, and to improve predictions of high ozone events.
Nonparametric regression applied to quantitative structure-activity relationships
Constans; Hirst
2000-03-01
Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
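For readers who want a concrete picture of the simplest regressor named above, here is a minimal sketch of a Nadaraya-Watson kernel regressor on synthetic one-dimensional data; the Gaussian kernel, bandwidth, and toy response are illustrative assumptions, not the study's QSAR descriptors.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=0.3):
    """Locally weighted average with a Gaussian kernel of bandwidth h."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 80))
y = np.sin(x) + 0.2 * rng.normal(size=x.size)     # noisy toy "activity" response
x_new = np.linspace(-3, 3, 200)
y_hat = nadaraya_watson(x, y, x_new)              # smooth nonparametric fit
```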
NASA Astrophysics Data System (ADS)
Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.
2001-02-01
We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees, which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in fewer than four dimensions can be visualized directly using iso-contours and iso-surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogs. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
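The first two stages of the pipeline described above (a low-dimensional representation followed by a kernel density estimate converted into a free energy) can be sketched as follows; the synthetic two-basin "configurations", the use of scikit-learn's MDS and SciPy's Gaussian KDE, and the unit of kT are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Two synthetic "basins" of configurations in a 30-dimensional space.
X = np.vstack([rng.normal(0.0, 1.0, (200, 30)), rng.normal(3.0, 1.0, (200, 30))])

Z = MDS(n_components=2, random_state=0).fit_transform(X)   # low-dimensional representation
kde = gaussian_kde(Z.T)                                     # kernel estimate of the density
rho = kde(Z.T)
F = -np.log(rho)                                            # free energy in units of kT
```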
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with `low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
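A hedged sketch of the kind of simulation the model above suggests: a one-dimensional signal projected into many dimensions with additive noise, followed by inspection of the eigenspectrum of the sample correlation matrix; the dimensions, loadings, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 200, 50, 0.5
s = rng.normal(size=n)                        # hidden one-dimensional signal
loadings = rng.uniform(0.5, 1.0, size=p)      # how strongly each variable carries the signal
X = np.outer(s, loadings) + sigma * rng.normal(size=(n, p))

C = np.corrcoef(X, rowvar=False)              # p x p sample correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
print(eigvals[:5])                            # one "spiked" eigenvalue well above the bulk
```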
SPReM: Sparse Projection Regression Model For High-dimensional Linear Regression *
Sun, Qiang; Zhu, Hongtu; Liu, Yufeng; Ibrahim, Joseph G.
2014-01-01
The aim of this paper is to develop a sparse projection regression modeling (SPReM) framework to perform multivariate regression modeling with a large number of responses and a multivariate covariate of interest. We propose two novel heritability ratios to simultaneously perform dimension reduction, response selection, estimation, and testing, while explicitly accounting for correlations among multivariate responses. Our SPReM is devised to specifically address the low statistical power issue of many standard statistical approaches, such as Hotelling's T2 test statistic or a mass univariate analysis, for high-dimensional data. We formulate the estimation problem of SPReM as a novel sparse unit rank projection (SURP) problem and propose a fast optimization algorithm for SURP. Furthermore, we extend SURP to the sparse multi-rank projection (SMURP) by adopting a sequential SURP approximation. Theoretically, we have systematically investigated the convergence properties of SURP and the convergence rate of SURP estimates. Our simulation results and real data analysis have shown that SPReM outperforms other state-of-the-art methods. PMID:26527844
Hou, Chao; Lang, Xing-You; Han, Gao-Feng; Li, Ying-Qi; Zhao, Lei; Wen, Zi; Zhu, Yong-Fu; Zhao, Ming; Li, Jian-Chen; Lian, Jian-She; Jiang, Qing
2013-01-01
Nanoarchitectured electroactive materials can boost rates of Li insertion/extraction, showing genuine potential to increase the power output of Li-ion batteries. However, electrodes assembled from low-dimensional nanostructured transition metal oxides by conventional approaches suffer from dramatic reductions in energy capacity owing to sluggish ion and electron transport kinetics. Here we report that flexible bulk electrodes, made of a three-dimensional bicontinuous nanoporous Cu/MnO2 hybrid and seamlessly integrated with a Cu solid current collector, substantially optimize the Li storage behavior of the constituent MnO2. As a result of the unique integration of the solid/nanoporous hybrid architecture, which simultaneously enhances the electron transport of MnO2, facilitates fast ion diffusion and accommodates large volume changes on Li insertion/extraction of MnO2, the supported MnO2 exhibits a stable capacity as high as ~1100 mA h g−1 for 1000 cycles, and ultrahigh charge/discharge rates. This makes the environmentally friendly and low-cost electrode a promising anode for high-performance Li-ion battery applications. PMID:24096928
Chaos and Robustness in a Single Family of Genetic Oscillatory Networks
Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.
2014-01-01
Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common for systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions. We find that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs' correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly-regular periods likely have solely negative feedback. PMID:24667178
NASA Technical Reports Server (NTRS)
Balas, M. J.; Kaufman, H.; Wen, J.
1985-01-01
A command generator tracker approach to model-following control of linear distributed parameter systems (DPS) whose dynamics are described on infinite-dimensional Hilbert spaces is presented. This method generates finite-dimensional controllers capable of exponentially stable tracking of the reference trajectories when certain ideal trajectories are known to exist for the open-loop DPS; we present conditions for the existence of these ideal trajectories. An adaptive version of this type of controller is also presented and shown to achieve (in some cases, asymptotically) stable finite-dimensional control of the infinite-dimensional DPS.
Unimodular gravity and the lepton anomalous magnetic moment at one-loop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martín, Carmelo P., E-mail: carmelop@fis.ucm.es
We work out the one-loop contribution to the lepton anomalous magnetic moment coming from Unimodular Gravity. We use Dimensional Regularization and Dimensional Reduction to carry out the computations. In either case, we find that Unimodular Gravity gives rise to the same one-loop correction as that of General Relativity.
Local reduction of certain wave operators to one-dimensional form
NASA Technical Reports Server (NTRS)
Roe, Philip
1994-01-01
It is noted that certain common linear wave operators have the property that linear variation of the initial data gives rise to one-dimensional evolution in a plane defined by time and some direction in space. The analysis is given for operators arising in acoustics, electromagnetics, elastodynamics, and an abstract system.
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the number of neighbors; 2) the algorithm encounters the well-known small-sample-size (SSS) problem; and 3) the algorithm de-emphasizes small-distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In this framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and is thus more robust. The positive-definite property of the matrix exponential deals with the SSS problem, and its decay behavior places greater emphasis on small-distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI datasets, and the Georgia Tech face database show that the proposed framework can well address the issues mentioned above.
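A hedged sketch of the central idea (replacing a graph Laplacian or similarity matrix by a matrix exponential before the spectral embedding step); this is a generic heat-kernel style illustration, not the authors' exact exponential extensions of LPP, UDP, or MFA, and the kernel bandwidth and toy data are assumptions.

    import numpy as np
    from scipy.linalg import expm
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 10))              # toy data matrix (assumed)

    # Pairwise similarity with a Gaussian kernel (bandwidth is an assumption)
    W = np.exp(-cdist(X, X, "sqeuclidean") / 2.0)
    L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian

    # exp(-L) is symmetric positive definite, which sidesteps the SSS problem
    # and can be read as a weighted sum over random-walk paths on the similarity graph.
    expL = expm(-L)

    vals, vecs = np.linalg.eigh(expL)
    embedding = vecs[:, -2:]                        # two-dimensional embedding coordinates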
Network embedding-based representation learning for single cell RNA-seq data.
Li, Xiangyu; Chen, Weizheng; Chen, Yang; Zhang, Xuegong; Gu, Jin; Zhang, Michael Q
2017-11-02
Single cell RNA-seq (scRNA-seq) techniques can reveal valuable insights into cell-to-cell heterogeneity. Projection of high-dimensional data into a low-dimensional subspace is in general a powerful strategy for mining such big data. However, scRNA-seq suffers from higher noise and lower coverage than traditional bulk RNA-seq, bringing new computational difficulties. One major challenge is how to deal with the frequent drop-out events. These events, usually caused by the stochastic burst effect in gene transcription and the technical failure of RNA transcript capture, often cause traditional dimension reduction methods to work inefficiently. To overcome this problem, we have developed a novel Single Cell Representation Learning (SCRL) method based on network embedding. This method can efficiently implement data-driven non-linear projection and incorporate prior biological knowledge (such as pathway information) to learn more meaningful low-dimensional representations for both cells and genes. Benchmark results show that SCRL outperforms other dimension reduction methods on several recent scRNA-seq datasets. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
2012-01-01
We show that a certain three-dimensional (3D) superlattice nanostructure based on Bi2Te3 topological insulator thin films has better thermoelectric performance than two-dimensional (2D) thin films. The 3D superlattice shows a predicted peak value of ZT of approximately 6 for gapped surface states at room temperature and retains a high figure of merit ZT of approximately 2.5 for gapless surface states. In contrast, 2D thin films with gapless surface states show no advantage over bulk Bi2Te3. The enhancement of the thermoelectric performance originates from a combination of the reduction of lattice thermal conductivity by phonon-interface scattering, the high mobility of the topologically protected surface states, the enhancement of the Seebeck coefficient, and the reduction of electron thermal conductivity by energy filtering. Our study shows that the nanostructure design of topological insulators provides a possible new way of ZT enhancement. PMID:23072433
Fan, Zheyong; Zheng, Jiansen; Wang, Hui-Qiong; Zheng, Jin-Cheng
2012-10-16
We show that a certain three-dimensional (3D) superlattice nanostructure based on Bi2Te3 topological insulator thin films has better thermoelectric performance than two-dimensional (2D) thin films. The 3D superlattice shows a predicted peak value of ZT of approximately 6 for gapped surface states at room temperature and retains a high figure of merit ZT of approximately 2.5 for gapless surface states. In contrast, 2D thin films with gapless surface states show no advantage over bulk Bi2Te3. The enhancement of the thermoelectric performance originates from a combination of the reduction of lattice thermal conductivity by phonon-interface scattering, the high mobility of the topologically protected surface states, the enhancement of the Seebeck coefficient, and the reduction of electron thermal conductivity by energy filtering. Our study shows that the nanostructure design of topological insulators provides a possible new way of ZT enhancement.
Multispectral x-ray CT: multivariate statistical analysis for efficient reconstruction
NASA Astrophysics Data System (ADS)
Kheirabadi, Mina; Mustafa, Wail; Lyksborg, Mark; Lund Olsen, Ulrik; Bjorholm Dahl, Anders
2017-10-01
Recent developments in multispectral X-ray detectors allow for efficient identification of materials based on their chemical composition. This has a range of applications including security inspection, which is our motivation. In this paper, we analyze data from a tomographic setup employing the MultiX detector, which records projection data in 128 energy bins covering the range from 20 to 160 keV. Obtaining all information from these data requires reconstructing 128 tomograms, which is computationally expensive. Instead, we propose to reduce the dimensionality of the projection data prior to reconstruction and to reconstruct from the reduced data. We analyze three linear methods for dimensionality reduction using a dataset with 37 equally-spaced projection angles. Four bottles containing different materials are recorded, and we obtain similar discrimination of their contents using a greatly reduced subset of tomograms compared to the 128 tomograms that would otherwise be needed without dimensionality reduction.
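The abstract does not name the three linear methods; as a hedged illustration of the general idea, the following sketch uses PCA (one common linear reduction) to compress the spectral dimension of simulated projection data so that only a handful of tomograms need to be reconstructed. All shapes and the number of retained components are assumptions.

    import numpy as np

    # Simulated projection data: (angles * detector pixels) x 128 energy bins (shapes assumed)
    rng = np.random.default_rng(2)
    sinograms = rng.random((37 * 256, 128))

    # Centre and reduce the spectral dimension with a truncated SVD (i.e. PCA)
    mean_spec = sinograms.mean(axis=0)
    U, S, Vt = np.linalg.svd(sinograms - mean_spec, full_matrices=False)

    k = 4                                            # retained spectral components (assumed)
    reduced = (sinograms - mean_spec) @ Vt[:k].T     # k reduced sinograms to reconstruct

    # Only k tomograms need to be reconstructed instead of 128.
    print(reduced.shape)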
Pal, P K; Kamble, Suresh S; Chaurasia, Ranjitkumar Rampratap; Chaurasia, Vishwajit Rampratap; Tiwari, Samarth; Bansal, Deepak
2014-06-01
The present study was done to evaluate the dimensional stability and surface quality of Type IV gypsum casts retrieved from disinfected elastomeric impression materials. In this in vitro study, impression material contaminated with known bacterial species was treated with disinfectants, and swab samples were then cultured to assess the reduction in bacterial colony counts. Changes in the surface detail reproduction of the impressions were assessed following disinfection. All three disinfectants used in the study produced a 100% reduction in colony-forming units of the test organisms. All three disinfectants produced complete disinfection and did not cause any deterioration in surface detail reproduction. How to cite the article: Pal PK, Kamble SS, Chaurasia RR, Chaurasia VR, Tiwari S, Bansal D. Evaluation of dimensional stability and surface quality of type IV gypsum casts retrieved from disinfected elastomeric impression materials. J Int Oral Health 2014;6(3):77-81.
Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K
2015-01-01
This paper aims to elucidate the complex etiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors for predicting an individual's obesity status. These methods did not, however, reveal how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.
Design of a 3-dimensional visual illusion speed reduction marking scheme.
Liang, Guohua; Qian, Guomin; Wang, Ye; Yi, Zige; Ru, Xiaolei; Ye, Wei
2017-03-01
To determine which graphic and color combination for a 3-dimensional visual illusion speed reduction marking scheme presents the best visual stimulus, five parameters were designed. According to the Balanced Incomplete Blocks-Law of Comparative Judgment, three schemes, which produce strong stereoscopic impressions, were screened from the 25 initial design schemes of different combinations of graphics and colors. Three-dimensional experimental simulation scenes of the three screened schemes were created to evaluate four different effects according to a semantic analysis. The following conclusions were drawn: schemes with a red color are more effective than those without; the combination of red, yellow and blue produces the best visual stimulus; a larger area from the top surface and the front surface should be colored red; and a triangular prism should be painted as the graphic of the marking according to the stereoscopic impression and the coordination of graphics with the road.
Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction
2016-02-25
We have completed a short program of theoretical research on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the area of quantum probability and quantum stochastic differential equations.
3D-Hydrogel Based Polymeric Nanoreactors for Silver Nano-Antimicrobial Composites Generation
Soto-Quintero, Albanelly; Romo-Uribe, Ángel; Bermúdez-Morales, Víctor H.; Quijada-Garrido, Isabel
2017-01-01
This study underscores the development of Ag hydrogel nanocomposites, as smart substrates for antibacterial uses, via innovative in situ reactive and reduction pathways. To this end, two different synthetic strategies were used. First, thiol-acrylate (PSA) based hydrogels were obtained via thiol-ene and radical polymerization of polyethylene glycol (PEG) and polycaprolactone (PCL). As a second approach, polyurethane (PU) based hydrogels were prepared by condensation polymerization from diisocyanates and PCL and PEG diols. These syntheses yielded active three-dimensional (3D) hydrogel matrices that were used as nanoreactors for the in situ reduction of AgNO3 to silver nanoparticles. The redox chemistry of the stannous catalyst in the PU hydrogel yielded spherical AgNPs, even at 4 °C in the absence of an external reductant, while an appropriately thiol-functionalized polymeric network promoted spherical AgNPs well dispersed throughout the PSA hydrogel network after heating the swollen hydrogel at 103 °C in the presence of a citrate reductant. The optical and swelling behaviors of both series of hydrogel nanocomposites were investigated as key factors involved in their antimicrobial efficacy over time. Lastly, the in vitro antibacterial activity of Ag-loaded hydrogels exposed to Pseudomonas aeruginosa and Escherichia coli strains indicated a noticeable sustained inhibitory effect, especially for Ag–PU hydrogel nanocomposites, with bacterial growth inhibition capabilities up to 120 h of cultivation. PMID:28763050
Reusing remediated CCA-treated wood
Carol A. Clausen
2003-01-01
Options for recycling and reusing chromated-copper-arsenate- (CCA) treated material include dimensional lumber and round wood size reduction, composites, and remediation. Size reduction by remilling, shaving, or resawing CCA-treated wood reduces the volume of landfilled waste material and provides many options for reusing used treated wood. Manufacturing composite...
Reducing democratic type II supergravity on SU(3) × SU(3) structures
NASA Astrophysics Data System (ADS)
Cassani, Davide
2008-06-01
Type II supergravity on backgrounds admitting SU(3) × SU(3) structure and general fluxes is considered. Using the generalized geometry formalism, we study dimensional reductions leading to N = 2 gauged supergravity in four dimensions, possibly with tensor multiplets. In particular, a geometric formula for the full N = 2 scalar potential is given. Then we implement a truncation ansatz, and derive the complete N = 2 bosonic action. While the NSNS contribution is obtained via a direct dimensional reduction, the contribution of the RR sector is computed starting from the democratic formulation and demanding consistency with the reduced equations of motion.
Symmetry Reductions and Group-Invariant Radial Solutions to the n-Dimensional Wave Equation
NASA Astrophysics Data System (ADS)
Feng, Wei; Zhao, Songlin
2018-01-01
In this paper, we derive explicit group-invariant radial solutions to a class of wave equations via the symmetry group method. The optimal systems of one-dimensional subalgebras for the corresponding radial wave equation are presented in terms of the known point symmetries. The reductions of the radial wave equation into second-order ordinary differential equations (ODEs) with respect to each symmetry in the optimal systems are shown. We then solve the corresponding reduced ODEs explicitly in order to write out the group-invariant radial solutions of the wave equation. Finally, the analytical behaviour and smoothness of the resulting solutions are discussed.
Mechanism of polymer drag reduction using a low-dimensional model.
Roy, Anshuman; Morozov, Alexander; van Saarloos, Wim; Larson, Ronald G
2006-12-08
Using a retarded-motion expansion to describe the polymer stress, we derive a low-dimensional model to understand the effects of polymer elasticity on the self-sustaining process that maintains the coherent wavy streamwise vortical structures underlying wall-bounded turbulence. Our analysis shows that at small Weissenberg numbers, Wi, elasticity enhances the coherent structures. At higher Wi, however, polymer stresses suppress the streamwise vortices (rolls) by calming down the instability of the streaks that regenerates the rolls. We show that this behavior can be attributed to the nonmonotonic dependence of the biaxial extensional viscosity on Wi, and identify it as the key rheological property controlling drag reduction.
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua; Engelhard, Mark H; Xia, Haibing; Du, Dan; Lin, Yuehe
2016-12-28
Rational design and construction of Pt-based porous nanostructures with large mesopores have attracted significant attention because of their high surface area and more efficient mass transport. Hydrochloric acid-induced, kinetically controlled reduction of metal precursors in the presence of the soft template F-127 and hard-template tellurium nanowires has been successfully demonstrated to construct one-dimensional hierarchical porous PtCu alloy nanostructures with large mesopores. Moreover, electrochemical experiments demonstrated that the PtCu hierarchically porous nanostructures synthesized under optimized conditions exhibit enhanced electrocatalytic performance for the oxygen reduction reaction in acid media.
Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas
2017-12-01
In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM) is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
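As a hedged sketch of the kind of pipeline evaluated above: wavelet-energy feature extraction followed by dimensionality reduction and k-nearest-neighbour classification. The maximum ratio method (MRM) proposed in the paper is not publicly specified, so PCA is used below purely as a stand-in, and the signals, labels, and parameters are simulated assumptions.

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA          # stand-in for MRM, which is not public
    from sklearn.neighbors import KNeighborsClassifier

    def dwt_features(signal, wavelet="db4", level=4):
        """Energy of each wavelet sub-band as a simple DWT feature vector."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])

    rng = np.random.default_rng(3)
    signals = rng.standard_normal((200, 1024))     # toy neural recordings (assumed)
    labels = rng.integers(0, 2, 200)               # 0 = no stimulation needed, 1 = stimulate

    X = np.vstack([dwt_features(s) for s in signals])
    X_red = PCA(n_components=3).fit_transform(X)   # placeholder for the maximum ratio method

    clf = KNeighborsClassifier(n_neighbors=5).fit(X_red, labels)
    print(clf.score(X_red, labels))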
Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.
Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua
2016-11-01
This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.
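A hedged, centralized toy illustration of the one-vector-by-one-vector idea: the leading projection vector for a single tensor mode is obtained from the mode unfolding by power iteration, without the consensus constraints and inter-node communication that the paper actually develops. Tensor sizes and iteration counts are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    tensors = rng.standard_normal((50, 8, 6, 4))     # 50 samples of 8x6x4 tensor data (assumed)

    def leading_projection_vector(unfolded, iters=100):
        """Leading left singular vector of a mode unfolding, via power iteration."""
        M = unfolded @ unfolded.T
        v = rng.standard_normal(M.shape[0])
        for _ in range(iters):
            v = M @ v
            v /= np.linalg.norm(v)
        return v

    # Mode-1 unfolding: stack all samples' mode-1 fibres side by side
    mode1 = np.concatenate([t.reshape(8, -1) for t in tensors], axis=1)
    u1 = leading_projection_vector(mode1)            # first projection vector for mode 1
    print(u1.shape)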
Shape component analysis: structure-preserving dimension reduction on biological shape spaces.
Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge
2016-03-01
Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing the results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. Contact: geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Formation of dominant mode by evolution in biological systems
NASA Astrophysics Data System (ADS)
Furusawa, Chikara; Kaneko, Kunihiko
2018-04-01
A reduction of high-dimensional phenotypic states to a few degrees of freedom is essential to understand biological systems. Here, we show that evolutionary robustness causes such a reduction, which restricts possible phenotypic changes in response to a variety of environmental conditions. First, global protein expression changes in Escherichia coli after various environmental perturbations were shown to be proportional across components, across different types of environmental conditions. To examine whether such dimension reduction is a result of evolution, we analyzed a cell model with a huge number of components that reproduces itself via a catalytic reaction network, and confirmed that common proportionality in the concentrations of all components is shaped through evolutionary processes. We found that the changes in concentration across all components in response to environmental and evolutionary changes are constrained to changes along a one-dimensional major axis, within a huge-dimensional state space. On the basis of these observations, we propose a theory in which such constraints on phenotypic changes are achieved both by evolutionary robustness and plasticity, and formulate this proposition in terms of dynamical systems. Accordingly, broad experimental and numerical results on phenotypic changes caused by evolution and adaptation are coherently explained.
Yang, Jing; Ye, Shu-jun; Wu, Ji-chun
2011-05-01
This paper studied the influence of bioclogging on the permeability of saturated porous media. Laboratory hydraulic tests were conducted in a two-dimensional C190 sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) to investigate growth of the mixed microorganisms (KB-1) and the influence of biofilm on the permeability of saturated porous media under nutrient-rich conditions. Biomass distributions in the water and on the sand in the cell were measured by protein analysis. The biofilm distribution on the sand was observed by confocal laser scanning microscopy. Permeability was measured by hydraulic tests. The biomass levels measured in water and on the sand increased with time and were highest at the bottom of the cell, where the biofilm on the sand was thicker. The hydraulic tests demonstrated that, owing to biofilm growth, the permeability dropped to an average of 12% of its initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model showed reductions in permeability of 2-5 orders of magnitude. Clement's model predicted permeabilities of 3%-98% of the initial value. Seki's model could not be applied in this study. In conclusion, biofilm growth can markedly decrease the permeability of two-dimensional saturated porous media; however, the reduction was much smaller than that estimated under one-dimensional conditions. Additionally, for two-dimensional saturated porous media with rich nutrition, Seki's model could not be applied, Taylor's model predicted larger reductions, and the results of Clement's model were closest to the hydraulic test results.
Exploring Approaches to Teaching in Three-Dimensional Virtual Worlds
ERIC Educational Resources Information Center
Englund, Claire
2017-01-01
Purpose: The purpose of this paper is to explore how teachers' approaches to teaching and conceptions of teaching and learning with educational technology influence the implementation of three-dimensional virtual worlds (3DVWs) in health care education. Design/methodology/approach: Data were collected through thematic interviews with eight…
Riffel, Philipp; Michaely, Henrik J; Morelli, John N; Pfeuffer, Josef; Attenberger, Ulrike I; Schoenberg, Stefan O; Haneder, Stefan
2014-01-01
Implementation of DWI in the abdomen is challenging due to artifacts, particularly those arising from differences in tissue susceptibility. Two-dimensional, spatially-selective radiofrequency (RF) excitation pulses for single-shot echo-planar imaging (EPI), combined with a reduction of the FOV in the phase-encoding direction (i.e. zooming), lead to a decreased number of k-space acquisition lines, significantly shortening the EPI echo train and potentially reducing susceptibility artifacts. The aim of this study was to assess the feasibility and image quality of a zoomed diffusion-weighted EPI (z-EPI) sequence in MR imaging of the pancreas, compared to conventional single-shot EPI (c-EPI). 23 patients who had undergone an MRI study of the abdomen were included in this retrospective study. Examinations were performed on a 3T whole-body MR system (Magnetom Skyra, Siemens) equipped with a two-channel fully dynamic parallel transmit array (TimTX TrueShape, Siemens). The acquired sequences consisted of a conventional EPI DWI of the abdomen and a zoomed EPI DWI of the pancreas. For z-EPI, the standard sinc excitation was replaced with a two-dimensional spatially-selective RF pulse using an echo-planar transmit trajectory. Images were evaluated with regard to image blur, respiratory motion artifacts, diagnostic confidence, delineation of the pancreas, and overall scan preference. Additionally, ADC values of the pancreatic head, body, and tail were calculated and compared between sequences. The pancreas was better delineated in every case (23/23) with z-EPI versus c-EPI, and in every case (23/23) both readers preferred z-EPI overall to c-EPI. With z-EPI there was statistically significantly less image blur (p<0.0001) and respiratory motion artifact (p<0.0001) compared to c-EPI. Diagnostic confidence was statistically significantly better with z-EPI (p<0.0001). No statistically significant differences in calculated ADC values were observed between the two sequences. Zoomed diffusion-weighted EPI leads to substantial image quality improvements with reduction of susceptibility artifacts in pancreatic DWI.
Diffusion maps for high-dimensional single-cell analysis of differentiation data.
Haghverdi, Laleh; Buettner, Florian; Theis, Fabian J
2015-09-15
Single-cell technologies have recently gained popularity in cellular differentiation studies owing to their ability to resolve potential heterogeneities in cell populations. Analyzing such high-dimensional single-cell data has its own statistical and computational challenges. Popular multivariate approaches are based on data normalization, followed by dimension reduction and clustering to identify subgroups. However, in the case of cellular differentiation, we would not expect clear clusters to be present but instead expect the cells to follow continuous branching lineages. Here, we propose the use of diffusion maps to deal with the problem of defining differentiation trajectories. We adapt this method to single-cell data by an adequate choice of kernel width and by the inclusion of uncertainties or missing measurement values, which enables the establishment of a pseudotemporal ordering of single cells in a high-dimensional gene expression space. We expect this output to reflect cell differentiation trajectories, where the data originate from intrinsic diffusion-like dynamics. Starting from a pluripotent stage, cells move smoothly within the transcriptional landscape towards more differentiated states, with some stochasticity along their path. We demonstrate the robustness of our method with respect to extrinsic noise (e.g. measurement noise) and sampling density heterogeneities on simulated toy data as well as on two single-cell quantitative polymerase chain reaction datasets (mouse haematopoietic stem cells and mouse embryonic stem cells) and an RNA-Seq dataset of human pre-implantation embryos. We show that diffusion maps perform considerably better than Principal Component Analysis and are advantageous over other techniques for non-linear dimension reduction, such as t-distributed Stochastic Neighbour Embedding, for preserving the global structure and pseudotemporal ordering of cells. The Matlab implementation of diffusion maps for single-cell data is available at https://www.helmholtz-muenchen.de/icb/single-cell-diffusion-map. Contact: fbuettner.phys@gmail.com, fabian.theis@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
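A hedged sketch of a textbook diffusion map (Gaussian kernel, row normalization, spectral decomposition of the resulting Markov matrix); the authors' adaptive kernel-width choice and handling of uncertain or missing values are not reproduced, and the toy data and bandwidth below are assumptions.

    import numpy as np
    from scipy.spatial.distance import cdist

    def diffusion_map(X, sigma=1.0, n_components=2):
        """Basic diffusion map: Gaussian kernel, row normalization, spectral embedding."""
        K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))
        P = K / K.sum(axis=1, keepdims=True)          # Markov transition matrix
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        # Skip the trivial constant eigenvector; scale components by their eigenvalues
        idx = order[1:n_components + 1]
        return vecs.real[:, idx] * vals.real[idx]

    rng = np.random.default_rng(5)
    cells = rng.standard_normal((300, 40))            # toy expression matrix (cells x genes), assumed
    coords = diffusion_map(cells, sigma=2.0)
    print(coords.shape)                               # (300, 2) diffusion components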
Comparative study of feature selection with ensemble learning using SOM variants
NASA Astrophysics Data System (ADS)
Filali, Ameni; Jlassi, Chiraz; Arous, Najet
2017-03-01
Ensemble learning has improved the stability and accuracy of clustering, but the runtime of ensemble methods prohibits them from scaling up to real-world applications. This study addresses the problem of selecting a subset of the most pertinent features for every cluster in a dataset. The proposed method is an extension of the Random Forests approach to unlabeled data, using self-organizing map (SOM) variants, that estimates out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We then show that the internal estimates used to measure variable importance in Random Forests are also applicable to feature selection in unsupervised learning. The approach aims at dimensionality reduction, visualization, and cluster characterization at the same time. We provide empirical results on nineteen benchmark data sets indicating that RFS can lead to significant improvements in clustering accuracy over several state-of-the-art unsupervised methods, with a very limited subset of features. The approach shows promise for very broad domains.
NASA Astrophysics Data System (ADS)
Salem, Mohamed Shaker; Abdelaleem, Asmaa Mohamed; El-Gamal, Abear Abdullah; Amin, Mohamed
2017-01-01
One-dimensional silicon-based photonic crystals are formed by the electrochemical anodization of silicon substrates in hydrofluoric acid-based solution using an appropriate current density profile. In order to create a multi-band optical filter, two fabrication approaches are compared and discussed. The first approach utilizes a current profile composed of a linear combination of sinusoidal current waveforms having different frequencies. The individual frequency of the waveform maps to a characteristic stop band in the reflectance spectrum. The stopbands of the optical filter created by the second approach, on the other hand, are controlled by stacking multiple porous silicon rugate multilayers having different fabrication conditions. The morphology of the resulting optical filters is tuned by controlling the electrolyte composition and the type of the silicon substrate. The reduction of sidelobes arising from the interference in the multilayers is observed by applying an index matching current profile to the anodizing current waveform. In order to stabilize the resulting optical filters against natural oxidation, atomic layer deposition of silicon dioxide on the pore wall is employed.
2013-01-01
Background The structured organization of cells in the brain plays a key role in its functional efficiency. This delicate organization is the consequence of unique molecular identity of each cell gradually established by precise spatiotemporal gene expression control during development. Currently, studies on the molecular-structural association are beginning to reveal how the spatiotemporal gene expression patterns are related to cellular differentiation and structural development. Results In this article, we aim at a global, data-driven study of the relationship between gene expressions and neuroanatomy in the developing mouse brain. To enable visual explorations of the high-dimensional data, we map the in situ hybridization gene expression data to a two-dimensional space by preserving both the global and the local structures. Our results show that the developing brain anatomy is largely preserved in the reduced gene expression space. To provide a quantitative analysis, we cluster the reduced data into groups and measure the consistency with neuroanatomy at multiple levels. Our results show that the clusters in the low-dimensional space are more consistent with neuroanatomy than those in the original space. Conclusions Gene expression patterns and developing brain anatomy are closely related. Dimensionality reduction and visual exploration facilitate the study of this relationship. PMID:23845024
Strong anti-gravity: Life in the shock wave
NASA Astrophysics Data System (ADS)
Fabbrichesi, Marco; Roland, Kaj
1992-12-01
Strong anti-gravity is the vanishing of the net force between two massive particles at rest, to all orders in Newton's constant. We study this phenomenon and show that it occurs in any effective theory of gravity which is obtained from a higher-dimensional model by compactification on a manifold with flat directions. We find the exact solution of the Einstein equations in the presence of a point-like source of strong anti-gravity by dimensional reduction of a shock-wave solution in the higher-dimensional model.
NASA Technical Reports Server (NTRS)
Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.
2012-01-01
This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method reduces both the amount and the complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefit from this approach through reduced development and design cycle time include creation of analysis models for the aerodynamics discipline, vehicle-to-ground interface development, and documentation development for the vehicle assembly.
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut
2016-05-16
Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications.
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut
2016-01-01
Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications. PMID:27181695
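A hedged sketch of the underlying reduction idea only: representing an image by a truncated set of low-order DCT coefficients, so that far fewer unknowns need to be estimated. This is not the authors' EIT reconstruction pipeline; the image, grid size, and number of retained coefficients are assumptions.

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(6)
    image = rng.random((64, 64))                  # stand-in conductivity-change image (assumed)

    # Keep only the lowest k x k DCT coefficients: the reduced set of unknowns
    k = 8
    coeffs = dctn(image, norm="ortho")
    truncated = np.zeros_like(coeffs)
    truncated[:k, :k] = coeffs[:k, :k]

    approx = idctn(truncated, norm="ortho")       # image represented by k*k unknowns instead of 64*64
    print(np.linalg.norm(image - approx) / np.linalg.norm(image))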
NASA Astrophysics Data System (ADS)
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model while encouraging sparsity and grouping; the sparsity addresses the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.
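For reference, the standard fused lasso objective underlying this kind of band selection can be written as below, in generic notation; the penalty weights and any extensions the authors use for the texture features are not specified in the abstract.

    \hat{\beta} = \arg\min_{\beta} \ \frac{1}{2} \| y - X\beta \|_2^2
                  + \lambda_1 \sum_{j=1}^{p} |\beta_j|
                  + \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|

Here y holds the biomass observations, the columns of X are spectral bands ordered by wavelength, the first penalty promotes sparsity (band selection), and the second penalty promotes grouping of adjacent bands, giving robustness to noise and peak shifts.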
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
One of the most widely-used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
Chowdhary, Kenny; Najm, Habib N.
2016-04-13
One of the most widely-used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
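A hedged sketch of the classical (non-Bayesian) step described above: estimating KLE basis functions by an SVD of centred sample paths. The Bayesian matrix-Bingham sampling itself is not reproduced, and the Brownian-motion-like toy data, grid, and sample size are assumptions chosen to illustrate the small-sample regime.

    import numpy as np

    rng = np.random.default_rng(7)
    # 30 sample paths of a Brownian-motion-like process on a 100-point grid (assumed)
    t = np.linspace(0, 1, 100)
    samples = np.cumsum(rng.standard_normal((30, 100)) * np.sqrt(t[1]), axis=1)

    centered = samples - samples.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)

    modes = Vt[:5]                                 # leading KLE basis functions (principal components)
    energies = S ** 2 / (samples.shape[0] - 1)     # eigenvalues of the sample covariance
    print(energies[:5])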
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
NASA Technical Reports Server (NTRS)
Pain, B.; Cunningham, T. J.; Hancock, B.; Yang, G.; Seshadri, S.; Ortiz, M.
2002-01-01
We present a new CMOS photodiode imager pixel with ultra-low read noise, achieved through on-chip suppression of reset noise via column-based feedback circuitry. The noise reduction is achieved without introducing any image lag, and with insignificant reduction in quantum efficiency and full well capacity.
Templated assembly of BiFeO3 nanocrystals into 3D mesoporous networks for catalytic applications
NASA Astrophysics Data System (ADS)
Papadas, I. T.; Subrahmanyam, K. S.; Kanatzidis, M. G.; Armatas, G. S.
2015-03-01
The self-assembly of uniform nanocrystals into large porous architectures is currently of immense interest for nanochemistry and nanotechnology. These materials combine the respective advantages of discrete nanoparticles and mesoporous structures. In this article, we demonstrate a facile nanoparticle templating process to synthesize a three-dimensional mesoporous BiFeO3 material. This approach involves the polymer-assisted aggregating assembly of 3-aminopropanoic acid-stabilized bismuth ferrite (BiFeO3) nanocrystals followed by thermal decomposition of the surfactant. The resulting material consists of a network of tightly connected BiFeO3 nanoparticles (~6-7 nm in diameter) and has a moderately high surface area (62 m2 g-1) and uniform pores (ca. 6.3 nm). As a result of the unique mesostructure, the porous assemblies of BiFeO3 nanoparticles show an excellent catalytic activity and chemical stability for the reduction of p-nitrophenol to p-aminophenol with NaBH4.
An accurate boundary element method for the exterior elastic scattering problem in two dimensions
NASA Astrophysics Data System (ADS)
Bao, Gang; Xu, Liwei; Yin, Tao
2017-11-01
This paper is concerned with a Galerkin boundary element method solving the two dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In numerical implementations, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of hyper-singular boundary integral operator. A new computational approach is employed based on the series expansions of Hankel functions for the computation of weakly-singular boundary integral operators during the reduction of corresponding Galerkin equations into a discrete linear system. The effectiveness of proposed numerical methods is demonstrated using several numerical examples.
Chen, J; Irianto, J; Inamdar, S; Pravincumar, P; Lee, D A; Bader, D L; Knight, M M
2012-09-19
This study adopts a combined computational and experimental approach to determine the mechanical, structural, and metabolic properties of isolated chondrocytes cultured within three-dimensional hydrogels. A series of linear elastic and hyperelastic finite-element models demonstrated that chondrocytes cultured for 24 h in gels for which the relaxation modulus is <5 kPa exhibit a cellular Young's modulus of ∼5 kPa. This is notably greater than that reported for isolated chondrocytes in suspension. The increase in cell modulus occurs over a 24-h period and is associated with an increase in the organization of the cortical actin cytoskeleton, which is known to regulate cell mechanics. However, there was a reduction in chromatin condensation, suggesting that changes in the nucleus mechanics may not be involved. Comparison of cells in 1% and 3% agarose showed that cells in the stiffer gels rapidly develop a higher Young's modulus of ∼20 kPa, sixfold greater than that observed in the softer gels. This was associated with higher levels of actin organization and chromatin condensation, but only after 24 h in culture. Further studies revealed that cells in stiffer gels synthesize less extracellular matrix over a 28-day culture period. Hence, this study demonstrates that the properties of the three-dimensional microenvironment regulate the mechanical, structural, and metabolic properties of living cells. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W
2012-09-07
A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Collins, J. D.; Volakis, John L.
1992-01-01
A method that combines the finite element and boundary integral techniques for the numerical solution of electromagnetic scattering problems is presented. The finite element method is well known for requiring a low order storage and for its capability to model inhomogeneous structures. Of particular emphasis in this work is the reduction of the storage requirement by terminating the finite element mesh on a boundary in a fashion which renders the boundary integrals in convolutional form. The fast Fourier transform is then used to evaluate these integrals in a conjugate gradient solver, without a need to generate the actual matrix. This method has a marked advantage over traditional integral equation approaches with respect to the storage requirement of highly inhomogeneous structures. Rectangular, circular, and ogival mesh termination boundaries are examined for two-dimensional scattering. In the case of axially symmetric structures, the boundary integral matrix storage is reduced by exploiting matrix symmetries and solving the resulting system via the conjugate gradient method. In each case several results are presented for various scatterers aimed at validating the method and providing an assessment of its capabilities. Important in methods incorporating boundary integral equations is the issue of internal resonance. A method is implemented for their removal, and is shown to be effective in the two-dimensional and three-dimensional applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, R.
This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first one is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second approach is based on three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.
Sentinel Lymph Node Biopsy: Quantification of Lymphedema Risk Reduction
2006-10-01
Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.
2013-01-01
The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954
Cowley, Benjamin R; Kaufman, Matthew T; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M
2012-01-01
The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity.
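The GUI described above is a Matlab tool; purely as an illustration of the underlying idea, the short Python sketch below reduces simulated population activity to a handful of latent dimensions and then displays it through several randomly oriented 2-D projection planes. All data and dimensionalities here are hypothetical.

```python
# Viewing a reduced-dimensional space through many 2-D projections
# (a minimal Python analogue of the idea, not the Matlab GUI itself).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
activity = rng.normal(size=(500, 40))        # hypothetical trials x neurons

# Dimensionality reduction to 8 latent dimensions via PCA
centered = activity - activity.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ Vt[:8].T                 # 500 x 8 latent activity

fig, axes = plt.subplots(2, 3, figsize=(9, 6))
for ax in axes.ravel():
    # A random orthonormal pair of axes defines one 2-D projection plane.
    Q, _ = np.linalg.qr(rng.normal(size=(8, 2)))
    proj = latent @ Q
    ax.scatter(proj[:, 0], proj[:, 1], s=4)
    ax.set_xticks([]); ax.set_yticks([])
plt.tight_layout()
plt.show()
```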
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface system adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling ('Method 1'), (2) PCA with MCMC sampling ('Method 2'), and (3) PCA with MCMC sampling and the inclusion of random effects ('Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generate synthetic data with added noise and invert them under two different situations: (1) the noisy data and the covariance matrix used for the PCA are consistent (the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. In the biased case, however, only Method 3 correctly estimates all the unknown parameters, and Methods 1 and 2 both provide incorrect values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix used for the PCA is inconsistent with the true models, PCA with geometric or MCMC sampling will provide incorrect estimates.
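As a toy illustration of the "PCA with MCMC sampling" idea (not the authors' code), the Python sketch below builds a five-component PCA basis from an assumed prior ensemble, defines a simple convolution forward model, and runs a Metropolis sampler over the five PCA coefficients; every ingredient (ensemble, kernel, noise level, priors) is a hypothetical stand-in.

```python
# Model reduction via PCA coefficients sampled with Metropolis MCMC
# (assumed toy setup: unknown model = mean + PCA basis * coefficients).
import numpy as np

rng = np.random.default_rng(0)

# A prior ensemble of plausible models provides the PCA basis.
ensemble = rng.normal(size=(200, 50))
mean = ensemble.mean(axis=0)
_, _, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
n_pc = 5
basis = Vt[:n_pc]                      # five principal components

def forward(model, kernel=np.ones(5) / 5.0):
    return np.convolve(model, kernel, mode="same")   # simple convolution model

true_coeffs = rng.normal(size=n_pc)
data = forward(mean + true_coeffs @ basis) + 0.05 * rng.normal(size=50)

def log_post(c, sigma=0.05):
    resid = data - forward(mean + c @ basis)
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum(c**2)

# Metropolis sampling over the five PCA coefficients
c = np.zeros(n_pc)
lp = log_post(c)
samples = []
for _ in range(5000):
    prop = c + 0.1 * rng.normal(size=n_pc)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    samples.append(c.copy())
samples = np.array(samples)
```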
Subbarao, Udumula; Sarkar, Sumanta; Jana, Rajkumar; Bera, Sourav S; Peter, Sebastian C
2016-06-06
We conceptually selected the compounds REPb3 (RE = Eu, Yb), which are unstable in air, and converted them into materials that are stable under ambient conditions through the chemical processes of "nanoparticle formation" and "dimensional reduction". The nanoparticles and their bulk counterparts were synthesized by solvothermal and high-frequency induction-furnace heating methods, respectively. The reduction in particle size led to a valence transition of the rare-earth atom, which was monitored through magnetic susceptibility and X-ray absorption near-edge spectroscopy (XANES) measurements. The stability was checked by X-ray diffraction and thermogravimetric analysis over a period of seven months in oxygen and argon atmospheres and confirmed by XANES. The nanoparticles showed outstanding stability toward aerial oxidation over the seven-month period compared to the bulk counterpart, which is prone to oxidation within a few days.
The staircase method: integrals for periodic reductions of integrable lattice equations
NASA Astrophysics Data System (ADS)
van der Kamp, Peter H.; Quispel, G. R. W.
2010-11-01
We show, in full generality, that the staircase method (Papageorgiou et al 1990 Phys. Lett. A 147 106-14, Quispel et al 1991 Physica A 173 243-66) provides integrals for mappings, and correspondences, obtained as traveling-wave reductions of (systems of) integrable partial difference equations. We apply the staircase method to a variety of equations, including the Korteweg-de Vries equation, the five-point Bruschi-Calogero-Droghei equation, the quotient-difference (QD) algorithm and the Boussinesq system. We show that, in all these cases, if the staircase method provides r integrals for an n-dimensional mapping, with 2r < n, then one can introduce q <= 2r variables which reduce the dimension of the mapping from n to q. These dimension-reducing variables are obtained as joint invariants of k-symmetries of the mappings. Our results support the idea that the staircase method often provides sufficiently many integrals for the periodic reductions of integrable lattice equations to be completely integrable. We also study reductions on quad-graphs other than the regular {\mathbb Z}^2 lattice, and we prove linear growth of the multi-valuedness of iterates of high-dimensional correspondences obtained as reductions of the QD algorithm.
Wang, Ji; Cheng, Jie-Jun; Huang, Kai-Yi; Zhuang, Zhi-Guo; Zhang, Xue-Bin; Chi, Jia-Chang; Hua, Xiao-Lan; Xu, Jian-Rong
2016-03-01
The aim of this study was to develop a quantitative measurement of perfusion reduction using color-coded digital subtraction angiography (ccDSA) to monitor intra-procedural arterial stasis during transarterial chemoembolization (TACE). A total of 35 patients with hepatocellular carcinoma who had undergone TACE were enrolled in the study. Pre- and post-procedural two-dimensional digital subtraction angiography scans were acquired with the same protocol and post-processed with prototype ccDSA software. A time-contrast-intensity curve, CI[t], was obtained by region-of-interest (ROI) measurement on the generated ccDSA image. Quantitative 2D perfusion parameters derived from the ROI-based CI[t] curves before and after TACE, namely time to peak, area under the curve (AUC), maximum upslope, and contrast-intensity peak (CI-Peak), were evaluated to assess the reduction of antegrade blood flow and tumor blush. Relationships between the 2D perfusion parameters, the subjective angiographic chemoembolization endpoint (SACE) scale, and clinical outcomes were analyzed. The area-normalized AUC and CI-Peak revealed a significant reduction after TACE (P < 0.0001). AUCnorm decreased from a pre-procedural value of 0.867 ± 0.242 to 0.421 ± 0.171 (P < 0.001) after completion of TACE. CI-Peaknorm was 0.739 ± 0.221 before TACE and 0.421 ± 0.174 (P < 0.001) after TACE. Tumor blood-supply time slowed markedly after embolization. A perfusion reduction of 30% to 40% in either AUCnorm or CI-Peaknorm was associated with SACE level III, and a reduction of 60% to 70% was equivalent to SACE level IV. An intermediate reduction (SACE level III) was associated with a better tumor response after TACE than a higher reduction (SACE level IV). The ccDSA application provides an objective means of quantifying the perfusion reduction and evaluating the arterial stasis of antegrade blood flow and tumor blush caused by TACE.
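For illustration only, the following Python/NumPy sketch computes the four ROI-based curve parameters from a time-contrast-intensity curve and a normalized AUC reduction between assumed pre- and post-embolization curves; the curve shapes, units, and normalization here are hypothetical rather than the study's exact definitions.

```python
# Perfusion parameters from a ROI time-contrast-intensity curve (illustrative).
import numpy as np

def perfusion_parameters(t, ci):
    """t: time samples (s); ci: ROI-averaged contrast intensity."""
    ttp = t[np.argmax(ci)]                                   # time to peak
    auc = np.sum(0.5 * (ci[1:] + ci[:-1]) * np.diff(t))      # area under the curve
    max_upslope = np.max(np.gradient(ci, t))                 # maximum upslope
    ci_peak = np.max(ci)                                     # contrast-intensity peak
    return ttp, auc, max_upslope, ci_peak

# Hypothetical pre- and post-embolization curves; in practice each parameter
# would be normalized before computing a percentage reduction.
t = np.linspace(0.0, 10.0, 200)
pre = np.exp(-0.5 * ((t - 3.0) / 1.0) ** 2)
post = 0.4 * np.exp(-0.5 * ((t - 4.5) / 1.5) ** 2)
auc_reduction = 1.0 - perfusion_parameters(t, post)[1] / perfusion_parameters(t, pre)[1]
print(f"AUC reduction: {auc_reduction:.0%}")
```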
Numerical Modeling of Three-Dimensional Confined Flows
NASA Technical Reports Server (NTRS)
Greywall, M. S.
1981-01-01
A three-dimensional confined-flow model is presented. The flow field is computed by calculating velocity and enthalpy along a set of streamlines. The finite difference equations are obtained by applying conservation principles to streamtubes constructed around the chosen streamlines. With appropriate substitutions for the body-force terms, the approach computes three-dimensional magnetohydrodynamic channel flows. A listing of a computer code based on this approach, written in FORTRAN IV, is presented. The code computes three-dimensional compressible viscous flow through a rectangular duct, with the duct cross section specified along the axis.
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling the parameters of the statistical error distribution, including error correlations (covariance), through a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data-error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
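To show how an autoregressive error model enters such a sampler, the sketch below evaluates a Gaussian log-likelihood after whitening residuals with a first-order (AR(1)) coefficient; this is a minimal, assumed first-order illustration in Python rather than the hierarchical model of the paper.

```python
# Conditional log-likelihood of residuals under an AR(1) correlated-error model.
import numpy as np

def log_likelihood_ar1(residuals, phi, sigma):
    r = np.asarray(residuals, dtype=float)
    # Whitening: innovations e_t = r_t - phi * r_{t-1} are treated as
    # independent N(0, sigma^2) samples (conditioning on the first residual).
    innov = r[1:] - phi * r[:-1]
    n = innov.size
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(innov**2) / sigma**2

# Example: correlated residuals are far more probable under phi ~ 0.8 than phi = 0.
rng = np.random.default_rng(0)
r = np.zeros(200)
for i in range(1, 200):
    r[i] = 0.8 * r[i - 1] + 0.1 * rng.normal()
print(log_likelihood_ar1(r, 0.8, 0.1), log_likelihood_ar1(r, 0.0, 0.1))
```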
Reducing maternal anxiety and stress in pregnancy: what is the best approach?
Fontein-Kuipers, Yvonne
2015-04-01
To briefly review results of the latest research on approaching antenatal maternal anxiety and stress as distinct constructs within a broad spectrum of maternal antenatal distress, and the preventive strategic role of the maternal healthcare practitioner. Maternal antenatal anxiety and stress are predominant contributors to short- and long-term ill health, and the need to reduce these psychological constructs is evident. Anxiety and stress belong to a broad spectrum of different psychological constructs. Various psychometric instruments are available to measure different individual constructs of antenatal maternal emotional health. Using multiple measures within antenatal care would imply a one-dimensional approach to individual constructs, resulting in inadequate management of care and inefficient use of the knowledge and skills of maternity healthcare practitioners. A case-finding approach with a slight emphasis on antenatal anxiety, with subsequent selection of at-risk women and women suffering from maternal distress, has been shown to be an effective preventive strategy and is consistent with the update of the National Institute for Health and Care Excellence guideline 'Antenatal and postnatal mental health'. Educational aspects of this approach relate to screening and assessment. A shift in perception and attitude towards a broad theoretical and practical approach to antenatal maternal mental health and well-being is required. Case finding with subsequent selective and indicated preventive strategies during pregnancy would conform to this approach and is evidence based.
Topology of Flow Separation on Three-Dimensional Bodies
NASA Technical Reports Server (NTRS)
Chapman, Gary T.; Yates, Leslie A.
1991-01-01
In recent years there has been extensive research on three-dimensional flow separation. There are two different approaches: the phenomenological approach and a mathematical approach using topology. These two approaches are reviewed briefly and the shortcomings of some of the past works are discussed. A comprehensive approach applicable to incompressible and compressible steady-state flows as well as incompressible unsteady flow is then presented. The approach is similar to earlier topological approaches to separation but is more complete and in some cases adds more emphasis to certain points than in the past. To assist in the classification of various types of flow, nomenclature is introduced to describe the skin-friction portraits on the surface. This method of classification is then demonstrated on several categories of flow to illustrate particular points as well as the diversity of flow separation. The categories include attached, two-dimensional separation and three different types of simple, three-dimensional primary separation, secondary separation, and compound separation. Hypothetical experiments are utilized to illustrate the topological terminology and its role in characterizing these flows. These hypothetical experiments use colored oil injected onto the surface at singular points in the skin-friction portrait. Actual flow-visualization information, if available, is used to corroborate the hypothetical examples.