Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
Fukunaga-Koontz Transform (FKT) based techniques offer some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. A dimensionality reduction technique based on this complementary eigenvector analysis can be described for two classes, the desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the other class. By selecting the few eigenvectors that are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
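A minimal NumPy sketch of the two-class Fukunaga-Koontz reduction described above, on synthetic target and background samples (sizes and data are illustrative, not from the paper): the summed class covariances are whitened, the shared eigenvectors become complementary between the two classes, and the eigenvectors with the largest target-class eigenvalues form the reduction matrix.

```python
import numpy as np

def fkt_basis(X_target, X_background, k):
    """Fukunaga-Koontz transform: return k basis vectors that best represent
    the target class while carrying least background energy.
    X_* are (n_samples, n_bands) arrays; a minimal sketch."""
    R1 = np.cov(X_target, rowvar=False)
    R2 = np.cov(X_background, rowvar=False)
    # Whiten the sum of the two class covariances.
    d, V = np.linalg.eigh(R1 + R2)
    P = V @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    # In the whitened space the two classes share eigenvectors and their
    # eigenvalues are complementary (they sum to one).
    s1, Phi = np.linalg.eigh(P.T @ R1 @ P)
    order = np.argsort(s1)[::-1]          # largest eigenvalues -> target class
    return P @ Phi[:, order[:k]]          # (n_bands, k) reduction matrix

# Toy usage: reduce a 50-band "hyperspectral" cube to 3 target-oriented bands.
rng = np.random.default_rng(0)
target = rng.normal(size=(200, 50)) + 2.0
background = rng.normal(size=(500, 50))
W = fkt_basis(target, background, k=3)
cube = rng.normal(size=(64 * 64, 50))
reduced_cube = cube @ W                   # (pixels, 3)
```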
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
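The inner trace-ratio step that MKL-TR builds on can be sketched with the standard iterative eigen-decomposition scheme. The sketch below assumes fixed symmetric matrices A and B (in MKL-TR these would be assembled from the learned kernel combination, which is omitted here) and random positive definite test matrices.

```python
import numpy as np

def trace_ratio(A, B, d, n_iter=50, tol=1e-10):
    """Maximize tr(W^T A W) / tr(W^T B W) over orthonormal W (p x d) by the
    standard iterative eigen-decomposition scheme.  This is only the inner
    trace-ratio subproblem; the multiple-kernel weights are assumed fixed."""
    lam = 0.0
    for _ in range(n_iter):
        vals, vecs = np.linalg.eigh(A - lam * B)
        W = vecs[:, np.argsort(vals)[::-1][:d]]   # top-d eigenvectors
        new_lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam

# Toy usage with random symmetric positive definite matrices.
rng = np.random.default_rng(1)
M1, M2 = rng.normal(size=(20, 20)), rng.normal(size=(20, 20))
A, B = M1 @ M1.T, M2 @ M2.T + 1e-3 * np.eye(20)
W, ratio = trace_ratio(A, B, d=3)
print("converged trace ratio:", ratio)
```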
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in Bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied to biobrick datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated in discriminating biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be determined, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
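A small illustration, assuming hypothetical toy sequences rather than real Registry parts, of plugging a normalized edit distance into scikit-learn's Isomap via a precomputed distance matrix; Laplacian Eigenmaps could be fed the same matrix after converting distances to affinities.

```python
import numpy as np
from sklearn.manifold import Isomap

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    dp = np.zeros((m + 1, n + 1), dtype=int)
    dp[:, 0] = np.arange(m + 1)
    dp[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i, j] = min(dp[i - 1, j] + 1, dp[i, j - 1] + 1, dp[i - 1, j - 1] + cost)
    return dp[m, n]

# Hypothetical biobrick sequences; real parts would come from the Registry.
seqs = ["atgcgtacgt", "atgcgtacct", "ttgcaaacgg", "ttgcaaacgt", "ccggttaagg", "ccggttaagc"]
n = len(seqs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # Normalized edit distance as the dissimilarity between parts.
        D[i, j] = D[j, i] = edit_distance(seqs[i], seqs[j]) / max(len(seqs[i]), len(seqs[j]))

embedding = Isomap(n_components=2, n_neighbors=3, metric="precomputed").fit_transform(D)
print(embedding.shape)   # (6, 2) points ready for 2-D visualization
```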
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
High-resolution CT scanners have enabled improved detection of lung cancers. The recent release of positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study shows the efficacy of CT based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify structures embedded in the CT histogram feature space of non-small-cell lung cancer (NSCLC) in order to improve the performance in predicting the likelihood of recurrence-free survival (RFS) for patients with NSCLC.
NASA Astrophysics Data System (ADS)
Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina
2014-03-01
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD based (K=4, L=2), the ANOVA based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
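A hedged sketch of the pipeline with K=4 atoms and L=2 nonzero coefficients, on synthetic stand-in features and labels; scikit-learn's DictionaryLearning is used in place of K-SVD (both alternate sparse coding with a dictionary update, but the update rules differ), followed by leave-one-out logistic-regression scoring and ROC analysis.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))                          # stand-in for kinetic/textural/morphologic features
y = (X[:, 0] + 0.5 * rng.normal(size=60) > 0).astype(int)   # stand-in recurrence-risk labels

# Sparse representation with K=4 dictionary atoms and at most L=2 nonzero
# coefficients per sample; DictionaryLearning stands in for K-SVD here.
dico = DictionaryLearning(n_components=4, transform_algorithm="omp",
                          transform_n_nonzero_coefs=2, random_state=0)
codes = dico.fit_transform(X)                          # (60, 4) reduced feature matrix

scores = cross_val_predict(LogisticRegression(max_iter=1000), codes, y,
                           cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOO AUC:", roc_auc_score(y, scores))
```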
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang
GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecut, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.
A Fourier dimensionality reduction model for big data interferometric imaging
NASA Astrophysics Data System (ADS)
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the proposed reduction method is available on GitHub.
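A much-simplified NumPy sketch of the reduction pipeline (dirty image, then a weighted, subsampled Fourier transform). The toy measurement operator samples discrete Fourier cells with replacement, standing in for gridded visibilities, and the sampling-density weighting is only a crude stand-in for the singular-value weighting derived in the paper; image size, visibility count, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                          # image side (N*N pixels)
x = np.zeros((N, N)); x[20:28, 30:38] = 1.0     # toy sky image

# Toy "visibilities": M Fourier cells sampled with replacement, mimicking many
# more measurements than distinct gridded uv-cells (gridding/NUFFT ignored).
M = 5000
idx = rng.integers(0, N * N, size=M)
fwd = lambda img: np.fft.fft2(img).ravel()
y = fwd(x)[idx] + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))

# Dirty image = adjoint of the measurement operator applied to the data.
grid = np.zeros(N * N, dtype=complex)
np.add.at(grid, idx, y)
dirty = np.fft.ifft2(grid.reshape(N, N)) * N * N

# Reduced data vector: weighted, subsampled FFT of the dirty image.  Only the
# sampled cells are kept; the sqrt of the sampling density is a crude stand-in
# for the singular-value weighting of the measurement operator.
counts = np.bincount(idx, minlength=N * N)
sampled = counts > 0
reduced = np.fft.fft2(dirty).ravel()[sampled] / np.sqrt(counts[sampled])
print(f"{M} visibilities -> {reduced.size} reduced coefficients")
```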
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Yuan, Fang; Wang, Guangyi; Wang, Xiaowei
2017-03-01
In this paper, smooth curve models of a meminductor and a memcapacitor are designed, generalized from a memristor. Based on these models, a new five-dimensional chaotic oscillator that contains a meminductor and memcapacitor is proposed. By dimensionality reduction, this five-dimensional system can be transformed into a three-dimensional system. The main work of this paper is to give the comparisons between the five-dimensional system and its dimensionality reduction model. To investigate the dynamical behaviors of the two systems, equilibrium points and stabilities are analyzed. The bifurcation diagrams and Lyapunov exponent spectra are used to explore their properties. In addition, digital signal processing technologies are used to realize this chaotic oscillator, and chaotic sequences are generated by the experimental device, which can be used in encryption applications.
On the precision of quasi steady state assumptions in stochastic dynamics
NASA Astrophysics Data System (ADS)
Agarwal, Animesh; Adams, Rhys; Castellani, Gastone C.; Shouval, Harel Z.
2012-07-01
Many biochemical networks have complex multidimensional dynamics and there is a long history of methods that have been used for dimensionality reduction for such reaction networks. Usually a deterministic mass action approach is used; however, in small volumes, there are significant fluctuations from the mean which the mass action approach cannot capture. In such cases stochastic simulation methods should be used. In this paper, we evaluate the applicability of one such dimensionality reduction method, the quasi-steady state approximation (QSSA) [L. Michaelis and M. L. Menten, "Die Kinetik der Invertinwirkung," Biochem. Z. 49, 333-369 (1913)], for dimensionality reduction in the case of stochastic dynamics. First, the applicability of the QSSA approach is evaluated for a canonical system of enzyme reactions. Application of QSSA to such a reaction system in a deterministic setting leads to Michaelis-Menten reduced kinetics which can be used to derive the equilibrium concentrations of the reaction species. In the case of stochastic simulations, however, the steady state is characterized by fluctuations around the mean equilibrium concentration. Our analysis shows that a QSSA based approach for dimensionality reduction captures well the mean of the distribution as obtained from a full dimensional simulation but fails to accurately capture the distribution around that mean. Moreover, the QSSA approximation is not unique. We have then extended the analysis to a simple bistable biochemical network model proposed to account for the stability of synaptic efficacies, the substrate of learning and memory [J. E. Lisman, "A mechanism of memory storage insensitive to molecular turnover: A bistable autophosphorylating kinase," Proc. Natl. Acad. Sci. U.S.A. 82, 3055-3057 (1985)], 10.1073/pnas.82.9.3055. Our analysis shows that a QSSA based dimensionality reduction method results in errors as big as two orders of magnitude in predicting the residence times in the two stable states.
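For reference, the deterministic QSSA reduction itself can be written in a few lines; the sketch below (arbitrary rate constants, SciPy integration) compares the full enzyme system with its Michaelis-Menten reduction, whereas the stochastic comparison discussed above would replace the ODEs with, e.g., Gillespie simulations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rate constants for E + S <-> ES -> E + P (arbitrary units).
k1, km1, k2 = 1.0, 1.0, 0.5
E0, S0 = 1.0, 10.0

def full(t, z):
    S, ES, P = z                       # E = E0 - ES by conservation
    E = E0 - ES
    return [-k1 * E * S + km1 * ES,
            k1 * E * S - (km1 + k2) * ES,
            k2 * ES]

def qssa(t, z):
    # Michaelis-Menten reduced kinetics from the quasi-steady-state
    # assumption on the enzyme-substrate complex ES.
    S, P = z
    Km, Vmax = (km1 + k2) / k1, k2 * E0
    v = Vmax * S / (Km + S)
    return [-v, v]

t_eval = np.linspace(0, 40, 200)
sol_full = solve_ivp(full, (0, 40), [S0, 0.0, 0.0], t_eval=t_eval)
sol_red = solve_ivp(qssa, (0, 40), [S0, 0.0], t_eval=t_eval)
print("final product, full vs QSSA:", sol_full.y[2, -1], sol_red.y[1, -1])
```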
Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding
Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping
2015-01-01
Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually distribute on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed based on a statistical locally linear embedding (S-LLE) algorithm, which is an extension of LLE that exploits the fault class label information. The fault diagnosis approach first extracts the intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by feature extraction in the time domain and frequency domain and by empirical mode decomposition (EMD), and then translates the complex mode space into a salient low-dimensional feature space by the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach obviously improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
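A rough scikit-learn sketch of the reduce-then-classify pipeline on synthetic stand-in features; plain unsupervised LLE is used here because S-LLE's label-aware neighborhood construction is not part of standard libraries, and the feature dimensions and classifier are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Stand-in for high-dimensional vibration features (time/frequency/EMD statistics).
X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Plain (unsupervised) LLE stands in for S-LLE, which additionally uses the
# fault labels when building the neighborhood graph.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=5, random_state=0).fit(X_tr)

clf = KNeighborsClassifier(n_neighbors=5).fit(lle.transform(X_tr), y_tr)
print("fault-classification accuracy in the reduced space:",
      clf.score(lle.transform(X_te), y_te))
```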
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
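A small sketch of the model-based view described above: fitting an LNP filter by maximizing the Poisson log-likelihood on simulated data, with a fixed exponential nonlinearity rather than the non-parametric nonlinearity discussed in the paper; stimulus dimensions, filter, and bin count are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, D = 5000, 20
X = rng.normal(size=(T, D))                 # stimulus segments
w_true = np.zeros(D); w_true[:3] = [1.0, -0.5, 0.8]
rate = np.exp(X @ w_true - 1.0)             # LNP with exponential nonlinearity
y = rng.poisson(rate)                       # spike counts per bin

def neg_log_lik(w):
    # Poisson log-likelihood up to a constant; maximizing it is the
    # model-based counterpart of maximizing single-spike information (MID).
    eta = X @ w[:-1] + w[-1]
    return np.sum(np.exp(eta) - y * eta)

res = minimize(neg_log_lik, np.zeros(D + 1), method="L-BFGS-B")
w_hat = res.x[:-1]
print("cosine similarity with true filter:",
      w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true)))
```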
Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong
2017-09-01
Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. However, existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the T-distributed stochastic neighbor approach is first performed and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
Reduction of Large Dynamical Systems by Minimization of Evolution Rate
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.
1999-01-01
Reduction of a large system of equations to a lower-dimensional system of similar dynamics is investigated. For dynamical systems with disparate timescales, a criterion for determining redundant dimensions and a general reduction method based on the minimization of evolution rate are proposed.
Application of diffusion maps to identify human factors of self-reported anomalies in aviation.
Andrzejczak, Chris; Karwowski, Waldemar; Mikusinski, Piotr
2012-01-01
A study was conducted to investigate what factors lead pilots to submit voluntary anomaly reports regarding their flight performance. Diffusion Maps (DM) were selected as the method of choice for performing dimensionality reduction on text records for this study. Diffusion Maps have seen successful use in other domains such as image classification and pattern recognition. High-dimensionality data in the form of narrative text reports from the NASA Aviation Safety Reporting System (ASRS) were clustered and categorized by way of dimensionality reduction. Supervised analyses were performed to create a baseline document clustering system. Dimensionality reduction techniques identified concepts or keywords within records, and allowed the creation of a framework for an unsupervised document classification system. Results from the unsupervised clustering algorithm performed similarly to the supervised methods outlined in the study. The dimensionality reduction was performed on 100 of the most commonly occurring words within 126,000 text records describing commercial aviation incidents. This study demonstrates that unsupervised machine clustering and organization of incident reports is possible based on unbiased inputs. Findings from this study reinforced traditional views on what factors contribute to civil aviation anomalies; however, new associations between previously unrelated factors and conditions were also found.
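A compact NumPy implementation of the diffusion-maps embedding itself (Gaussian kernel, Markov normalization, leading eigenvectors), applied to synthetic vectors standing in for the 100-keyword term-frequency representation of ASRS reports; the bandwidth heuristic and cluster structure are illustrative only.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh

def diffusion_map(X, n_components=2, eps=None):
    """Minimal diffusion-maps embedding: Gaussian kernel, row-normalized
    Markov matrix, and its leading non-trivial eigenvectors."""
    D2 = squareform(pdist(X, "sqeuclidean"))
    if eps is None:
        eps = np.median(D2)                       # simple bandwidth heuristic
    K = np.exp(-D2 / eps)
    d = K.sum(axis=1)
    # Symmetric conjugate of the Markov matrix P = D^-1 K keeps eigh applicable.
    A = K / np.sqrt(np.outer(d, d))
    vals, vecs = eigh(A)
    order = np.argsort(vals)[::-1]
    psi = vecs[:, order] / np.sqrt(d)[:, None]    # right eigenvectors of P
    # Skip the constant first eigenvector; scale by eigenvalues.
    return psi[:, 1:n_components + 1] * vals[order][1:n_components + 1]

# Toy document vectors standing in for 100-keyword ASRS term frequencies.
rng = np.random.default_rng(0)
docs = np.vstack([rng.normal(0, 1, (50, 100)), rng.normal(3, 1, (50, 100))])
coords = diffusion_map(docs, n_components=2)
print(coords.shape)    # (100, 2) coordinates for clustering the reports
```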
A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony.
Zhang, J W; Rangan, A V
2015-04-01
In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks down to an augmented 4-dimensional system of ordinary-differential-equations. The class of integrate-and-fire networks we focus on are homogeneously-structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of 'multiple-firing-events' involving the semi-synchronous firing of many neurons. These multiple-firing-events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes ranging from homogeneous to synchronous. Our reduction is based on first principles, and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the 4-dimensional augmented ODE.
NASA Technical Reports Server (NTRS)
Pain, B.; Cunningham, T. J.; Hancock, B.; Yang, G.; Seshadri, S.; Ortiz, M.
2002-01-01
We present a new CMOS photodiode imager pixel with ultra-low read noise achieved through on-chip suppression of reset noise via column-based feedback circuitry. The noise reduction is achieved without introducing any image lag, and with insignificant reduction in quantum efficiency and full well.
Cao, Peng; Liu, Xiaoli; Yang, Jinzhu; Zhao, Dazhe; Huang, Min; Zhang, Jian; Zaiane, Osmar
2017-12-01
Alzheimer's disease (AD) has been not only a substantial financial burden to the health care system but also an emotional burden to patients and their families. Making an accurate diagnosis of AD based on brain magnetic resonance imaging (MRI) is becoming more and more critical and is increasingly emphasized at the earliest stages. However, the high dimensionality and imbalanced data issues are two major challenges in the study of computer aided AD diagnosis. The greatest limitation of existing dimensionality reduction and over-sampling methods is that they assume a linear relationship between the MRI features (predictor) and the disease status (response). To better capture the complicated but more flexible relationship, we propose multi-kernel based dimensionality reduction and over-sampling approaches. We combined Marginal Fisher Analysis with ℓ2,1-norm based multi-kernel learning (MKMFA) to achieve sparsity of regions-of-interest (ROI), which leads to simultaneously selecting a subset of the relevant brain regions and learning a dimensionality transformation. Meanwhile, a multi-kernel over-sampling (MKOS) was developed to generate synthetic instances in the optimal kernel space induced by MKMFA, so as to compensate for the class imbalanced distribution. We comprehensively evaluate the proposed models for the diagnostic classification (binary class and multi-class classification) including all subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The experimental results not only demonstrate that the proposed method has superior performance over multiple comparable methods, but also identify relevant imaging biomarkers that are consistent with prior medical knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua
2016-12-28
Rational design and construction of Pt-based porous nanostructures with large mesopores have triggered significant considerations because of their high surface area and more efficient mass transport. Hydrochloric acid-induced kinetic reduction of metal precursors in the presence of soft template F-127 and hard template tellurium nanowires has been successfully demonstrated to construct one-dimensional hierarchical porous PtCu alloy nanostructures with large mesopores. Moreover, the electrochemical experiments demonstrated that the resultant PtCu hierarchically porous nanostructures with optimized composition exhibit enhanced electrocatalytic performance for oxygen reduction reaction.
Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah
2017-02-01
Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is very crucial to the performance of target detection/recognition techniques. A Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming into a new space in which feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target oriented band reduction since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors which are most relevant to the target class, the dimension of hyperspectral data can be reduced, which presents significant advantages for near real time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) to perform target oriented band reduction. The performance of the proposed KFKT based target oriented dimensionality reduction algorithm has been tested employing two real-world hyperspectral datasets and results are reported accordingly.
NASA Astrophysics Data System (ADS)
Kusratmoko, Eko; Wibowo, Adi; Cholid, Sofyan; Pin, Tjiong Giok
2017-07-01
This paper presents the results of applying the participatory three-dimensional mapping (P3DM) method to facilitate the people of Cibanteng village in compiling a landslide disaster risk reduction program. Physical factors, such as high rainfall, topography, geology and land use, coupled with demographic and social-economic conditions, make the Cibanteng region highly susceptible to landslides. During 2013-2014, two landslides occurred, causing economic losses as a result of damage to homes and farmland. Participatory mapping is one part of community-based disaster risk reduction (CBDRR) activities, because the involvement of local communities is a prerequisite for sustainable disaster risk reduction. In this activity, participatory mapping was done in two ways, namely participatory two-dimensional mapping (P2DM), with a focus on mapping disaster areas, and participatory three-dimensional mapping (P3DM), with a focus on the entire territory of the village. Based on the results of P3DM, the ability of the communities to understand the village environment spatially was well tested and honed, which facilitates the preparation of CBDRR programs. Furthermore, the P3DM method can be applied to other disaster areas, as it becomes a medium of effective dialogue between all levels of the involved communities.
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem in comparison with the linear dimensionality reduction step in FLDA and several of its representative extensions.
Generation Algorithm of Discrete Line in Multi-Dimensional Grids
NASA Astrophysics Data System (ADS)
Du, L.; Ben, J.; Li, Y.; Wang, R.
2017-09-01
Discrete Global Grids System (DGGS) is a kind of digital multi-resolution earth reference model; in terms of structure, it is conducive to the integration and mining of geospatial big data. Vectors are one of the important types of spatial data; only by discretization can they be applied in a grid system for processing and analysis. Based on some constraint conditions, this paper puts forward a strict definition of discrete lines and builds a mathematical model of the discrete lines by a base-vector combination method. The problem of mesh discrete lines in n-dimensional grids is transformed into the problem of an optimal deviated path in (n-1) dimensions using a hyperplane, thereby realizing a dimension reduction process in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for the dimension reduction and generation of the discrete lines. The experimental results show that our algorithm can be applied not only in the two-dimensional rectangular grid, but also in the two-dimensional hexagonal grid and the three-dimensional cubic grid. Meanwhile, when our algorithm is applied in a two-dimensional rectangular grid, it produces a discrete line that is more similar to the line in Euclidean space.
NASA Astrophysics Data System (ADS)
Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh
2017-06-01
Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III, including datasets IIIa and IVa. The results show that our approach, as a dimensionality reduction technique, leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. Also the statistical analysis shows that our dimensionality reduction method performs significantly better than its competitors.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua; Engelhard, Mark H; Xia, Haibing; Du, Dan; Lin, Yuehe
2016-12-28
Rational design and construction of Pt-based porous nanostructures with large mesopores have triggered significant considerations because of their high surface area and more efficient mass transport. Hydrochloric acid-induced kinetically controlled reduction of metal precursors in the presence of soft template F-127 and hard template tellurium nanowires has been successfully demonstrated to construct one-dimensional hierarchical porous PtCu alloy nanostructures with large mesopores. Moreover, the electrochemical experiments demonstrated that the PtCu hierarchically porous nanostructures synthesized under optimized conditions exhibit enhanced electrocatalytic performance for oxygen reduction reaction in acid media.
Robust video copy detection approach based on local tangent space alignment
NASA Astrophysics Data System (ADS)
Nie, Xiushan; Qiao, Qianping
2012-04-01
We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), which is an efficient dimensionality reduction algorithm. The idea is motivated by the fact that the content of video is becoming richer and its dimensionality higher, and this high dimensionality does not lend itself to natural tools for video analysis and understanding. The proposed approach reduces the dimensionality of video content using LTSA, and then generates video fingerprints in the low-dimensional space for video copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
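A hedged sketch using scikit-learn's LTSA variant of locally linear embedding on random stand-in frame descriptors, plus a naive sliding-window matcher; the real system's frame features, fingerprint design, and dynamic window are more elaborate.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
# Stand-in for per-frame video descriptors (e.g., 256-D color/texture features
# for 400 frames); real fingerprints would be computed from decoded frames.
frames = rng.normal(size=(400, 256))

# LTSA variant of locally linear embedding; the low-dimensional coordinates
# serve as compact frame fingerprints for copy detection.
ltsa = LocallyLinearEmbedding(method="ltsa", n_neighbors=12, n_components=5,
                              random_state=0)
fingerprints = ltsa.fit_transform(frames)

def sliding_match(query, reference):
    """Naive sliding-window matching of a query fingerprint sequence against a
    reference sequence using Euclidean distance."""
    w = len(query)
    dists = [np.linalg.norm(query - reference[i:i + w])
             for i in range(len(reference) - w + 1)]
    return int(np.argmin(dists)), float(min(dists))

print(sliding_match(fingerprints[100:140], fingerprints))   # should locate offset 100
```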
Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals
Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.
2018-03-20
A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
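The final stage of the method, integrating a low-rank (sum-of-products) surrogate with one-dimensional Gauss-Hermite rules, can be sketched as follows; the compressed-sensing tensor recovery and rank-compression steps are skipped, and the separable integrand, dimension, and rank are synthetic.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Low-rank separable surrogate of a high-dimensional integrand:
#   f(x) = sum_r prod_k g[r][k](x_k), integrated against a standard normal
# weight in each of the d dimensions.  Because f is a sum of products of 1-D
# factors, the d-dimensional integral factorizes into cheap 1-D quadratures.
d, rank, n_pts = 12, 3, 20
rng = np.random.default_rng(0)
coef = rng.normal(size=(rank, d))
g = [[(lambda x, c=coef[r, k]: np.cos(c * x)) for k in range(d)] for r in range(rank)]

nodes, weights = hermgauss(n_pts)              # rule for the weight exp(-x^2)
x_std = np.sqrt(2.0) * nodes                   # change of variables to N(0, 1)
w_std = weights / np.sqrt(np.pi)

integral = 0.0
for r in range(rank):
    term = 1.0
    for k in range(d):
        term *= np.sum(w_std * g[r][k](x_std))  # 1-D Gauss-Hermite expectation
    integral += term

# Analytic check: E[cos(c Z)] = exp(-c^2 / 2) for Z ~ N(0, 1).
exact = sum(np.prod(np.exp(-coef[r] ** 2 / 2.0)) for r in range(rank))
print(integral, exact)
```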
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
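A generic illustration of the kind of comparison described, sweeping the number of retained dimensions and measuring classification accuracy; it uses synthetic features and PCA with a naive Bayes classifier, not the object-code features or the sorted-covariance and CFS variants from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Stand-in for static features extracted from compiled object code.
X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)

# Accuracy as a function of the number of retained dimensions.
for n_dim in (1, 2, 4, 8, 16):
    clf = make_pipeline(PCA(n_components=n_dim), GaussianNB())
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{n_dim:2d} dimensions -> accuracy {acc:.3f}")
```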
Network embedding-based representation learning for single cell RNA-seq data.
Li, Xiangyu; Chen, Weizheng; Chen, Yang; Zhang, Xuegong; Gu, Jin; Zhang, Michael Q
2017-11-02
Single cell RNA-seq (scRNA-seq) techniques can reveal valuable insights of cell-to-cell heterogeneities. Projection of high-dimensional data into a low-dimensional subspace is a powerful strategy in general for mining such big data. However, scRNA-seq suffers from higher noise and lower coverage than traditional bulk RNA-seq, hence bringing in new computational difficulties. One major challenge is how to deal with the frequent drop-out events. The events, usually caused by the stochastic burst effect in gene transcription and the technical failure of RNA transcript capture, often render traditional dimension reduction methods inefficient. To overcome this problem, we have developed a novel Single Cell Representation Learning (SCRL) method based on network embedding. This method can efficiently implement data-driven non-linear projection and incorporate prior biological knowledge (such as pathway information) to learn more meaningful low-dimensional representations for both cells and genes. Benchmark results show that SCRL outperforms other dimensional reduction methods on several recent scRNA-seq datasets. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Scalable Learning for Geostatistics and Speaker Recognition
2011-01-01
Both these methods have their own advantages and disadvantages. If the data is well-correlated and low-dimensional, any prior knowledge available on the data can be used to build a parametric model. In the absence of prior knowledge, non-parametric methods can be used. If the data is high-dimensional, PCA based dimensionality reduction is often the first ...
Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction
2016-02-25
We have completed a short program of theoretical research on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the area of quantum probability and quantum stochastic differential equations.
Dimensionality reduction in epidemic spreading models
NASA Astrophysics Data System (ADS)
Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.
2015-09-01
Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric features mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
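A toy version of the embedding experiment, assuming a deterministic patch-level SIR model on a ring instead of the agent-based mobile-individual simulations in the paper; each high-dimensional snapshot of patch infection levels is embedded with ISOMAP, and all parameters are illustrative.

```python
import numpy as np
from sklearn.manifold import Isomap

# Deterministic SIR metapopulation on a ring of 100 patches with diffusive
# coupling; each time snapshot is the 100-dimensional vector of patch-level
# infected fractions, standing in for individual-level epidemic data.
n_patch, T, dt = 100, 400, 0.1
beta, gamma, D = 0.6, 0.2, 0.3

S = np.ones(n_patch); I = np.zeros(n_patch); I[0] = 0.01; S -= I
snapshots = []
for _ in range(T):
    lap = np.roll(I, 1) + np.roll(I, -1) - 2 * I      # diffusion on the ring
    new_inf = beta * S * I * dt
    S, I = S - new_inf, I + new_inf - gamma * I * dt + D * lap * dt
    snapshots.append(I.copy())

# ISOMAP embedding of the high-dimensional outbreak snapshots into 3-D.
coords = Isomap(n_neighbors=10, n_components=3).fit_transform(np.array(snapshots))
print(coords.shape)        # (400, 3) low-dimensional outbreak trajectory
```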
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
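A basic sliced-inverse-regression estimator of the SDR subspace, the ingredient IRUQ builds on, written in NumPy for a synthetic QoI that depends on a two-dimensional subspace of twenty input parameters; the subsequent response-surface construction is not shown.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_dirs=2):
    """Basic SIR: estimate the sufficient dimension reduction subspace from
    the covariance of slice-wise means of the standardized inputs."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Whiten the inputs.
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ W
    # Slice on the response and average Z within each slice.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for s in slices:
        m = Z[s].mean(axis=0)
        M += len(s) / n * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original coordinates.
    vals, vecs = np.linalg.eigh(M)
    dirs = W @ vecs[:, np.argsort(vals)[::-1][:n_dirs]]
    return dirs / np.linalg.norm(dirs, axis=0)

# Toy QoI depending on a 2-D subspace of 20 uncertain parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = np.sin(X[:, 0] + 0.5 * X[:, 1]) + 0.1 * rng.normal(size=2000)
B = sliced_inverse_regression(X, y, n_slices=20, n_dirs=2)
print(B[:4].round(2))   # weight should largely concentrate on the first two parameters
```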
Sentinel Lymph Node Biopsy: Quantification of Lymphedema Risk Reduction
2006-10-01
NASA Astrophysics Data System (ADS)
Nadjafikhah, Mehdi; Jafari, Mehdi
2013-12-01
In this paper, the partially invariant solutions (PISs) method is applied in order to obtain new four-dimensional Einstein Walker manifolds. This method is based on subgroup classification for the symmetry group of partial differential equations (PDEs) and can be regarded as a generalization of the similarity reduction method. For this purpose, those cases of PISs which have the defect structure δ=1 and result from two-dimensional subalgebras are considered in the present paper. It is also shown that the obtained PISs are distinct from the invariant solutions obtained by the similarity reduction method.
Model-based reinforcement learning with dimension reduction.
Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi
2016-12-01
The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
Graphene-Based Photocatalysts for CO2 Reduction to Solar Fuel.
Low, Jingxiang; Yu, Jiaguo; Ho, Wingkei
2015-11-05
Recently, photocatalytic CO2 reduction for solar fuel production has attracted much attention because of its potential for simultaneously solving energy and global warming problems. Many studies have been conducted to prepare novel and efficient photocatalysts for CO2 reduction. Graphene, a two-dimensional material, has been increasingly used in photocatalytic CO2 reduction. In theory, graphene shows several remarkable properties, including excellent electronic conductivity, good optical transmittance, large specific surface area, and superior chemical stability. Attributing to these advantages, fabrication of graphene-based materials has been known as one of the most feasible strategies to improve the CO2 reduction performance of photocatalysts. This Perspective mainly focuses on the recent important advances in the fabrication and application of graphene-based photocatalysts for CO2 reduction to solar fuels. The existing challenges and difficulties of graphene-based photocatalysts are also discussed for future application.
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse of dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, some approaches to feature selection or feature dimensionality reduction should be considered for improving the performance of MRA based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different structures of classifiers. They are evaluated by comparison with baseline methods using sparse representation of features or without feature selection. The statistical analysis, by applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated by using the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performances, with a significant reduction in the number of features that need to be computed.
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmaps. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
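A minimal sketch of the spectral-regression idea on synthetic "pixels": class-indicator responses are orthogonalized and each projection vector is obtained by ridge regression, avoiding the dense eigen-decomposition; the exact response construction and regularizers of SRDA may differ in detail, and the data are illustrative.

```python
import numpy as np

def srda(X, y, alpha=1.0):
    """Minimal spectral-regression discriminant analysis: build class-indicator
    responses, orthogonalize them, and obtain each projection vector by ridge
    regression instead of a dense eigen-decomposition."""
    classes = np.unique(y)
    n, d = X.shape
    # Responses spanning the discriminant structure (plus the constant vector).
    Y = np.column_stack([np.ones(n)] + [(y == c).astype(float) for c in classes])
    Q, _ = np.linalg.qr(Y)
    responses = Q[:, 1:classes.size]          # drop the constant direction
    Xc = X - X.mean(axis=0)
    # Ridge-regularized least squares X w = r for each response vector r.
    A = Xc.T @ Xc + alpha * np.eye(d)
    W = np.linalg.solve(A, Xc.T @ responses)  # (d, c-1) projection matrix
    return W

# Toy "hyperspectral pixels": 200 bands, 3 classes.
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 200))
y = rng.integers(0, 3, size=600)
X = means[y] + 0.5 * rng.normal(size=(600, 200))
W = srda(X, y)
Z = (X - X.mean(axis=0)) @ W                  # (600, 2) discriminant features
print(Z.shape)
```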
Nonlinear dimensionality reduction of data lying on the multicluster manifold.
Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben
2008-08-01
A new method, called the decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster in the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations in the composition procedure. Different from other NLDR methods for multicluster data, which treat the intracluster and intercluster information jointly, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on the one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is demonstrated both theoretically and experimentally. Related strategies for automatic parameter selection are also examined.
Drug-target interaction prediction using ensemble learning and dimensionality reduction.
Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong
2017-10-01
Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the number of their interactions, is increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance of drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
Jeon, Sangchoon; Walkup, John T; Woods, Douglas W.; Peterson, Alan; Piacentini, John; Wilhelm, Sabine; Katsovich, Lily; McGuire, Joseph F.; Dziura, James; Scahill, Lawrence
2014-01-01
Objective: To compare three statistical strategies for classifying positive treatment response based on a dimensional measure (Yale Global Tic Severity Scale [YGTSS]) and a categorical measure (Clinical Global Impression-Improvement [CGI-I]). Method: Subjects (N=232; 69.4% male; ages 9-69 years) with Tourette syndrome or chronic tic disorder participated in one of two 10-week, randomized controlled trials comparing behavioral treatment to supportive therapy. The YGTSS and CGI-I were rated by clinicians blind to treatment assignment. We examined the percent reduction in the YGTSS-Total Tic Score (TTS) against Much Improved or Very Much Improved on the CGI-I, computed a signal detection analysis (SDA), and built a mixture model to classify dimensional response based on the change in the YGTSS-TTS. Results: A 25% decrease on the YGTSS-TTS predicted positive response on the CGI-I during the trial. The SDA showed that a 25% reduction in the YGTSS-TTS provided optimal sensitivity (87%) and specificity (84%) for predicting positive response. Using a mixture model without consideration of the CGI-I, dimensional response was defined by a 23% (or greater) reduction on the YGTSS-TTS. The odds ratio (OR) of positive response on the CGI-I for behavioral intervention (OR=5.68, 95% CI=[2.99, 10.78]) was greater than that of the dimensional response (OR=2.86, 95% CI=[1.65, 4.99]). Conclusion: A twenty-five percent reduction on the YGTSS-TTS is highly predictive of positive response by all three analytic methods. For trained raters, however, tic severity alone does not drive the classification of positive response. PMID:24001701
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit is crucial for obtaining robust performance in the presence of outliers.
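To make the median-based margin idea concrete, here is a small numpy sketch, a simplified reading of the Median hit and Median miss concepts rather than the paper's full embedding objective: for each sample it computes the median distance to same-class samples (Median hit) and to different-class samples (Median miss), and the difference serves as a robust per-sample margin. The toy data and function name are illustrative assumptions.

```python
import numpy as np

def median_margins(X, y):
    """Robust per-sample margins: median miss distance minus median hit distance."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    margins = np.empty(len(y))
    for i in range(len(y)):
        same = (y == y[i])
        same[i] = False                                # exclude the sample itself
        median_hit = np.median(D[i, same])             # typical distance to own class
        median_miss = np.median(D[i, y != y[i]])       # typical distance to other classes
        margins[i] = median_miss - median_hit
    return margins

# Toy usage: two Gaussian classes in 5 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
print(median_margins(X, y).mean())   # average robust margin over the data set
```

A margin-maximizing embedding would then seek projection directions that maximize the sum of these margins computed in the projected space.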
NASA Astrophysics Data System (ADS)
Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia
2016-03-01
The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose various constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgeries. According to the results, we demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.
Dimensionality reduction of collective motion by principal manifolds
NASA Astrophysics Data System (ADS)
Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.
2015-01-01
While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image
NASA Astrophysics Data System (ADS)
Yu, Zhijie; Yu, Hui; Wang, Chen-sheng
2014-11-01
Hyper-spectral remote sensing data are acquired by imaging the same area at multiple wavelengths, and a data set normally consists of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, because of the corresponding large data volume, it is very difficult to transmit and store hyper-spectral images. A dimensionality reduction technique for hyper-spectral images is desired to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, it is feasible to apply dimensionality reduction to compress the data volume. This paper proposes a novel band selection-based dimension reduction method which can adaptively select the bands that contain more information and detail. The proposed method is based on principal component analysis (PCA) and computes an index for every band. The indexes obtained are then ranked in order of magnitude from large to small. Based on a threshold, the system can adaptively and reasonably select the bands. The proposed method overcomes the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. The performance of the proposed method has been validated by several experiments. The experimental results show that the proposed algorithm can reduce the dimensions of a hyper-spectral image with little information loss by adaptively selecting the band images.
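The abstract does not give the exact index formula, so the sketch below illustrates one plausible PCA-based scoring of bands (an assumption on our part, not the paper's definition): each band is scored by the magnitude of its loadings on the leading principal components weighted by their explained variance, the scores are ranked from large to small, and the top-scoring bands are kept.

```python
import numpy as np

def select_bands(cube, n_components=5, keep_fraction=0.2):
    """cube: (rows, cols, bands) hyperspectral image; returns indices of selected bands."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)
    # PCA via SVD of the pixel-by-band matrix.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = (s ** 2) / (s ** 2).sum()
    # Score each band by variance-weighted loading magnitudes on the top components.
    scores = np.abs(Vt[:n_components]).T @ var[:n_components]
    order = np.argsort(scores)[::-1]                 # rank from large to small
    n_keep = max(1, int(keep_fraction * bands))
    return np.sort(order[:n_keep])

# Toy usage with a random 32x32 image having 100 bands.
cube = np.random.default_rng(2).normal(size=(32, 32, 100))
print(select_bands(cube, keep_fraction=0.1))
```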
Gauged supergravities from M-theory reductions
NASA Astrophysics Data System (ADS)
Katmadas, Stefanos; Tomasiello, Alessandro
2018-04-01
In supergravity compactifications, there is in general no clear prescription on how to select a finite-dimensional family of metrics on the internal space, and a family of forms on which to expand the various potentials, such that the lower-dimensional effective theory is supersymmetric. We propose a finite-dimensional family of deformations for regular Sasaki-Einstein seven-manifolds M7, relevant for M-theory compactifications down to four dimensions. It consists of integrable Cauchy-Riemann structures, corresponding to complex deformations of the Calabi-Yau cone M8 over M7. The non-harmonic forms we propose are the ones contained in one of the Kohn-Rossi cohomology groups, which is finite-dimensional and naturally controls the deformations of Cauchy-Riemann structures. The same family of deformations can also be described in terms of twisted cohomology of the base M6, or in terms of Milnor cycles arising in deformations of M8. Using existing results on SU(3) structure compactifications, we briefly discuss the reduction of M-theory on our class of deformed Sasaki-Einstein manifolds to four-dimensional gauged supergravity.
NASA Astrophysics Data System (ADS)
Dey, Pinkee; Suslov, Sergey A.
2016-12-01
A finite amplitude instability has been analysed to discover the exact mechanism leading to the appearance of stationary magnetoconvection patterns in a vertical layer of a non-conducting ferrofluid heated from the side and placed in an external magnetic field perpendicular to the walls. The physical results have been obtained using a version of a weakly nonlinear analysis that is based on the disturbance amplitude expansion. It enables a low-dimensional reduction of the full nonlinear problem in supercritical regimes away from a bifurcation point. The details of the reduction are given in comparison with traditional small-parameter expansions. It is also demonstrated that Squire's transformation can be introduced for higher-order nonlinear terms, thus reducing the full three-dimensional problem to its equivalent two-dimensional counterpart and enabling significant computational savings. The full three-dimensional instability patterns are subsequently recovered using the inverse transforms. The analysed stationary thermomagnetic instability is shown to occur as a result of a supercritical pitchfork bifurcation.
Three-dimensional mapping of the lateral ventricles in autism
Vidal, Christine N.; Nicolson, Rob; Boire, Jean-Yves; Barra, Vincent; DeVito, Timothy J.; Hayashi, Kiralee M.; Geaga, Jennifer A.; Drost, Dick J.; Williamson, Peter C.; Rajakumar, Nagalingam; Toga, Arthur W.; Thompson, Paul M.
2009-01-01
In this study, a computational mapping technique was used to examine the three-dimensional profile of the lateral ventricles in autism. T1-weighted three-dimensional magnetic resonance images of the brain were acquired from 20 males with autism (age: 10.1 ± 3.5 years) and 22 male control subjects (age: 10.7 ± 2.5 years). The lateral ventricles were delineated manually and ventricular volumes were compared between the two groups. Ventricular traces were also converted into statistical three-dimensional maps, based on anatomical surface meshes. These maps were used to visualize regional morphological differences in the thickness of the lateral ventricles between patients and controls. Although ventricular volumes measured using traditional methods did not differ significantly between groups, statistical surface maps revealed subtle, highly localized reductions in ventricular size in patients with autism in the left frontal and occipital horns. These localized reductions in the lateral ventricles may result from exaggerated brain growth early in life. PMID:18502618
Lee, Seungyeoun; Kim, Yongkang; Kwon, Min-Seok; Park, Taesung
2015-01-01
Genome-wide association studies (GWAS) have extensively analyzed single-SNP effects on a wide variety of common and complex diseases and found many genetic variants associated with diseases. However, a large portion of the genetic variation still remains unexplained. This missing heritability problem might be due to the analytical strategy that limits analyses to only single SNPs. One possible approach to the missing heritability problem is to identify multi-SNP effects or gene-gene interactions. The multifactor dimensionality reduction (MDR) method has been widely used to detect gene-gene interactions based on constructive induction, classifying high-dimensional genotype combinations into a one-dimensional variable with two attributes, high risk and low risk, for the case-control study. Many modifications of MDR have been proposed, and MDR has also been extended to the survival phenotype. In this study, we propose several extensions of MDR for the survival phenotype and compare the proposed extensions with earlier MDR through comprehensive simulation studies. PMID:26339630
Euclidean sections of protein conformation space and their implications in dimensionality reduction
Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong
2014-01-01
Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates of protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation as the local distance metric face a challenge. We use Isomap as an example to illustrate the problem. We believe that there is an implied assumption for dimensionality-reduction approaches that aim to preserve the geometric relations between objects: both the original space and the reduced space have the same kind of geometry, such as Euclidean geometry vs. Euclidean geometry or spherical geometry vs. spherical geometry. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean; thus the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R3N. We formally define the protein conformation space as the quotient space of R3N by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise root-mean-square deviation is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We demonstrate that an explicit Euclidean-based representation of protein conformation space and the local distance metric associated with it improve the quality of dimensionality reduction in the tetra-peptide and β-hairpin systems. PMID:24913095
Wing download reduction using vortex trapping plates
NASA Technical Reports Server (NTRS)
Light, Jeffrey S.; Stremel, Paul M.; Bilanin, Alan J.
1994-01-01
A download reduction technique using spanwise plates on the upper and lower wing surfaces has been examined. Experimental and analytical techniques were used to determine the download reduction obtained using this technique. Simple two-dimensional wind tunnel testing confirmed the validity of the technique for reducing two-dimensional airfoil drag. Computations using a two-dimensional Navier-Stokes analysis provided insight into the mechanism causing the drag reduction. Finally, the download reduction technique was tested using a rotor and wing to determine the benefits for a semispan configuration representative of a tilt rotor aircraft.
Manifold Learning by Preserving Distance Orders.
Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz
2014-03-01
Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
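As a concrete reading of the proposed "percentage of violated distance orders" metric (our own minimal interpretation, which may differ in detail from the paper's definition), the sketch below samples pairs of pairwise distances and counts how often the ordering in the low-dimensional embedding contradicts the ordering in the original space.

```python
import numpy as np
from scipy.spatial.distance import pdist

def violated_distance_orders(X_high, X_low, n_pairs=20000, seed=0):
    """Fraction of sampled distance-pair orderings that the embedding reverses."""
    d_high = pdist(X_high)                      # condensed pairwise distances, original space
    d_low = pdist(X_low)                        # same pairs in the embedding space
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(d_high), size=n_pairs)
    j = rng.integers(0, len(d_high), size=n_pairs)
    keep = d_high[i] != d_high[j]               # ignore exact ties in the original space
    violated = (d_high[i] < d_high[j]) != (d_low[i] < d_low[j])
    return violated[keep].mean()

# Toy usage: a trivial "embedding" that keeps only the first two coordinates.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))
print(violated_distance_orders(X, X[:, :2]))
```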
Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis
NASA Astrophysics Data System (ADS)
Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca
2017-11-01
Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
A frequency-based window width optimized two-dimensional S-Transform profilometry
NASA Astrophysics Data System (ADS)
Zhong, Min; Chen, Feng; Xiao, Chao
2017-11-01
A new scheme is proposed as a frequency-based window-width-optimized two-dimensional S-Transform profilometry, in which parameters pu and pv are introduced to control the width of a two-dimensional Gaussian window. Unlike the standard two-dimensional S-transform, which uses a Gaussian window with width proportional to the reciprocal local frequency of the tested signal, the window width of the optimized two-dimensional S-Transform varies with the pu-th (pv-th) power of the reciprocal local frequency fx (fy) in the x (y) direction. The paper gives a detailed theoretical analysis of the optimized two-dimensional S-Transform in fringe analysis as well as the characteristics of the modified Gaussian window. Simulations are used to evaluate the proposed scheme, and the results show that the new scheme has better noise reduction ability and can extract the phase distribution more precisely than the standard two-dimensional S-transform, even when the surface of the measured object varies sharply. Finally, the proposed scheme is demonstrated on three-dimensional surface reconstruction of a complex plastic cat mask to show its effectiveness.
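The abstract describes the modified Gaussian window only verbally, so the snippet below writes down one plausible form consistent with that description; the exact normalization and the default values of pu and pv are assumptions, not the paper's definitions.

```python
import numpy as np

def optimized_window(x, y, fx, fy, pu=0.8, pv=0.8):
    """2D Gaussian window whose widths scale as 1/|fx|**pu and 1/|fy|**pv.

    With pu = pv = 1 this reduces to a window whose width is proportional to
    the reciprocal local frequency, as in the standard 2D S-transform.
    """
    sigma_x = 1.0 / (np.abs(fx) ** pu)
    sigma_y = 1.0 / (np.abs(fy) ** pv)
    norm = 2.0 * np.pi * sigma_x * sigma_y
    return np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2)) / norm
```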
Face recognition based on two-dimensional discriminant sparse preserving projection
NASA Astrophysics Data System (ADS)
Zhang, Dawei; Zhu, Shanan
2018-04-01
In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of data, 2DDSPP constructs within-class and between-class affinity graphs by solving constrained least squares (LS) and l1-norm minimization problems, respectively. Operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of samples while pushing samples from different classes apart. The experimental results on the PIE and AR face databases show that 2DDSPP can achieve better recognition performance.
ERIC Educational Resources Information Center
Walker, Melanie; McLean, Monica; Dison, Arona; Peppin-Vaughan, Rosie
2009-01-01
This paper reports on a research project investigating the role of universities in South Africa in contributing to poverty reduction through the quality of their professional education programmes. The focus here is on theorising and the early operationalisation of multi-layered, multi-dimensional transformation based on ideas from Amartya Sen's…
A Corresponding Lie Algebra of a Reductive homogeneous Group and Its Applications
NASA Astrophysics Data System (ADS)
Zhang, Yu-Feng; Wu, Li-Xin; Rui, Wen-Juan
2015-05-01
With the help of a Lie algebra of a reductive homogeneous space G/K, where G is a Lie group and K is the resulting isotropy group, we introduce a Lax pair for which an expanding (2+1)-dimensional integrable hierarchy is obtained by applying the binormial-residue representation (BRR) method, whose Hamiltonian structure is derived from the trace identity for deducing (2+1)-dimensional integrable hierarchies, which was proposed by Tu, et al. We further consider some reductions of the expanding integrable hierarchy obtained in the paper. The first reduction is just the (2+1)-dimensional AKNS hierarchy, while the second-type reduction reveals an integrable coupling of the (2+1)-dimensional AKNS equation (also called the Davey-Stewartson hierarchy), a kind of (2+1)-dimensional Schrödinger equation, which was once reobtained by Tu, Feng and Zhang. It is interesting that a new (2+1)-dimensional integrable nonlinear coupled equation is generated from the reduction of part of the (2+1)-dimensional integrable coupling, which is further reduced to the standard (2+1)-dimensional diffusion equation along with a parameter. In addition, the well-known (1+1)-dimensional AKNS hierarchy and the (1+1)-dimensional nonlinear Schrödinger equation are special cases of the (2+1)-dimensional expanding integrable hierarchy. Finally, we discuss a few discrete difference equations of the diffusion equation whose stabilities are analyzed by making use of the von Neumann condition and the Fourier method. Some numerical solutions of a special stationary initial value problem of the (2+1)-dimensional diffusion equation are obtained and the resulting convergence and estimation formula are investigated. Supported by the Innovation Team of Jiangsu Province hosted by China University of Mining and Technology (2014), the National Natural Science Foundation of China under Grant No. 11371361, the Fundamental Research Funds for the Central Universities (2013XK03), and the Natural Science Foundation of Shandong Province under Grant No. ZR2013AL016.
Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P
2015-08-01
The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving the reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selected patients with complex malunions around the tibial plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1995-01-01
A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be used to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, which is empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-infinite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, of which samples are included in this report.
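For readers who want to experiment with the constant-property, one-dimensional semi-infinite model described above, here is a short Python sketch using one widely used discretization of that solution (the Cook-Felderman form); it is illustrative only and is not a transcription of the FORTRAN code in the report, and the synthetic temperature history and property values are assumptions.

```python
import numpy as np

def heat_flux_semi_infinite(t, T, rho, c, k):
    """Surface heat flux from a surface-temperature history on a semi-infinite solid.

    t, T      : arrays of sample times [s] and surface temperatures [K]
    rho, c, k : substrate density [kg/m^3], specific heat [J/(kg K)],
                thermal conductivity [W/(m K)], assumed constant.
    Returns the heat flux at each sample time [W/m^2].
    """
    coeff = 2.0 * np.sqrt(rho * c * k / np.pi)
    q = np.zeros_like(T, dtype=float)
    for n in range(1, len(t)):
        dT = np.diff(T[: n + 1])                               # temperature increments
        denom = np.sqrt(t[n] - t[1 : n + 1]) + np.sqrt(t[n] - t[: n])
        q[n] = coeff * np.sum(dT / denom)
    return q

# Synthetic usage: a sqrt(t) temperature rise on a glass-like substrate.
t = np.linspace(0.0, 1.0, 201)
T = 300.0 + 50.0 * np.sqrt(t)
print(heat_flux_semi_infinite(t, T, rho=2500.0, c=750.0, k=1.4)[-1])
```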
Fan, Zheyong; Zheng, Jiansen; Wang, Hui-Qiong; Zheng, Jin-Cheng
2012-10-16
We show that a certain three-dimensional (3D) superlattice nanostructure based on Bi2Te3 topological insulator thin films has better thermoelectric performance than two-dimensional (2D) thin films. The 3D superlattice shows a predicted peak ZT value of approximately 6 for gapped surface states at room temperature and retains a high figure of merit ZT of approximately 2.5 for gapless surface states. In contrast, 2D thin films with gapless surface states show no advantage over bulk Bi2Te3. The enhancement of the thermoelectric performance originates from a combination of the reduction of lattice thermal conductivity by phonon-interface scattering, the high mobility of the topologically protected surface states, the enhancement of the Seebeck coefficient, and the reduction of electron thermal conductivity by energy filtering. Our study shows that the nanostructure design of topological insulators provides a possible new way of ZT enhancement.
Multispectral x-ray CT: multivariate statistical analysis for efficient reconstruction
NASA Astrophysics Data System (ADS)
Kheirabadi, Mina; Mustafa, Wail; Lyksborg, Mark; Lund Olsen, Ulrik; Bjorholm Dahl, Anders
2017-10-01
Recent developments in multispectral X-ray detectors allow for efficient identification of materials based on their chemical composition. This has a range of applications, including security inspection, which is our motivation. In this paper, we analyze data from a tomographic setup employing the MultiX detector, which records projection data in 128 energy bins covering the range from 20 to 160 keV. Obtaining all information from these data requires reconstructing 128 tomograms, which is computationally expensive. Instead, we propose to reduce the dimensionality of the projection data prior to reconstruction and to reconstruct from the reduced data. We analyze three linear methods for dimensionality reduction using a dataset with 37 equally spaced projection angles. Four bottles with different materials are recorded, for which we obtain similar discrimination of their contents using a greatly reduced subset of tomograms compared with the 128 tomograms that would otherwise be needed without dimensionality reduction.
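A minimal sketch of the kind of linear reduction described here, using PCA on the energy dimension of the projection data (one of several linear methods the paper evaluates; the array shapes and component count are hypothetical): the 128 energy bins are compressed to a few components, and each component sinogram can then be reconstructed with a standard scalar CT algorithm.

```python
import numpy as np

def reduce_energy_bins(projections, n_components=3):
    """projections: (n_angles, n_detectors, n_bins) multispectral sinogram data.

    Returns (reduced, components): `reduced` has shape
    (n_angles, n_detectors, n_components); each reduced channel can be fed to a
    standard scalar CT reconstruction instead of reconstructing all bins.
    """
    a, d, b = projections.shape
    X = projections.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                    # (n_components, n_bins)
    reduced = (Xc @ components.T).reshape(a, d, n_components)
    return reduced, components

# Toy usage: 37 angles, 256 detector pixels, 128 energy bins.
sino = np.random.default_rng(4).normal(size=(37, 256, 128))
red, comp = reduce_energy_bins(sino)
print(red.shape)  # (37, 256, 3)
```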
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giannessi, Luca; Quattromini, Marcello
1997-06-01
We describe the model for the simulation of charged beam dynamics in radiofrequency injectors used in the three dimensional code TREDI, where the inclusion of space charge fields is obtained by means of the Lienard-Wiechert retarded potentials. The problem of charge screening is analyzed in covariant form and some general recipes for charge assignment and noise reduction are given.
Content Abstract Classification Using Naive Bayes
NASA Astrophysics Data System (ADS)
Latif, Syukriyanto; Suwardoyo, Untung; Aldrin Wihelmus Sanadi, Edwin
2018-03-01
This study aims to classify abstract content based on the use of the most frequent words in the abstracts of English-language journals. This research uses text mining technology, which extracts text data to find information in a set of documents. Abstract contents of 120 documents were downloaded from www.computer.org. The data are grouped into three categories: DM (Data Mining), ITS (Intelligent Transport System) and MM (Multimedia). The system is built using the naive Bayes algorithm to classify journal abstracts, and the feature selection process uses term weighting to give weight to each word. A dimensionality reduction technique removes words that rarely appear in the documents, with reduction parameters tested from 10% to 90% of the 5,344 words. The performance of the classification system is tested using a confusion matrix on training and test data. The results show that the best classification results were obtained with 75% of the data used for training and 25% for testing. Accuracy rates for the DM, ITS and MM categories were 100%, 100% and 86%, respectively, with a dimensionality reduction parameter of 30% and a learning rate between 0.1 and 0.5.
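A minimal scikit-learn sketch of the described pipeline, under assumptions about details the abstract leaves open (TF-IDF as the term-weighting scheme, a document-frequency cutoff as the dimensionality reduction step, and a synthetic toy corpus in place of the 120 downloaded abstracts); the category names follow the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Toy stand-in corpus: 120 "abstracts" labeled DM, ITS, or MM.
abstracts = ["data mining of large transaction logs and clustering of records",
             "vehicle routing with real-time traffic sensors on road networks",
             "video streaming quality and audio coding over wireless networks"] * 40
labels = ["DM", "ITS", "MM"] * 40

# min_df drops words that rarely appear, playing the role of the
# dimensionality reduction step described in the abstract.
model = make_pipeline(TfidfVectorizer(min_df=2), MultinomialNB())

X_train, X_test, y_train, y_test = train_test_split(
    abstracts, labels, train_size=0.75, random_state=0, stratify=labels)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred, labels=["DM", "ITS", "MM"]))
```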
Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra
2017-01-01
The interpretation of high-speed optical coherence tomography (OCT) images is hindered by large speckle noise. To address this problem, this paper proposes a new method using a two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers in 2D spectral-domain OCT images. For this purpose, we take the curvelet transform of the noisy image. In the next step, noisy sub-bands of different scales and rotations are separately thresholded with an adaptive data-driven thresholding method; then, each thresholded sub-band is denoised based on K-SVD dictionary learning with a variable-size initial dictionary dependent on the size of the curvelet coefficient matrix in each sub-band. We also modify each coefficient matrix to enhance intra-retinal layers, with noise suppression at the same time. We demonstrate the ability of the proposed algorithm in speckle noise reduction on 100 publicly available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD); improvements of the contrast-to-noise ratio from 1.27 to 5.12 and of the mean-to-standard-deviation ratio from 3.20 to 14.41 are obtained.
NASA Astrophysics Data System (ADS)
Nicolini, Paolo; Frezzato, Diego
2013-06-01
Simplification of chemical kinetics description through dimensional reduction is particularly important to achieve an accurate numerical treatment of complex reacting systems, especially when stiff kinetics are considered and a comprehensive picture of the evolving system is required. To this aim several tools have been proposed in the past decades, such as sensitivity analysis, lumping approaches, and exploitation of time-scale separation. In addition, there are methods based on the existence of the so-called slow manifolds, which are hyper-surfaces of lower dimension than that of the whole phase space and in whose neighborhood the slow evolution occurs after an initial fast transient. On the other hand, all tools contain to some extent a degree of subjectivity which seems to be irremovable. With reference to macroscopic and spatially homogeneous reacting systems under isothermal conditions, in this work we adopt a phenomenological approach to let the dimensional reduction emerge by itself from the mathematical structure of the evolution law. By transforming the original system of polynomial differential equations, which describes the chemical evolution, into a universal quadratic format, and making a direct inspection of the high-order time-derivatives of the new dynamic variables, we formulate a conjecture which leads to the concept of an "attractiveness" region in the phase space where a well-defined state-dependent rate function ω has the simple evolution ω̇ = -ω² along any trajectory up to the stationary state. This constitutes, by itself, a drastic dimensional reduction from a system of N-dimensional equations (N being the number of chemical species) to a one-dimensional and universal evolution law for such a characteristic rate. Step-by-step numerical inspections on model kinetic schemes are presented. In the companion paper [P. Nicolini and D. Frezzato, J. Chem. Phys. 138, 234102 (2013); doi:10.1063/1.4809593], this outcome will be naturally related to the appearance (and hence, to the definition) of the slow manifolds.
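For completeness, the closed-form consequence of that one-dimensional evolution law, obtained by a standard separation-of-variables integration and stated here in our own notation, is:

```latex
\dot{\omega} = -\omega^{2}
\quad\Longrightarrow\quad
\int_{\omega_0}^{\omega(t)} \frac{d\omega}{\omega^{2}} = -\int_{0}^{t} dt'
\quad\Longrightarrow\quad
\omega(t) = \frac{\omega_0}{1 + \omega_0\, t},
```

so along any trajectory inside the attractiveness region the characteristic rate decays algebraically toward the stationary state, independently of the number N of chemical species.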
Gönen, Mehmet
2014-01-01
Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862
Metric dimensional reduction at singularities with implications to Quantum Gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoica, Ovidiu Cristinel, E-mail: holotronix@gmail.com
2014-08-15
A series of old and recent theoretical observations suggests that the quantization of gravity would be feasible, and some problems of Quantum Field Theory would go away if, somehow, the spacetime would undergo a dimensional reduction at high energy scales. But an identification of the deep mechanism causing this dimensional reduction would still be desirable. The main contribution of this article is to show that dimensional reduction effects are due to General Relativity at singularities, and do not need to be postulated ad-hoc. Recent advances in understanding the geometry of singularities do not require modification of General Relativity, being just non-singular extensions of its mathematics to the limit cases. They turn out to work fine for some known types of cosmological singularities (black holes and FLRW Big-Bang), allowing a choice of the fundamental geometric invariants and physical quantities which remain regular. The resulting equations are equivalent to the standard ones outside the singularities. One consequence of this mathematical approach to the singularities in General Relativity is a special, (geo)metric type of dimensional reduction: at singularities, the metric tensor becomes degenerate in certain spacetime directions, and some properties of the fields become independent of those directions. Effectively, it is like one or more dimensions of spacetime just vanish at singularities. This suggests that it is worth exploring the possibility that the geometry of singularities leads naturally to the spontaneous dimensional reduction needed by Quantum Gravity. Highlights: • The singularities we introduce are described by finite geometric/physical objects. • Our singularities are accompanied by dimensional reduction effects. • They affect the metric, the measure, the topology, the gravitational DOF (Weyl = 0). • Effects proposed in other approaches to Quantum Gravity are obtained naturally. • The geometric dimensional reduction obtained opens new ways for Quantum Gravity.
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
Kent, Jack W
2016-02-03
New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of the multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate the power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to its integrating nature of the system reduction.
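A stripped-down illustration of the network-community step, using a generic modularity-based community detector from networkx rather than whatever detector the authors used, and with synthetic vortex positions, strengths, and interaction weights: vortical elements become nodes, pairwise interaction magnitudes become edge weights, and community centroids give the reduced degrees of freedom.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(5)
n = 40
positions = rng.uniform(-1, 1, size=(n, 2))        # point-vortex positions (synthetic)
strengths = rng.uniform(0.5, 1.5, size=n)          # circulation magnitudes (synthetic)

G = nx.Graph()
for i in range(n):
    for j in range(i + 1, n):
        r = np.linalg.norm(positions[i] - positions[j])
        # Synthetic interaction weight, loosely modeled on induced velocity ~ Gamma / r.
        G.add_edge(i, j, weight=strengths[i] * strengths[j] / max(r, 1e-3))

communities = greedy_modularity_communities(G, weight="weight")
centroids = [positions[list(c)].mean(axis=0) for c in communities]
print(len(communities), "communities; centroid coordinates:")
print(np.round(centroids, 3))
```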
Office-Based Three-Dimensional Printing Workflow for Craniomaxillofacial Fracture Repair.
Elegbede, Adekunle; Diaconu, Silviu C; McNichols, Colton H L; Seu, Michelle; Rasko, Yvonne M; Grant, Michael P; Nam, Arthur J
2018-03-08
Three-dimensional printing of patient-specific models is being used in various aspects of craniomaxillofacial reconstruction. Printing is typically outsourced to off-site vendors, with the main disadvantages being increased costs and production time. Office-based 3-dimensional printing has been proposed as a means to reduce costs and delays, but remains largely underused because of the perception among surgeons that it is futuristic, highly technical, and prohibitively expensive. The goal of this report is to demonstrate the feasibility and ease of incorporating in-office 3-dimensional printing into the standard workflow for facial fracture repair. Patients with complex mandible fractures requiring open repair were identified. Open-source software was used to create virtual 3-dimensional skeletal models of the initial injury pattern and then of the ideally reduced fractures based on preoperative computed tomography (CT) scan images. The virtual 3-dimensional skeletal models were then printed in our office using a commercially available 3-dimensional printer and bioplastic filament. The 3-dimensional skeletal models were used as templates to bend and shape titanium plates that were subsequently used for intraoperative fixation. Average print time was 6 hours. Excluding the one-time cost of the 3-dimensional printer of $2500, roughly the cost of a single commercially produced model, the average material cost to print one model mandible was $4.30. Postoperative CT imaging demonstrated precise, predicted reduction in all patients. Office-based 3-dimensional printing of skeletal models can be routinely used in repair of facial fractures in an efficient and cost-effective manner.
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of the neighborhood; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In this framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI data sets, and the Georgia Tech face database show that the proposed framework can well address the issues mentioned above.
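A minimal sketch of the matrix-exponential idea, in our own simplified reading applied to a plain Gaussian similarity matrix rather than to the specific LPP, UDP, or MFA extensions in the paper: exponentiating the similarity structure yields a symmetric positive-definite matrix whose leading eigenvectors give the embedding, which is what sidesteps the small sample size issue.

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.distance import pdist, squareform

def exponential_embedding(X, n_components=2, sigma=1.0):
    """Embed rows of X via eigenvectors of the matrix exponential of a similarity matrix."""
    W = np.exp(-squareform(pdist(X)) ** 2 / (2 * sigma ** 2))  # pairwise similarities
    np.fill_diagonal(W, 0.0)
    E = expm(W)                       # symmetric and positive definite by construction
    vals, vecs = np.linalg.eigh(E)
    return vecs[:, -n_components:]    # eigenvectors of the largest eigenvalues

Z = exponential_embedding(np.random.default_rng(6).normal(size=(100, 10)))
print(Z.shape)  # (100, 2)
```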
Cattaert, Tom; Calle, M. Luz; Dudek, Scott M.; Mahachie John, Jestinah M.; Van Lishout, François; Urrea, Victor; Ritchie, Marylyn D.; Van Steen, Kristel
2010-01-01
Analyzing the combined effects of genes and/or environmental factors on the development of complex diseases is a great challenge from both the statistical and computational perspective, even using a relatively small number of genetic and non-genetic exposures. Several data mining methods have been proposed for interaction analysis, among them the Multifactor Dimensionality Reduction method (MDR), which has proven its utility in a variety of theoretical and practical settings. Model-Based Multifactor Dimensionality Reduction (MB-MDR), a relatively new MDR-based technique that is able to unify the best of both the non-parametric and parametric worlds, was developed to address some of the remaining concerns that go along with an MDR analysis. These include the restriction to univariate, dichotomous traits, the absence of flexible ways to adjust for lower-order effects and important confounders, and the difficulty in highlighting epistasis effects when too many multi-locus genotype cells are pooled into two new genotype groups. Whereas the true value of MB-MDR can only reveal itself through extensive applications of the method in a variety of real-life scenarios, here we investigate the empirical power of MB-MDR to detect gene-gene interactions in the absence of any noise and in the presence of genotyping error, missing data, phenocopy, and genetic heterogeneity. For the considered simulation settings, we show that the power is generally higher for MB-MDR than for MDR, in particular in the presence of genetic heterogeneity, phenocopy, or low minor allele frequencies. PMID:21158747
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Youngsoo; Carlberg, Kevin Thomas
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
A central goal of human genetics is to identify and characterize susceptibility genes for common complex human diseases. An important challenge in this endeavor is the modeling of gene-gene interaction, or epistasis, which can result in non-additivity of genetic effects. The multifactor dimensionality reduction (MDR) method was developed as a machine learning alternative to parametric logistic regression for detecting interactions in the absence of significant marginal effects. The goal of MDR is to reduce the dimensionality inherent in modeling combinations of polymorphisms using a computational approach called constructive induction. Here, we propose a Robust Multifactor Dimensionality Reduction (RMDR) method that performs constructive induction using a Fisher's exact test rather than a predetermined threshold. The advantage of this approach is that only those genotype combinations that are determined to be statistically significant are considered in the MDR analysis. We use two simulation studies to demonstrate that this approach will increase the success rate of MDR when there are only a few genotype combinations that are significantly associated with case-control status. We show that there is no loss of success rate when this is not the case. We then apply the RMDR method to the detection of gene-gene interactions in genotype data from a population-based study of bladder cancer in New Hampshire. PMID:21091664
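To make the constructive-induction step concrete, here is a small sketch in the spirit of RMDR; the significance threshold, genotype coding, and high/low-risk rule are simplifying assumptions rather than the authors' exact procedure. Each two-locus genotype combination is labeled high-risk, low-risk, or left unclassified depending on a Fisher's exact test of its case/control counts.

```python
import numpy as np
from scipy.stats import fisher_exact

def rmdr_cells(geno_a, geno_b, case, alpha=0.05):
    """Label genotype combinations of two SNPs as 'high', 'low', or 'unclassified'.

    geno_a, geno_b : arrays of genotypes coded 0/1/2
    case           : array of 0 (control) / 1 (case)
    """
    n_case, n_ctrl = case.sum(), (1 - case).sum()
    labels = {}
    for ga in (0, 1, 2):
        for gb in (0, 1, 2):
            in_cell = (geno_a == ga) & (geno_b == gb)
            a = int((in_cell & (case == 1)).sum())      # cases in the cell
            b = int((in_cell & (case == 0)).sum())      # controls in the cell
            table = [[a, b], [n_case - a, n_ctrl - b]]
            _, p = fisher_exact(table)
            if p < alpha:
                # High risk if the cell's case:control ratio exceeds the overall ratio.
                labels[(ga, gb)] = "high" if a * n_ctrl > b * n_case else "low"
            else:
                labels[(ga, gb)] = "unclassified"        # not statistically significant
    return labels

rng = np.random.default_rng(7)
g1, g2 = rng.integers(0, 3, 500), rng.integers(0, 3, 500)
status = rng.integers(0, 2, 500)
print(rmdr_cells(g1, g2, status))
```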
Observational Needs for Four-Dimensional Air Quality Characterization
Surface-based monitoring programs provide the foundation for associating air pollution and causal effects in human health studies, and they support the development of air quality standards and the preparation of emission reduction strategies. While surface oriented networks remai...
PCA based feature reduction to improve the accuracy of decision tree c4.5 classification
NASA Astrophysics Data System (ADS)
Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.
2018-03-01
Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not have a significant impact on the establishment of the decision tree in terms of removing irrelevant features. This leads to a major problem in decision tree classification called over-fitting, which results from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is an important issue in classification modeling; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction to perform non-correlated feature selection and the Decision Tree C4.5 algorithm for the classification. From experiments conducted using the available UCI cervical cancer data set with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework is able to enhance classification accuracy, achieving an accuracy rate of 90.70%.
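A compact scikit-learn sketch of the proposed framework under stated assumptions: scikit-learn's DecisionTreeClassifier with the entropy criterion stands in for C4.5, and synthetic data of the same shape replace the cervical-cancer data set. PCA removes correlated directions before the tree is grown, and accuracy, precision, and specificity are reported for both pipelines.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, confusion_matrix

# Synthetic stand-in for the 858 x 36 cervical cancer data set.
X, y = make_classification(n_samples=858, n_features=36, n_informative=10,
                           n_redundant=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

plain_tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
pca_tree = make_pipeline(PCA(n_components=10),
                         DecisionTreeClassifier(criterion="entropy", random_state=0))

for name, model in [("tree only", plain_tree), ("PCA + tree", pca_tree)]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(name,
          "accuracy", round(accuracy_score(y_te, pred), 3),
          "precision", round(precision_score(y_te, pred), 3),
          "specificity", round(tn / (tn + fp), 3))
```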
Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models
Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam
2016-01-01
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
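The two quantities compared in the abstract above can be sketched with an off-the-shelf factor analysis; the following is an assumed, simplified computation (not the authors' pipeline), with Poisson noise standing in for spike counts.

```python
# Hedged sketch: factor analysis of a trials x neurons count matrix, reporting
# percent shared variance and a 95%-threshold estimate of shared dimensionality.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
counts = rng.poisson(5.0, size=(200, 50)).astype(float)   # stand-in spike counts

fa = FactorAnalysis(n_components=10).fit(counts)
L = fa.components_                          # (factors x neurons) loading matrix
shared = np.sum(L ** 2, axis=0)             # shared variance per neuron
private = fa.noise_variance_                # independent (private) variance per neuron
percent_shared = 100.0 * shared.sum() / (shared + private).sum()

# Shared dimensionality: number of factors needed for ~95% of the shared variance.
eigvals = np.linalg.eigvalsh(L.T @ L)[::-1]
dim_shared = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1)
print(percent_shared, dim_shared)
```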
Wang, Qiong-Hua; Li, Xiao-Fang; Zhou, Lei; Wang, Ai-Hong; Li, Da-Hai
2011-03-01
A method is proposed to alleviate the cross talk in multiview autostereoscopic three-dimensional displays based on a lenticular sheet. We analyze the positional relationship between subpixels on the image panel and the lenticular sheet. According to this relationship, optimal synthetic images are synthesized to minimize cross talk by correcting the positions of subpixels on the image panel. Experimental results show that the proposed method significantly reduces the cross talk of view images and improves the quality of stereoscopic images. © 2010 Optical Society of America
NASA Astrophysics Data System (ADS)
Ray, S. Saha
2018-04-01
In this paper, the symmetry analysis and similarity reduction of the (2+1)-dimensional Bogoyavlensky-Konopelchenko (B-K) equation are investigated by means of the geometric approach of an invariance group, which is equivalent to the classical Lie symmetry method. Using the extended Harrison and Estabrook differential forms approach, the infinitesimal generators of the (2+1)-dimensional B-K equation are obtained. First, the vector field associated with the Lie group of transformations is derived; then the symmetry reduction and the corresponding explicit exact solution of the (2+1)-dimensional B-K equation are obtained.
Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi
2016-01-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not been assessed nor its implementation as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
Clark, Neil R; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D; Jones, Matthew R; Ma'ayan, Avi
2015-11-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not been assessed nor its implementation as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with the PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracies of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low-dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable to facilitate managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
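A hedged sketch of the overall idea (not the authors' implementation): reduce high-dimensional surface snapshots with kernel PCA, predict ahead in the low-dimensional feature space, then map back. Note that scikit-learn's `inverse_transform` uses a learned (kernel ridge) pre-image rather than the fixed-point iteration described in the abstract; the data and look-ahead lag are placeholders.

```python
# Kernel PCA reduction + autoregressive prediction in feature space + pre-image recovery.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
states = rng.normal(size=(300, 5000))          # stand-in for vectorized surface snapshots

kpca = KernelPCA(n_components=1, kernel="rbf", gamma=1e-4,
                 fit_inverse_transform=True, alpha=1e-3)
z = kpca.fit_transform(states)                 # one-dimensional feature trajectory

lag = 4                                        # e.g. a 4-sample (~200 ms) look-ahead
pred = Ridge(alpha=1.0).fit(z[:-lag], z[lag:]) # simple predictor in the feature space
z_future = pred.predict(z[-1:])
state_future = kpca.inverse_transform(z_future)  # recover the high-dimensional state
print(state_future.shape)
```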
Krivov, Sergei V
2011-07-01
Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game--the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.
NASA Astrophysics Data System (ADS)
Krivov, Sergei V.
2011-07-01
Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game—the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.
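As a minimal illustration of the random-walk description used in the abstracts above, the free-energy profile along a one-dimensional descriptive variable can be estimated from its sampled distribution; the trajectory below is a stand-in, not data from the paper.

```python
# Estimate F(x) = -ln p(x) (in units of kT, up to an additive constant) from a trajectory
# of a reduced one-dimensional variable, e.g. the constructed chess variable.
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=10000))          # stand-in trajectory of the reduced variable

hist, edges = np.histogram(x, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
free_energy = -np.log(hist[mask])
for c, f in zip(centers[mask][:5], free_energy[:5]):
    print(f"x = {c:8.2f}   F(x) = {f:6.2f} kT")
```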
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tan, Meng-Chwan; Vasko, Petr; Zhao, Qin
2017-05-01
We perform a series of dimensional reductions of the 6d \mathcal{N} = (2, 0) SCFT on S^2 × Σ × I × S^1 down to 2d on Σ. The reductions are performed in three steps: (i) a reduction on S^1 (accompanied by a topological twist along Σ) leading to a supersymmetric Yang-Mills theory on S^2 × Σ × I, (ii) a further reduction on S^2 resulting in a complex Chern-Simons theory defined on Σ × I, with the real part of the complex Chern-Simons level being zero and the imaginary part being proportional to the ratio of the radii of S^2 and S^1, and (iii) a final reduction to the boundary modes of the complex Chern-Simons theory with the Nahm pole boundary condition at both ends of the interval I, which gives rise to a complex Toda CFT on the Riemann surface Σ. As the reduction of the 6d theory on Σ would give rise to an \mathcal{N} = 2 supersymmetric theory on S^2 × I × S^1, our results imply a 4d-2d duality between a four-dimensional \mathcal{N} = 2 supersymmetric theory with boundary and the two-dimensional complex Toda theory.
Rydzewski, J; Nowak, W
2016-04-12
In this work we propose an application of a nonlinear dimensionality reduction method to represent the high-dimensional configuration space of the ligand-protein dissociation process in a manner facilitating interpretation. Rugged ligand expulsion paths are mapped into 2-dimensional space. The mapping retains the main structural changes occurring during the dissociation. The topological similarity of the reduced paths may be easily studied using the Fréchet distances, and we show that this measure facilitates machine learning classification of the diffusion pathways. Further, low-dimensional configuration space allows for identification of residues active in transport during the ligand diffusion from a protein. The utility of this approach is illustrated by examination of the configuration space of cytochrome P450cam involved in expulsing camphor by means of enhanced all-atom molecular dynamics simulations. The expulsion trajectories are sampled and constructed on-the-fly during molecular dynamics simulations using the recently developed memetic algorithms [Rydzewski, J.; Nowak, W. J. Chem. Phys. 2015, 143(12), 124101]. We show that the memetic algorithms are effective for enforcing the ligand diffusion and cavity exploration in the P450cam-camphor complex. Furthermore, we demonstrate that machine learning techniques are helpful in inspecting ligand diffusion landscapes and provide useful tools to examine structural changes accompanying rare events.
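The path-similarity measure mentioned above can be illustrated with the standard discrete Fréchet distance (the Eiter-Mannila dynamic program); this is a generic sketch on made-up 2-D paths, not the authors' code, and the resulting distance matrix could then feed any classifier or clustering step.

```python
# Discrete Fréchet distance between two reduced (2-D) paths.
import numpy as np

def discrete_frechet(P, Q):
    """P: (n, 2) and Q: (m, 2) arrays of points along two reduced paths."""
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

path_a = np.column_stack([np.linspace(0, 1, 50), np.zeros(50)])
path_b = np.column_stack([np.linspace(0, 1, 70), 0.1 * np.sin(np.linspace(0, 3, 70))])
print(discrete_frechet(path_a, path_b))
```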
Learning an intrinsic-variable preserving manifold for dynamic visual tracking.
Qiao, Hong; Zhang, Peng; Zhang, Bo; Zheng, Suiwu
2010-06-01
Manifold learning is a hot topic in the field of computer science, particularly since nonlinear dimensionality reduction based on manifold learning was proposed in Science in 2000. The work has achieved great success. The main purpose of current manifold-learning approaches is to search for independent intrinsic variables underlying high dimensional inputs which lie on a low dimensional manifold. In this paper, a new manifold is built up in the training step of the process, on which the input training samples are set to be close to each other if the values of their intrinsic variables are close to each other. Then, the process of dimensionality reduction is transformed into a procedure of preserving the continuity of the intrinsic variables. By utilizing the new manifold, the dynamic tracking of a human who can move and rotate freely is achieved. From the theoretical point of view, it is the first approach to transfer the manifold-learning framework to dynamic tracking. From the application point of view, a new and low dimensional feature for visual tracking is obtained and successfully applied to the real-time tracking of a free-moving object from a dynamic vision system. Experimental results from a dynamic tracking system which is mounted on a dynamic robot validate the effectiveness of the new algorithm.
Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael
2014-10-01
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.
On the reduction of 4d $\mathcal{N}=1$ theories on $\mathbb{S}^2$
Gadde, Abhijit; Razamat, Shlomo S.; Willett, Brian
2015-11-24
Here, we discuss reductions of general $\mathcal{N}=1$ four-dimensional gauge theories on $\mathbb{S}^2$. The effective two-dimensional theory one obtains depends on the details of the coupling of the theory to background fields, which can be translated to a choice of R-symmetry. We argue that, for special choices of R-symmetry, the resulting two-dimensional theory has a natural interpretation as an $\mathcal{N}=(0,2)$ gauge theory. As an application of our general observations, we discuss reductions of $\mathcal{N}=1$ and $\mathcal{N}=2$ dualities and argue that they imply certain two-dimensional dualities.
Flight tests of external modifications used to reduce blunt base drag
NASA Technical Reports Server (NTRS)
Powers, Sheryll Goecke
1988-01-01
The effectiveness of a trailing disk (the trapped vortex concept) in reducing the blunt base drag of an 8-in diameter body of revolution was studied from measurements made both in flight and in full-scale wind-tunnel tests. The experiment demonstrated the significant base drag reduction capability of the trailing disk to Mach 0.93. The maximum base drag reduction obtained from a cavity tested on the flight body of revolution was not significant. The effectiveness of a splitter plate and a vented-wall cavity in reducing the base drag of a quasi-two-dimensional fuselage closure was studied from base pressure measurements made in flight. The fuselage closure was between the two engines of the F-111 airplane; therefore, the base pressures were in the presence of jet engine exhaust. For Mach numbers from 1.10 to 1.51, significant base drag reduction was provided by the vented-wall cavity configuration. The splitter plate was not considered effective in reducing base drag at any Mach number tested.
A fully 3D approach for metal artifact reduction in computed tomography.
Kratz, Barbel; Weyers, Imke; Buzug, Thorsten M
2012-11-01
In computed tomography imaging metal objects in the region of interest introduce inconsistencies during data acquisition. Reconstructing these data leads to an image in spatial domain including star-shaped or stripe-like artifacts. In order to enhance the quality of the resulting image the influence of the metal objects can be reduced. Here, a metal artifact reduction (MAR) approach is proposed that is based on a recomputation of the inconsistent projection data using a fully three-dimensional Fourier-based interpolation. The success of the projection space restoration depends sensitively on a sensible continuation of neighboring structures into the recomputed area. Fortunately, structural information of the entire data is inherently included in the Fourier space of the data. This can be used for a reasonable recomputation of the inconsistent projection data. The key step of the proposed MAR strategy is the recomputation of the inconsistent projection data based on an interpolation using nonequispaced fast Fourier transforms (NFFT). The NFFT interpolation can be applied in arbitrary dimension. The approach overcomes the problem of adequate neighborhood definitions on irregular grids, since this is inherently given through the usage of higher dimensional Fourier transforms. Here, applications up to the third interpolation dimension are presented and validated. Furthermore, prior knowledge may be included by an appropriate damping of the transform during the interpolation step. This MAR method is applicable on each angular view of a detector row, on two-dimensional projection data as well as on three-dimensional projection data, e.g., a set of sequential acquisitions at different spatial positions, projection data of a spiral acquisition, or cone-beam projection data. Results of the novel MAR scheme based on one-, two-, and three-dimensional NFFT interpolations are presented. All results are compared in projection data space and spatial domain with the well-known one-dimensional linear interpolation strategy. In conclusion, it is recommended to include as much spatial information into the recomputation step as possible. This is realized by increasing the dimension of the NFFT. The resulting image quality can be enhanced considerably.
A review on the multivariate statistical methods for dimensional reduction studies
NASA Astrophysics Data System (ADS)
Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei
2017-05-01
In this research study we discuss multivariate statistical methods for dimensional reduction that have been applied by various researchers. Dimensionality reduction is valuable for accelerating algorithm execution and may also improve the final classification/clustering accuracy. Noisy or even erroneous input data often lead to unsatisfactory algorithm performance; removing uninformative or misleading data components can help an algorithm discover more general decision regions and rules and, overall, achieve better performance on new data sets.
Principal component analysis on a torus: Theory and application to protein dynamics.
Sittel, Florian; Filk, Thomas; Stock, Gerhard
2017-12-28
A dimensionality reduction method for high-dimensional circular data is developed, which is based on a principal component analysis (PCA) of data points on a torus. Adopting a geometrical view of PCA, various distance measures on a torus are introduced and the associated problem of projecting data onto the principal subspaces is discussed. The main idea is that the (periodicity-induced) projection error can be minimized by transforming the data such that the maximal gap of the sampling is shifted to the periodic boundary. In a second step, the covariance matrix and its eigendecomposition can be computed in a standard manner. Adopting molecular dynamics simulations of two well-established biomolecular systems (Aib9 and villin headpiece), the potential of the method to analyze the dynamics of backbone dihedral angles is demonstrated. The new approach allows for a robust and well-defined construction of metastable states and provides low-dimensional reaction coordinates that accurately describe the free energy landscape. Moreover, it offers a direct interpretation of covariances and principal components in terms of the angular variables. Apart from its application to PCA, the method of maximal gap shifting is general and can be applied to any other dimensionality reduction method for circular data.
Principal component analysis on a torus: Theory and application to protein dynamics
NASA Astrophysics Data System (ADS)
Sittel, Florian; Filk, Thomas; Stock, Gerhard
2017-12-01
A dimensionality reduction method for high-dimensional circular data is developed, which is based on a principal component analysis (PCA) of data points on a torus. Adopting a geometrical view of PCA, various distance measures on a torus are introduced and the associated problem of projecting data onto the principal subspaces is discussed. The main idea is that the (periodicity-induced) projection error can be minimized by transforming the data such that the maximal gap of the sampling is shifted to the periodic boundary. In a second step, the covariance matrix and its eigendecomposition can be computed in a standard manner. Adopting molecular dynamics simulations of two well-established biomolecular systems (Aib9 and villin headpiece), the potential of the method to analyze the dynamics of backbone dihedral angles is demonstrated. The new approach allows for a robust and well-defined construction of metastable states and provides low-dimensional reaction coordinates that accurately describe the free energy landscape. Moreover, it offers a direct interpretation of covariances and principal components in terms of the angular variables. Apart from its application to PCA, the method of maximal gap shifting is general and can be applied to any other dimensionality reduction method for circular data.
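A minimal sketch of the maximal-gap-shifting idea described above, written as an assumption about the procedure rather than the authors' code: each periodic coordinate is rotated so that the largest sampling gap coincides with the periodic boundary, after which ordinary PCA applies; the von Mises samples stand in for backbone dihedral angles.

```python
# Maximal gap shifting for circular data, followed by standard PCA.
import numpy as np
from sklearn.decomposition import PCA

def shift_maximal_gap(angles):
    """angles: (n_samples,) array in [-pi, pi). Rotates the data so the largest
    sampling gap sits at the periodic boundary, then re-wraps to [-pi, pi)."""
    s = np.sort(angles)
    gaps = np.diff(np.concatenate([s, [s[0] + 2 * np.pi]]))   # circular neighbor gaps
    k = np.argmax(gaps)
    cut = s[k] + gaps[k] / 2.0                                # middle of the largest gap
    return (angles - cut) % (2 * np.pi) - np.pi

rng = np.random.default_rng(3)
dihedrals = rng.vonmises(mu=2.8, kappa=4.0, size=(1000, 6))    # stand-in dihedral angles
shifted = np.apply_along_axis(shift_maximal_gap, 0, dihedrals) # shift each angle separately
pcs = PCA(n_components=2).fit_transform(shifted)               # standard PCA afterwards
print(pcs.shape)
```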
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
Exploring the CAESAR database using dimensionality reduction techniques
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Raymer, Michael L.
2012-06-01
The Civilian American and European Surface Anthropometry Resource (CAESAR) database, containing over 40 anthropometric measurements on over 4000 humans, has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
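A short illustrative sketch of the kind of comparison described above, assuming scikit-learn; the random measurements and labels are placeholders, not the CAESAR data, and only PCA and Isomap are shown (Diffusion Maps would need a separate implementation).

```python
# Compare a linear (PCA) and a manifold (Isomap) reducer in front of an SVM classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 12))                 # stand-in readily observable measurements
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)   # stand-in gender labels

for name, reducer in [("PCA", PCA(n_components=5)), ("Isomap", Isomap(n_components=5))]:
    clf = make_pipeline(StandardScaler(), reducer, SVC(kernel="rbf"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```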
Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.
Lu, Gui-Fu; Zheng, Wenming
2013-10-01
Dimensionality reduction has become an important data preprocessing step in many applications. Linear discriminant analysis (LDA) is one of the most well-known dimensionality reduction methods. However, the classical LDA cannot be used directly in the small sample size (SSS) problem where the within-class scatter matrix is singular. In the past, many generalized LDA methods have been reported to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance. The existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA. Our proposed implementation is theoretically equivalent to the existing implementations of CLDA while being the most efficient. Since CLDA is an extension of null-space-based LDA (NLDA), our implementation of CLDA also provides a fast implementation of NLDA. Experiments on some real-world data sets demonstrate the effectiveness of our proposed new CLDA and NLDA algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
Tensor Train Neighborhood Preserving Embedding
NASA Astrophysics Data System (ADS)
Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin
2018-05-01
In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate the trade-off gains among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that, compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off in classification, computation, and dimensionality reduction on the MNIST handwritten digits and Weizmann face datasets.
Kaluza-Klein cosmology from five-dimensional Lovelock-Cartan theory
NASA Astrophysics Data System (ADS)
Castillo-Felisola, Oscar; Corral, Cristóbal; del Pino, Simón; Ramírez, Francisca
2016-12-01
We study the Kaluza-Klein dimensional reduction of the Lovelock-Cartan theory in five-dimensional spacetime, with a compact dimension of S1 topology. We find cosmological solutions of the Friedmann-Robertson-Walker class in the reduced spacetime. The torsion and the fields arising from the dimensional reduction induce a nonvanishing energy-momentum tensor in four dimensions. We find solutions describing expanding, contracting, and bouncing universes. The model shows a dynamical compactification of the extra dimension in some regions of the parameter space.
NASA Technical Reports Server (NTRS)
Cunefare, K. A.; Koopmann, G. H.
1991-01-01
This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total radiated power from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation, and thus, the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
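A hedged sketch of the basic SPHARA idea: compute an orthogonal spatial basis from the eigenvectors of a discrete Laplacian built on the sensor layout, then project the EEG data onto the leading (smooth, low-spatial-frequency) basis functions. A k-nearest-neighbour graph Laplacian is used here as a simple stand-in for the paper's triangular-mesh Laplace-Beltrami discretizations, and the electrode positions and data are synthetic.

```python
# Spatial harmonic basis from a graph Laplacian on sensor positions; projection and
# truncation give dimensionality reduction / spatial low-pass filtering of EEG data.
import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

rng = np.random.default_rng(5)
pos = rng.normal(size=(64, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)   # stand-in electrodes on a unit sphere

_, idx = cKDTree(pos).query(pos, k=6)               # 5 nearest neighbours (plus self)
W = np.zeros((64, 64))
for i, neigh in enumerate(idx[:, 1:]):
    W[i, neigh] = 1.0
W = np.maximum(W, W.T)                               # symmetrize adjacency
L = np.diag(W.sum(1)) - W                            # combinatorial Laplacian

eigvals, basis = eigh(L)                             # columns = spatial harmonic basis
eeg = rng.normal(size=(64, 1000))                    # channels x time samples
coeffs = basis.T @ eeg                               # analysis step (projection)
eeg_lowpass = basis[:, :20] @ coeffs[:20]            # keep 20 smooth components
print(eeg_lowpass.shape)
```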
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang
2007-11-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound-image-based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Thermally induced rarefied gas flow in a three-dimensional enclosure with square cross-section
NASA Astrophysics Data System (ADS)
Zhu, Lianhua; Yang, Xiaofan; Guo, Zhaoli
2017-12-01
Rarefied gas flow in a three-dimensional enclosure induced by a nonuniform temperature distribution is numerically investigated. The enclosure has a square channel-like geometry with alternately heated closed ends and lateral walls with a linear temperature distribution. A recently proposed implicit discrete velocity method with a memory reduction technique is used to numerically simulate the problem based on the nonlinear Shakhov kinetic equation. The Knudsen number dependencies of the vortex pattern, the slip velocity at the planar walls and edges, and the heat transfer are investigated. The influences of the temperature ratio imposed at the ends of the enclosure and of the geometric aspect ratio are also evaluated. The overall flow pattern shows similarities with those observed in two-dimensional configurations in the literature. However, features due to the three-dimensionality are observed, with vortices that are not identified in previous studies on similar two-dimensional enclosures at high Knudsen numbers and small aspect ratios.
Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R
2006-01-01
Background: Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results: Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion: Classification of specimens based on feather shape was not highly dependent on the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross-validation rate of assignment may be optimized using the variable number of PC axes method presented herein. PMID:16978414
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we propose a novel low-rank approximation based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On test datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/.
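A simplified sketch of the overall strategy (a generic stand-in, not the LRAcluster probabilistic model): stack the standardized multi-omics matrices, find a shared low-dimensional subspace with a truncated SVD, and cluster samples in that subspace to propose candidate subtypes. The matrices and cluster count are placeholders.

```python
# Low-rank shared subspace + k-means clustering of multi-omics samples.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
expr = rng.normal(size=(300, 2000))     # stand-in expression matrix (samples x genes)
methyl = rng.normal(size=(300, 5000))   # stand-in methylation matrix
omics = np.hstack([StandardScaler().fit_transform(m) for m in (expr, methyl)])

subspace = TruncatedSVD(n_components=10, random_state=0).fit_transform(omics)
subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(subspace)
print(np.bincount(subtypes))
```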
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach for surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis function is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
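A short sketch contrasting the two bases discussed above (an assumed snapshot matrix, not the DYRESM-CAEDYM data): sparse PCA loadings involve only a few state variables each, which is what makes the reduced basis easier to interpret physically.

```python
# Ordinary PCA vs sparse PCA on a snapshot matrix (time x state variables).
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(7)
snapshots = rng.normal(size=(500, 40))          # stand-in snapshots

pca = PCA(n_components=5).fit(snapshots)
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(snapshots)

print("non-zero loadings per component (PCA):      ",
      np.count_nonzero(pca.components_, axis=1))
print("non-zero loadings per component (SparsePCA):",
      np.count_nonzero(spca.components_, axis=1))
```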
DD-HDS: A method for visualization and exploration of high-dimensional data.
Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard
2007-09-01
Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Poissant, Dominique; Brissette, François
2015-11-01
This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was found to be minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
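A minimal sketch of the variance-based ranking step (a generic Saltelli-style pick-freeze estimator, not the authors' HSAMI setup): parameters with small first-order indices are the natural candidates for sequential fixing; the toy model and parameter bounds are placeholders.

```python
# First-order Sobol' indices via the pick-freeze estimator S_i = mean(yB*(y_ABi - yA))/Var(Y).
import numpy as np

def model(theta):
    """Stand-in for a lumped hydrological model's objective; replace with the real model."""
    return np.sin(theta[:, 0]) + 5.0 * theta[:, 1] ** 2 + 0.1 * theta[:, 2]

rng = np.random.default_rng(8)
d, n = 3, 20000
A, B = rng.uniform(size=(n, d)), rng.uniform(size=(n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # freeze all columns of A except column i
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"parameter {i}: first-order Sobol index ~ {S_i:.3f}")
```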
Numerical and experimental study of Lamb wave propagation in a two-dimensional acoustic black hole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Shiling; Shen, Zhonghua, E-mail: shenzh@njust.edu.cn; Lomonosov, Alexey M.
2016-06-07
The propagation of laser-generated Lamb waves in a two-dimensional acoustic black-hole structure was studied numerically and experimentally. The geometrical acoustic theory has been applied to calculate the beam trajectories in the region of the acoustic black hole. The finite element method was also used to study the time evolution of the propagating waves. An optical system based on the laser-Doppler vibration method was assembled. The wave-focusing effect and the reduction in wave speed in the acoustic black hole have been validated.
Multivariate Strategies in Functional Magnetic Resonance Imaging
ERIC Educational Resources Information Center
Hansen, Lars Kai
2007-01-01
We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.
Two component-three dimensional catalysis
Schwartz, Michael; White, James H.; Sammells, Anthony F.
2002-01-01
This invention relates to catalytic reactor membranes having a gas-impermeable membrane for transport of oxygen anions. The membrane has an oxidation surface and a reduction surface. The membrane is coated on its oxidation surface with an adherent catalyst layer and is optionally coated on its reduction surface with a catalyst that promotes reduction of an oxygen-containing species (e.g., O.sub.2, NO.sub.2, SO.sub.2, etc.) to generate oxygen anions on the membrane. The reactor has an oxidation zone and a reduction zone separated by the membrane. A component of an oxygen containing gas in the reduction zone is reduced at the membrane and a reduced species in a reactant gas in the oxidation zone of the reactor is oxidized. The reactor optionally contains a three-dimensional catalyst in the oxidation zone. The adherent catalyst layer and the three-dimensional catalyst are selected to promote a desired oxidation reaction, particularly a partial oxidation of a hydrocarbon.
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the Kmeans clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets including two benchmark data sets and three higher dimensional data sets from the cancer genome atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. Especially, it is experimentally proved that the proposed method is more efficient for processing higher dimensional data with good robustness, stability, and superior time performance.
Peleato, Nicolas M; Legge, Raymond L; Andrews, Robert C
2018-06-01
The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high-dimensional fluorescence data was related to common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, distinct results due to the applied dimensionality reduction approaches were observed, dictating a need for considering the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracies, with improvements to DBP prediction best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the potential application of neural networks to best utilize fluorescence EEM data for prediction of organic matter reactivity. Copyright © 2018 Elsevier Ltd. All rights reserved.
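A hedged sketch of the autoencoder-then-regression idea discussed above (not the paper's network or data): a multilayer perceptron is trained to reconstruct flattened excitation-emission matrices, its hidden-layer activations serve as the low-dimensional latent representation, and a second network predicts DBP formation from that representation. All arrays and layer sizes below are placeholders.

```python
# Autoencoder-style latent features from scikit-learn's MLPRegressor, then DBP regression.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
eem = np.abs(rng.normal(size=(400, 2500)))            # stand-in flattened fluorescence EEMs
dbp = eem[:, :50].sum(axis=1) + rng.normal(size=400)  # stand-in DBP concentrations

# 1) Autoencoder: one hidden layer trained to reproduce its own input (X -> X).
ae = MLPRegressor(hidden_layer_sizes=(32,), activation="relu", max_iter=500, random_state=0)
ae.fit(eem, eem)

# 2) Latent representation = hidden-layer activations computed from the fitted weights.
latent = np.maximum(0.0, eem @ ae.coefs_[0] + ae.intercepts_[0])

# 3) Predict DBP formation from the 32-dimensional latent features.
reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(latent, dbp)
print(reg.score(latent, dbp))
```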
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
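As a minimal illustration of the linear setting mentioned in the abstract (not the QUADRO algorithm, which handles the quadratic, sparse, elliptical case): maximizing a Rayleigh quotient w'Aw / w'Bw reduces to a generalized eigenproblem, sketched here for a two-class Fisher-type example on synthetic data.

```python
# Linear Rayleigh quotient maximization via a generalized symmetric eigenproblem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(10)
X0 = rng.normal(loc=0.0, size=(200, 20))
X1 = rng.normal(loc=0.5, size=(200, 20))

mu0, mu1 = X0.mean(0), X1.mean(0)
A = np.outer(mu1 - mu0, mu1 - mu0)                       # between-class scatter
B = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled within-class covariance

eigvals, eigvecs = eigh(A, B)                            # solves A w = lambda B w
w = eigvecs[:, -1]                                       # direction with the largest quotient
print("Rayleigh quotient:", (w @ A @ w) / (w @ B @ w), "=", eigvals[-1])
```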
Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian; Haller, George
2018-06-01
We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.
High-speed three-dimensional measurements with a fringe projection-based optical sensor
NASA Astrophysics Data System (ADS)
Bräuer-Burchardt, Christian; Breitbarth, Andreas; Kühmstedt, Peter; Notni, Gunther
2014-11-01
An optical three-dimensional (3-D) sensor based on a fringe projection technique that realizes the acquisition of the surface geometry of small objects was developed for highly resolved and ultrafast measurements. It achieves a data acquisition rate of up to 60 high-resolution 3-D datasets per second. The high measurement velocity was achieved by systematic fringe code reduction and parallel data processing. The reduction of the length of the fringe image sequence was obtained by omission of the Gray code sequence, using the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm×20 mm and 40 mm×40 mm with a spatial resolution between 10 and 20 μm, respectively. In order to obtain a robust and fast recalibration of the sensor after a change of the measurement field, a calibration procedure based on single-shot analysis of a special test object was applied, which requires little effort and time. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
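The following sketch illustrates the general idea behind such spatial harmonic analysis on synthetic data: a graph Laplacian built on hypothetical sensor positions supplies an eigenvector basis, and keeping only the low-order coefficients yields dimensionality reduction. It uses a simple k-nearest-neighbour graph rather than the triangular-mesh Laplace-Beltrami discretizations studied in the paper.

# Minimal sketch of the idea behind SPHARA (not the authors' FEM discretization):
# build a Laplacian on a graph connecting EEG sensor positions, use its
# eigenvectors as spatial basis functions, and keep only the low-order
# coefficients for dimensionality reduction / smoothing.
import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

rng = np.random.default_rng(2)
pos = rng.normal(size=(64, 3))                      # hypothetical 64 sensor positions
pos /= np.linalg.norm(pos, axis=1, keepdims=True)   # place them on a sphere
data = rng.normal(size=(64, 500))                   # sensors x time samples (synthetic EEG)

# Adjacency from k nearest neighbours, then combinatorial graph Laplacian
k = 6
_, idx = cKDTree(pos).query(pos, k=k + 1)
W = np.zeros((64, 64))
for i, neigh in enumerate(idx[:, 1:]):
    W[i, neigh] = 1.0
W = np.maximum(W, W.T)                              # symmetrize
L = np.diag(W.sum(axis=1)) - W

# Eigenvectors of L = spatial harmonic basis; small eigenvalues = smooth patterns
_, U = eigh(L)
coeffs = U.T @ data                                 # analysis (projection onto the basis)
n_keep = 20                                         # dimensionality reduction
data_lowdim = U[:, :n_keep] @ coeffs[:n_keep]       # synthesis from few coefficients
print("retained energy:", np.sum(coeffs[:n_keep] ** 2) / np.sum(coeffs ** 2))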
NASA Astrophysics Data System (ADS)
Bonilla, L. L.; Carretero, M.; Segura, A.
2017-12-01
When quantized, traces of classically chaotic single-particle systems include eigenvalue statistics and scars in eigenfunctions. Since 2001, many theoretical and experimental works have argued that classically chaotic single-electron dynamics influences and controls collective electron transport. For transport in semiconductor superlattices under tilted magnetic and electric fields, these theories rely on a reduction to a one-dimensional self-consistent drift model. A two-dimensional theory based on self-consistent Boltzmann transport does not support that single-electron chaos influences collective transport. This theory agrees with existing experimental evidence of current self-oscillations, predicts spontaneous collective chaos via a period doubling scenario, and could be tested unambiguously by measuring the electric potential inside the superlattice under a tilted magnetic field.
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: 1) extracted low level features cannot reveal the true semantic concepts; and 2) they usually involve high dimensional data which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between the low level visual features and high level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme which integrates the RF and subspace learning algorithms. Experimental results undertaken on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.
1974-01-01
General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.
The Production of Anatomical Teaching Resources Using Three-Dimensional (3D) Printing Technology
ERIC Educational Resources Information Center
McMenamin, Paul G.; Quayle, Michelle R.; McHenry, Colin R.; Adams, Justin W.
2014-01-01
The teaching of anatomy has consistently been the subject of societal controversy, especially in the context of employing cadaveric materials in professional medical and allied health professional training. The reduction in dissection-based teaching in medical and allied health professional training programs has been in part due to the financial…
Reduction of initial shock in decadal predictions using a new initialization strategy
NASA Astrophysics Data System (ADS)
He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei
2017-08-01
A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions, which best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1-month windows. Three indices to measure the initial shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by the Flexible Global Ocean-Atmosphere-Land System model, Grid-point version 2 (FGOALS-g2) compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model, and is comparable to or even better than the different initialization strategies used for other models in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Better hindcasts of global mean surface air temperature anomalies can be obtained than in other FGOALS-g2 experiments. Due to the good model response to external forcing and the reduction of initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.
Two-Photon Fluorescence Microscopy Developed for Microgravity Fluid Physics
NASA Technical Reports Server (NTRS)
Fischer, David G.; Zimmerli, Gregory A.; Asipauskas, Marius
2004-01-01
Recent research efforts within the Microgravity Fluid Physics Branch of the NASA Glenn Research Center have necessitated the development of a microscope capable of high-resolution, three-dimensional imaging of intracellular structure and tissue morphology. Standard optical microscopy works well for thin samples, but it does not allow the imaging of thick samples because of severe degradation caused by out-of-focus object structure. Confocal microscopy, which is a laser-based scanning microscopy, provides improved three-dimensional imaging and true optical sectioning by excluding the out-of-focus light. However, in confocal microscopy, out-of-focus object structure is still illuminated by the incoming beam, which can lead to substantial photo-bleaching. In addition, confocal microscopy is plagued by limited penetration depth, signal loss due to the presence of a confocal pinhole, and the possibility of live-cell damage. Two-photon microscopy is a novel form of laser-based scanning microscopy that allows three-dimensional imaging without many of the problems inherent in confocal microscopy. Unlike one-photon microscopy, it utilizes the nonlinear absorption of two near-infrared photons. However, the efficiency of two-photon absorption is much lower than that of one-photon absorption because of the nonlinear (i.e., quadratic) electric field dependence, so an ultrafast pulsed laser source must typically be employed. On the other hand, this stringent energy density requirement effectively localizes fluorophore excitation to the focal volume. Consequently, two-photon microscopy provides optical sectioning and confocal performance without the need for a signal-limiting pinhole. In addition, there is a reduction in photo-damage because of the longer excitation wavelength, a reduction in background fluorescence, and a 4× increase in penetration depth over confocal methods because of the reduction in Rayleigh scattering.
On Determining if Tree-based Networks Contain Fixed Trees.
Anaya, Maria; Anipchenko-Ulaj, Olga; Ashfaq, Aisha; Chiu, Joyce; Kaiser, Mahedi; Ohsawa, Max Shoji; Owen, Megan; Pavlechko, Ella; St John, Katherine; Suleria, Shivam; Thompson, Keith; Yap, Corrine
2016-05-01
We address an open question of Francis and Steel about phylogenetic networks and trees. They give a polynomial time algorithm to decide if a phylogenetic network, N, is tree-based and pose the problem: given a fixed tree T and network N, is N based on T? We show that it is NP-hard to decide, by reduction from 3-Dimensional Matching (3DM), and further that the problem is fixed-parameter tractable.
Asymmetrically Functionalized Graphene for Photodependent Diode Rectifying Behavior
2011-06-06
catalysts for oxygen reduction in fuel cells, high-performance electrodes in supercapacitors, batteries, actuators, and sensors.[1,2] Of particular...Stoller et al.[1j] produced graphene-based supercapacitors free from any conducting filler with a specific capacitance of 135 F g-1 in aqueous electrolytes...dimensionally compatible and electrically conductive component, Guo et al.[2g,h] further constructed a smart graphene-based multifunctional biointerface for
Stable orthogonal local discriminant embedding for linear dimensionality reduction.
Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin
2013-07-01
Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of data, which will impair the generalization and stability of the algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce the orthogonal constraint for the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
NASA Technical Reports Server (NTRS)
Mennell, R. C.
1974-01-01
Tests were conducted to investigate various base drag reduction techniques in an attempt to improve Orbiter lift-to-drag ratios and to calculate sting interference effects on the Orbiter aerodynamic characteristics. Test conditions and facilites, and model dimensional data are presented along with the data reduction guidelines and data set/run number collation used for the studies. Aerodynamic force and moment data and the results of stability and control tests are also given.
Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas
2015-11-10
Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.
NASA Astrophysics Data System (ADS)
Verhiest, K.; Mullens, S.; De Wispelaere, N.; Claessens, S.; DeBremaecker, A.; Verbeken, K.
2012-09-01
In this study, oxide dispersion strengthened (ODS) 316L steel samples were manufactured by the 3-dimensional fiber deposition (3DFD) technique. The performance of 3DFD as a colloidal consolidation technique for obtaining porous green bodies based on yttria (Y2O3) nano-slurries or pastes is discussed within this experimental work. The influence of the sintering temperature and time on sample densification and grain growth was also investigated. Hot consolidation was performed to obtain final product quality in terms of residual porosity reduction and final dispersion homogeneity.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Králová, Blanka
2011-12-01
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows one to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
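A minimal sketch of the embedding step, using scikit-learn's Isomap on synthetic stand-ins for the 72-dimensional conformer coordinates; the out-of-sample mapping and the metadynamics bias itself are not reproduced.

# Minimal sketch: Isomap embedding of conformer coordinates to three dimensions,
# as used for the collective variables. Data are synthetic stand-ins for the
# 72-dimensional Cartesian coordinates of the ring atoms.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(3)
conformers = rng.normal(size=(1000, 72))     # ad hoc generated conformations

embedding = Isomap(n_neighbors=12, n_components=3)
cv = embedding.fit_transform(conformers)     # 3D collective-variable space
print(cv.shape)                              # (1000, 3)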
N-Dimensional LLL Reduction Algorithm with Pivoted Reflection
Deng, Zhongliang; Zhu, Di
2018-01-01
The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input-multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finitely many steps and always produce better results than the original LLL reduction algorithm with n > 2. The simulations clearly show that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection has significantly reduced the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm. PMID:29351224
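For orientation, the sketch below implements the classic textbook LLL reduction (delta = 3/4) that n-LLL generalizes; the extended Lovász condition and the pivoted Householder reflections of the paper are not reproduced.

# Minimal textbook LLL reduction for comparison purposes (inefficient but direct).
import numpy as np

def gram_schmidt(B):
    """Return Gram-Schmidt vectors B* and coefficients mu for the rows of B."""
    n = B.shape[0]
    Bs = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] = Bs[i] - mu[i, j] * Bs[j]
    return Bs, mu

def lll(B, delta=0.75):
    """Classic LLL lattice basis reduction on the rows of an integer matrix B."""
    B = np.array(B, dtype=float)
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size reduction of b_k against b_{k-1}, ..., b_0
        for j in range(k - 1, -1, -1):
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        # Lovász condition: accept b_k or swap with b_{k-1}
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # small ill-conditioned example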
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Ezzedine, Souheil; Rubin, Yoram
2012-02-01
The significance of conditioning predictions of environmental performance metrics (EPMs) on hydrogeological data in heterogeneous porous media is addressed. Conditioning EPMs on available data reduces uncertainty and increases the reliability of model predictions. We present a rational and concise approach to investigate the impact of conditioning EPMs on data as a function of the location of the environmentally sensitive target receptor, data types and spacing between measurements. We illustrate how the concept of comparative information yield curves introduced in de Barros et al. [de Barros FPJ, Rubin Y, Maxwell R. The concept of comparative information yield curves and its application to risk-based site characterization. Water Resour Res 2009;45:W06401. doi:10.1029/2008WR007324] could be used to assess site characterization needs as a function of flow and transport dimensionality and EPMs. For a given EPM, we show how alternative uncertainty reduction metrics yield distinct gains of information from a variety of sampling schemes. Our results show that uncertainty reduction is EPM dependent (e.g., travel times) and does not necessarily indicate uncertainty reduction in an alternative EPM (e.g., human health risk). The results show how the position of the environmental target, flow dimensionality and the choice of the uncertainty reduction metric can be used to assist in field sampling campaigns.
Zhao, Mingbo; Zhang, Zhao; Chow, Tommy W S; Li, Bing
2014-07-01
Dealing with high-dimensional data has always been a major problem in pattern recognition and machine learning research, and Linear Discriminant Analysis (LDA) is one of the most popular methods for dimension reduction. However, it only uses labeled samples while neglecting unlabeled samples, which are abundant and can be easily obtained in the real world. In this paper, we propose a new dimension reduction method, called "SL-LDA", by using unlabeled samples to enhance the performance of LDA. The new method first propagates label information from the labeled set to the unlabeled set via a label propagation process, where the predicted labels of unlabeled samples, called "soft labels", can be obtained. It then incorporates the soft labels into the construction of scatter matrices to find a transformation matrix for dimension reduction. In this way, the proposed method can preserve more discriminative information, which is preferable when solving the classification problem. We further propose an efficient approach for solving SL-LDA under a least squares framework, and a flexible method of SL-LDA (FSL-LDA) to better cope with datasets sampled from a nonlinear manifold. Extensive simulations are carried out on several datasets, and the results show the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
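A minimal sketch of the semi-supervised idea on synthetic data: labels are propagated to the unlabeled samples and LDA is then fitted for dimension reduction. Using the hard propagated labels, as done here, is a simplification; SL-LDA incorporates the soft labels directly into the scatter matrices.

# Minimal sketch: label propagation followed by LDA for dimension reduction.
# Data and the ~10% labeling are synthetic; soft-label scatter matrices are not reproduced.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelPropagation
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
y_semi = y.copy()
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y)) > 0.10     # keep labels for only ~10% of samples
y_semi[unlabeled] = -1                    # -1 marks unlabeled samples

prop = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y_semi)
propagated_labels = prop.transduction_    # predicted labels for all samples

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, propagated_labels)
X_reduced = lda.transform(X)
print(X_reduced.shape)                    # (500, 2)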
Wang, Meng; Hou, Yuyang; Slade, Robert C. T.; Wang, Jiazhao; Shi, Dongqi; Wexler, David; Liu, Huakun; Chen, Jun
2016-01-01
Here, we demonstrate that cobalt/cobalt oxide core-shell nanoparticles integrated on a nitrogen-doped (N-doped) three-dimensional reduced graphene oxide aerogel-based architecture (Co/CoO-NGA) can be synthesized through a facile hydrothermal method followed by annealing treatment. The unique, durable porous structure could provide sufficient mass transfer channels and ample active sites on Co/CoO-NGA to facilitate the catalytic reaction. The synthesized Co/CoO-NGA was explored as an electrocatalyst for the oxygen reduction reaction, showing oxygen reduction performance comparable to that of Pt/C, together with excellent methanol resistance and better durability. PMID:27597939
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
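A minimal sketch of the classification pipeline on synthetic features: PCA reduces stand-in curvelet texture features to 12 components, clinical predictors are appended, and an SVM is evaluated with 10-fold cross-validation; the curvelet transform and the real CT data are not reproduced.

# Minimal sketch of the diagnostic pipeline: PCA of (synthetic) curvelet texture
# features, concatenation with clinical predictors, SVM with cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
texture = rng.normal(size=(200, 300))          # curvelet-based textural features (synthetic)
clinical = rng.normal(size=(200, 11))          # 11 clinical predictors (synthetic)
labels = rng.integers(0, 2, size=200)          # benign (0) vs malignant (1)

pca = PCA(n_components=12).fit(texture)
X = np.hstack([pca.transform(texture), clinical])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("10-fold CV accuracy:", cross_val_score(clf, X, labels, cv=10).mean())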
Ji, Chen-Chen; Xu, Mao-Wen; Bao, Shu-Juan; Cai, Chang-Jun; Lu, Zheng-Jiang; Chai, Hui; Yang, Fan; Wei, Hua
2013-10-01
Homogeneously distributed self-assembling hybrid graphene-based aerogels with 3D interconnected pores, employing three types of carbohydrates (glucose, β-cyclodextrin, and chitosan), have been fabricated by a simple hydrothermal route. Using three types of carbohydrates as morphology oriented agents and reductants can effectively tailor the microstructures, physical properties, and electrochemical performances of the products. The effects of different carbohydrates on graphene oxide reduction to form graphene-based aerogels with different microcosmic morphologies and physical properties were also systemically discussed. The electrochemical behaviors of all graphene-based aerogel samples showed remarkably strong and stable performances, which indicated that all the 3D interpenetrating microstructure graphene-based aerogel samples with well-developed porous nanostructures and interconnected conductive networks could provide fast ionic channels for electrochemical energy storage. These results demonstrate that this strategy would offer an easy and effective way to fabricate graphene-based materials. Copyright © 2013 Elsevier Inc. All rights reserved.
Roger M. Rowell; Rebecca E. Ibach; James McSweeny; Thomas Nilsson
2009-01-01
Reductions in hygroscopicity, increased dimensional stability and decay resistance of heat-treated wood depend on decomposition of a large portion of the hemicelluloses in the wood cell wall. In theory, these hemicelluloses are converted to small organic molecules, water and volatile furan-type intermediates that can polymerize in the cell wall. Reductions in...
Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.
Xia, Youshen; Wang, Jun
2015-07-01
This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable and converges to the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm can produce good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
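The sketch below shows a conventional Kalman filter built on an AR(p) speech model, with the AR coefficients estimated by ordinary least squares on the noisy signal; in the paper these parameters come from the proposed recurrent neural network under a noise-constrained criterion, which is not reproduced here.

# Minimal sketch of Kalman-filter speech enhancement with an AR(p) signal model.
import numpy as np

def kalman_ar_denoise(y, p=8, sigma_v2=0.01):
    # Estimate AR(p) coefficients by least squares (a crude stand-in for the RNN)
    Y = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(Y, y[p:], rcond=None)
    q = np.var(y[p:] - Y @ a)                  # process-noise variance estimate

    F = np.vstack([a, np.eye(p)[:-1]])         # companion-form transition matrix
    H = np.zeros(p); H[0] = 1.0                # we observe the current sample only
    Q = np.zeros((p, p)); Q[0, 0] = q
    x, P = np.zeros(p), np.eye(p)
    s_hat = np.zeros(len(y))
    for t, yt in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H + sigma_v2               # innovation variance
        K = P @ H / S                          # Kalman gain
        x = x + K * (yt - H @ x)               # update
        P = P - np.outer(K, H @ P)
        s_hat[t] = x[0]
    return s_hat

t = np.linspace(0, 1, 8000)
clean = np.sin(2 * np.pi * 200 * t) * np.exp(-2 * t)   # toy "speech" signal
noisy = clean + 0.3 * np.random.default_rng(5).normal(size=t.size)
enhanced = kalman_ar_denoise(noisy, p=8, sigma_v2=0.09)
print("noise reduction (dB):",
      round(10 * np.log10(np.mean((noisy - clean) ** 2) / np.mean((enhanced - clean) ** 2)), 1))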
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
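For reference, the sketch below implements a single stochastic (perturbed-observation) ensemble Kalman filter analysis step on a toy parameter-estimation problem; the grid-based localization, GMM clustering, and block updating that the study adds on top of this baseline are not reproduced.

# Minimal sketch: one global, stochastic EnKF analysis step (no localization or GMM).
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """ensemble: (n_members, n_params); obs: (n_obs,); obs_operator: callable."""
    n, _ = ensemble.shape
    Hx = np.array([obs_operator(m) for m in ensemble])        # predicted observations
    A = ensemble - ensemble.mean(axis=0)                      # parameter anomalies
    HA = Hx - Hx.mean(axis=0)                                 # observation anomalies
    Cxy = A.T @ HA / (n - 1)
    Cyy = HA.T @ HA / (n - 1) + np.diag(np.full(obs.size, obs_err_std ** 2))
    K = Cxy @ np.linalg.inv(Cyy)                              # Kalman gain
    perturbed = obs + rng.normal(0, obs_err_std, size=(n, obs.size))
    return ensemble + (perturbed - Hx) @ K.T                  # analysis ensemble

# Toy example: estimate a 50-dimensional log-conductivity field from 5 point observations
rng = np.random.default_rng(6)
truth = rng.normal(size=50)
H = lambda m: m[::10]                                          # "observe" every 10th cell
ens = rng.normal(size=(100, 50))
ens = enkf_update(ens, H(truth) + rng.normal(0, 0.1, 5), H, 0.1, rng)
print("posterior spread:", ens.std(axis=0).mean())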
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis
2016-11-01
Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We could obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
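A minimal sketch of the standard DEIM point selection that POD-DEIM builds on, applied to synthetic snapshots; the nested DEIM approximations for nonlocal nonlinearities described in the article are not reproduced.

# Minimal sketch of greedy DEIM interpolation-point selection from a POD basis.
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for a POD basis U of nonlinear-term snapshots."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[p, :l], U[p, l])   # interpolation coefficients
        r = U[:, l] - U[:, :l] @ c               # residual of the new mode
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Toy usage: snapshots of a nonlinear term, POD by SVD, then DEIM points
rng = np.random.default_rng(7)
x = np.linspace(0, 1, 400)
snapshots = np.array([np.exp(-((x - mu) ** 2) / 0.01) for mu in rng.random(60)]).T
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :10]                                # POD basis (10 modes)
print("DEIM interpolation points:", deim_indices(basis))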
NASA Technical Reports Server (NTRS)
Dasarathy, B. V.
1976-01-01
An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
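A rough illustration of the idea, not the 1976 algorithm itself: each feature's one-dimensional histogram is scored by its number of modes, and clearly multimodal features are retained.

# Illustrative stand-in: rank features by the modality ("hills and valleys") of
# their unidimensional histograms and keep the multimodal ones.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(8)
# Synthetic remote-sensing-like data: feature 0 is bimodal, features 1-4 are not
n = 2000
X = rng.normal(size=(n, 5))
X[: n // 2, 0] += 4.0

def n_modes(feature, bins=50):
    counts, _ = np.histogram(feature, bins=bins)
    peaks, _ = find_peaks(counts, prominence=0.05 * counts.max())
    return len(peaks)

scores = [n_modes(X[:, j]) for j in range(X.shape[1])]
selected = [j for j, s in enumerate(scores) if s >= 2]
print("modes per feature:", scores, "-> retained features:", selected)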
Daugherty, Ana M; Yuan, Peng; Dahle, Cheryl L; Bender, Andrew R; Yang, Yiqin; Raz, Naftali
2015-09-01
Studies of human navigation in virtual maze environments have consistently linked advanced age with greater distance traveled between the start and the goal and longer duration of the search. Observations of search path geometry suggest that routes taken by older adults may be unnecessarily complex and that excessive path complexity may be an indicator of cognitive difficulties experienced by older navigators. In a sample of healthy adults, we quantify search path complexity in a virtual Morris water maze with a novel method based on fractal dimensionality. In a two-level hierarchical linear model, we estimated improvement in navigation performance across trials by a decline in route length, shortening of search time, and reduction in fractal dimensionality of the path. While replicating commonly reported age and sex differences in time and distance indices, a reduction in fractal dimension of the path accounted for improvement across trials, independent of age or sex. The volumes of brain regions associated with the establishment of cognitive maps (parahippocampal gyrus and hippocampus) were related to path dimensionality, but not to the total distance and time. Thus, fractal dimensionality of a navigational path may present a useful complementary method of quantifying performance in navigation. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
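The sketch below computes a box-counting estimate of the fractal dimension of a 2D search path; the study's exact estimator may differ, but it illustrates how path complexity can be condensed into a single dimensionality value.

# Minimal sketch: box-counting fractal dimension of a 2D navigation path.
import numpy as np

def box_counting_dimension(path, n_scales=8):
    """path: (n_points, 2) array of x,y positions visited during the search."""
    path = (path - path.min(axis=0)) / np.ptp(path, axis=0).max()  # fit into unit square
    sizes = np.logspace(-0.3, -1.5, n_scales)                      # box edge lengths
    counts = [len(np.unique(np.floor(path / s), axis=0)) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)  # D = slope of log-log fit
    return slope

rng = np.random.default_rng(9)
straight = np.column_stack([np.linspace(0, 1, 500), np.linspace(0, 1, 500)])
wandering = np.cumsum(rng.normal(size=(500, 2)), axis=0)           # random-walk-like path
print("straight path D ~", round(box_counting_dimension(straight), 2))
print("wandering path D ~", round(box_counting_dimension(wandering), 2))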
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo (MC) method often remains the method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for the situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain could still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
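One classical way to estimate the sufficient dimension reduction subspace from model runs is sliced inverse regression (SIR), sketched below on a toy model; the response-surface construction and control-variate correction of the proposed IRUQ algorithms are not reproduced.

# Minimal sketch of sliced inverse regression (SIR) for SDR subspace estimation.
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    n, p = X.shape
    mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Whiten the inputs
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mean) @ W
    # Slice by the response and average the whitened inputs within each slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        m = Z[chunk].mean(axis=0)
        M += (len(chunk) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original coordinates
    _, vecs = np.linalg.eigh(M)
    return W @ vecs[:, -n_dirs:]

# Toy model: a 20-dimensional input whose QoI depends on one latent direction
rng = np.random.default_rng(10)
X = rng.normal(size=(5000, 20))
beta = np.zeros(20); beta[0] = 1.0
y = X @ beta + 0.25 * (X @ beta) ** 3 + 0.05 * rng.normal(size=5000)
d = sir_directions(X, y, n_dirs=1).ravel()
# Should align with e_1 up to sign
print("recovered direction:", np.round(d / np.linalg.norm(d), 2))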
Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC(0.632+) = 0.88 with 95% empirical bootstrap interval [0.787; 0.895] for 13 ARD-selected features and AUC(0.632+) = 0.87 with interval [0.817; 0.906] for four LSW-selected features, compared to the 4D t-SNE mapping (from the original 81D feature space) giving AUC(0.632+) = 0.90 with interval [0.847; 0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
Alternative dimensional reduction via the density matrix
NASA Astrophysics Data System (ADS)
de Carvalho, C. A.; Cornwall, J. M.; da Silva, A. J.
2001-07-01
We give graphical rules, based on earlier work for the functional Schrödinger equation, for constructing the density matrix for scalar and gauge fields in equilibrium at finite temperature T. More useful is a dimensionally reduced effective action (DREA) constructed from the density matrix by further functional integration over the arguments of the density matrix coupled to a source. The DREA is an effective action in one less dimension which may be computed order by order in perturbation theory or by dressed-loop expansions; it encodes all thermal matrix elements. We term the DREA procedure alternative dimensional reduction, to distinguish it from the conventional dimensionally reduced field theory (DRFT) which applies at infinite T. The DREA is useful because it gives a dimensionally reduced theory usable at any T including infinity, where it yields the DRFT, and because it does not and cannot have certain spurious infinities which sometimes occur in the density matrix itself or the conventional DRFT; these come from ln T factors at infinite temperature. The DREA can be constructed to all orders (in principle) and the only regularizations needed are those which control the ultraviolet behavior of the zero-T theory. An example of spurious divergences in the DRFT occurs in d=2+1 φ^4 theory dimensionally reduced to d=2. We study this theory and show that the rules for the DREA replace these "wrong" divergences in physical parameters by calculable powers of ln T; we also compute the phase transition temperature of this φ^4 theory in one-loop order. Our density-matrix construction is equivalent to a construction of the Landau-Ginzburg "coarse-grained free energy" from a microscopic Hamiltonian.
NASA Astrophysics Data System (ADS)
Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.
2014-03-01
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we develop NLDR methods on high dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from the NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
Application of N-Doped Three-Dimensional Reduced Graphene Oxide Aerogel to Thin Film Loudspeaker.
Kim, Choong Sun; Lee, Kyung Eun; Lee, Jung-Min; Kim, Sang Ouk; Cho, Byung Jin; Choi, Jung-Woo
2016-08-31
We built a thermoacoustic loudspeaker employing N-doped three-dimensional reduced graphene oxide aerogel (N-rGOA) based on a simple template-free fabrication method. A two-step fabrication process, which includes freeze-drying and reduction/doping, was used to realize a three-dimensional, freestanding, and porous graphene-based loudspeaker, whose macroscopic structure can be easily modulated. The simplified fabrication process also allows the control of structural properties of the N-rGOAs, including density and area. Taking advantage of the facile fabrication process, we fabricated and analyzed thermoacoustic loudspeakers with different structural properties. The analyses showed that an N-rGOA with lower density and larger area can produce a higher sound pressure level (SPL). Furthermore, the resistance of the proposed loudspeaker can be easily controlled through heteroatom doping, thereby helping to generate higher SPL per unit driving voltage. Our success in constructing an array of optimized N-rGOAs able to withstand input power as high as 40 W demonstrates that a practical thermoacoustic loudspeaker can be fabricated using the proposed mass-producible solution-based process.
Prell, D; Kalender, W A; Kyriakou, Y
2010-12-01
The purpose of this study was to develop, implement and evaluate a dedicated metal artefact reduction (MAR) method for flat-detector CT (FDCT). The algorithm uses the multidimensional raw data space to calculate surrogate attenuation values for the original metal traces in the raw data domain. The metal traces are detected automatically by a three-dimensional, threshold-based segmentation algorithm in an initial reconstructed image volume, based on twofold histogram information for calculating appropriate metal thresholds. These thresholds are combined with constrained morphological operations in the projection domain. A subsequent reconstruction of the modified raw data yields an artefact-reduced image volume that is further processed by a combining procedure that reinserts the missing metal information. For image quality assessment, measurements on semi-anthropomorphic phantoms containing metallic inserts were evaluated in terms of CT value accuracy, image noise and spatial resolution before and after correction. Measurements of the same phantoms without prostheses were used as ground truth for comparison. Cadaver measurements were performed on complex and realistic cases and to determine the influences of our correction method on the tissue surrounding the prostheses. The results showed a significant reduction of metal-induced streak artefacts (CT value differences were reduced to below 22 HU and image noise reduction of up to 200%). The cadaver measurements showed excellent results for imaging areas close to the implant and exceptional artefact suppression in these areas. Furthermore, measurements in the knee and spine regions confirmed the superiority of our method to standard one-dimensional, linear interpolation.
Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction
NASA Astrophysics Data System (ADS)
Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho
2016-11-01
This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms which provided unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of higher sampling efficiencies in non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging which was regarded as the gold standard (difference within 1.8 mm on average).
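A minimal sketch of a DR-derived surrogate on synthetic frames: each 2D frame is flattened and projected onto its first principal component to give a one-dimensional breathing signal, whose amplitude percentile is then used to sort frames into respiratory states. The paired-slice acquisition scheme and target-oriented sorting described in the paper are not reproduced.

# Minimal sketch: image-based respiratory surrogate from PCA of repeated 2D frames,
# followed by amplitude-percentile sorting into respiratory states.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
n_frames, ny, nx = 300, 64, 64
phase = np.sin(2 * np.pi * 0.25 * np.arange(n_frames))          # ~0.25 Hz breathing
frames = np.zeros((n_frames, ny, nx))
for t in range(n_frames):                                        # toy moving "diaphragm" edge
    edge = int(32 + 6 * phase[t])
    frames[t, edge:, :] = 1.0
frames += 0.1 * rng.normal(size=frames.shape)

surrogate = PCA(n_components=1).fit_transform(frames.reshape(n_frames, -1)).ravel()
percentile = np.argsort(np.argsort(surrogate)) / (n_frames - 1) * 100   # amplitude percentile
bins = np.digitize(percentile, np.linspace(0, 100, 11)[1:-1])           # 10 respiratory states
print("frames per state:", np.bincount(bins))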
Ye, Shibing; Feng, Jiachun
2014-06-25
A three-dimensional hierarchical graphene/polypyrrole aerogel (GPA) has been fabricated using graphene oxide (GO) and pre-synthesized one-dimensional hollow polypyrrole nanotubes (PNTs) as the feedstock. The amphiphilic GO is helpful in effectively promoting the dispersion of well-defined PNTs to result in a stable, homogeneous GO/PNT complex solution, while the PNTs not only provide a large accessible surface area for fast transport of hydrated ions but also act as spacers to prevent the restacking of graphene sheets. By a simple one-step reduction self-assembly process, hierarchically structured, low-density, highly compressible GPAs are easily obtained, which favorably combine the advantages of graphene and PNTs. The supercapacitor electrodes based on such materials exhibit excellent electrochemical performance, including a high specific capacitance of up to 253 F g(-1), good rate performance, and outstanding cycle stability. Moreover, this method may be feasible for preparing other graphene-based hybrid aerogels with controllable nanostructures on a large scale, thereby holding enormous potential in many application fields.
NASA Astrophysics Data System (ADS)
Ye, Fei; Marchetti, P. A.; Su, Z. B.; Yu, L.
2017-09-01
The relation between braid and exclusion statistics is examined in one-dimensional systems, within the framework of Chern-Simons statistical transmutation in gauge invariant form with an appropriate dimensional reduction. If the matter action is anomalous, as for chiral fermions, a relation between braid and exclusion statistics can be established explicitly for both mutual and nonmutual cases. However, if it is not anomalous, the exclusion statistics of emergent low energy excitations is not necessarily connected to the braid statistics of the physical charged fields of the system. Finally, we also discuss the bosonization of one-dimensional anyonic systems through T-duality. Dedicated to the memory of Mario Tonin.
CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets
Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.
2017-01-01
High dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend this rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-square error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
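A minimal sketch of the predict-in-feature-space idea on synthetic states: kernel PCA maps high-dimensional snapshots to a low-dimensional manifold, prediction is done there, and the result is mapped back. scikit-learn's learned inverse transform replaces the fixed-point pre-image iteration, and simple linear extrapolation stands in for the predictor.

# Minimal sketch: kernel PCA feature manifold, prediction in the latent space,
# and pre-image recovery via scikit-learn's learned inverse transform.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(12)
t = np.linspace(0, 20, 400)
# Synthetic "surface" states: 500-dimensional snapshots driven by a breathing-like cycle
states = np.outer(np.sin(2 * np.pi * 0.3 * t), rng.normal(size=500)) \
       + 0.3 * np.outer(np.sin(2 * np.pi * 0.6 * t), rng.normal(size=500))

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=1e-3, fit_inverse_transform=True)
Z = kpca.fit_transform(states[:-1])                    # low-dimensional features

# Predict the next latent point by linear extrapolation of the last two samples
z_next = Z[-1] + (Z[-1] - Z[-2])
x_next = kpca.inverse_transform(z_next.reshape(1, -1)) # pre-image in the state space
print("prediction RMSE:", np.sqrt(np.mean((x_next - states[-1]) ** 2)))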
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Deaths from stomach cancer still occur, and early diagnosis is crucial to reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection are developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. At the same time, Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, Classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps methods are used for dimensionality reduction of the features. The high dimension of these features has been reduced to lower dimensions using these methods. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify stomach cancer images with these new lower-dimensional feature sets. New medical systems were developed to measure the effects of these dimensions by obtaining features of different dimensionality with the dimensionality reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
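A rough Python sketch of the LBP_LLE_ANN combination named above, using scikit-image and scikit-learn on synthetic texture patches; the pathology images, feature settings, and network size are all placeholders.

```python
# Sketch: LBP texture features -> locally linear embedding -> small neural network,
# mirroring the LBP_LLE_ANN combination described above. Synthetic texture patches
# stand in for the pathology images, which are not publicly available.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
xx, yy = np.meshgrid(np.arange(64), np.arange(64))

def make_patch(freq, noise_sd=0.02):
    # synthetic texture: sinusoidal pattern of a given spatial frequency plus noise
    img = 0.5 + 0.4 * np.sin(2 * np.pi * freq * xx / 64) * np.sin(2 * np.pi * freq * yy / 64)
    return np.clip(img + rng.normal(0, noise_sd, img.shape), 0, 1)

def lbp_histogram(img, P=8, R=1.0):
    codes = local_binary_pattern((img * 255).astype(np.uint8), P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_histogram(make_patch(f)) for f in [2] * 100 + [12] * 100])
y = np.array([0] * 100 + [1] * 100)          # coarse vs fine texture "classes"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
lle = LocallyLinearEmbedding(n_components=3, n_neighbors=10).fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(lle.transform(X_tr), y_tr)
print("test accuracy:", clf.score(lle.transform(X_te), y_te))
```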
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main inputs to these class predictors are high-dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics, and several prediction tools are available based on these techniques. Studies show that a well-tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well-tuned KPCA and Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier that predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of in the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing the data, and conventional classification methods may not be useful without dimension reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition could be useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture the polynomial trends while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet-based reduction yields better class separation and better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time compared to Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
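A brief sketch of the spectral-axis wavelet reduction being compared above, using PyWavelets on a synthetic hyperspectral cube; the wavelet choices ('haar', 'db4'), decomposition level, and cube size are illustrative assumptions.

```python
# Sketch: reduce the spectral dimension of a hyperspectral cube by keeping only
# the approximation coefficients of a wavelet decomposition along the band axis.
# Synthetic cube; PyWavelets provides both Haar ('haar') and Daubechies
# ('db2', 'db4', ...) filters as compared in the paper.
import numpy as np
import pywt

rng = np.random.default_rng(3)
cube = rng.random((100, 100, 128))            # rows x cols x 128 spectral bands

def reduce_spectra(cube, wavelet="db4", level=2):
    # wavedec along the last (spectral) axis; keep only the approximation coefficients
    coeffs = pywt.wavedec(cube, wavelet, level=level, axis=-1)
    return coeffs[0]                           # low-pass summary of each spectrum

reduced_haar = reduce_spectra(cube, "haar")
reduced_db4 = reduce_spectra(cube, "db4")
print(cube.shape, "->", reduced_haar.shape, reduced_db4.shape)
```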
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.
Spiwok, Vojtěch; Králová, Blanka
2011-12-14
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows one to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics
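The embedding step can be sketched with scikit-learn's Isomap on surrogate conformations; the bias potential itself would be applied in a simulation engine and is outside this sketch, and the random "conformations" below merely stand in for sampled structures.

```python
# Sketch: embed ad hoc generated ring conformations (flattened Cartesian
# coordinates) into three dimensions with Isomap, as a stand-in for the
# 72D -> 3D mapping used as metadynamics collective variables above.
# Random surrogate conformations; a real study would use sampled structures.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(4)
n_conformations, n_atoms = 500, 24
conformations = rng.standard_normal((n_conformations, 3 * n_atoms))   # 72D points

embedding = Isomap(n_neighbors=12, n_components=3)
cv = embedding.fit_transform(conformations)       # 3 collective-variable values per frame
new_frame = rng.standard_normal((1, 3 * n_atoms))
print("CVs of an out-of-sample frame:", embedding.transform(new_frame))
```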
Higher-dimensional Bianchi type-VIh cosmologies
NASA Astrophysics Data System (ADS)
Lorenz-Petzold, D.
1985-09-01
The higher-dimensional perfect fluid equations of a generalization of the (1 + 3)-dimensional Bianchi type-VIh space-time are discussed. Bianchi type-V and Bianchi type-III space-times are also included as special cases. It is shown that the Chodos-Detweiler (1980) mechanism of cosmological dimensional-reduction is possible in these cases.
Econo-ESA in semantic text similarity.
Rahutomo, Faisal; Aritsugi, Masayoshi
2014-01-01
Explicit semantic analysis (ESA) utilizes an immense Wikipedia index matrix in its interpreter part. This part of the analysis multiplies a large matrix by a term vector to produce a high-dimensional concept vector. A similarity measurement between two texts is then performed between two concept vectors with numerous dimensions. The computation is expensive in both the interpretation and similarity measurement steps. This paper proposes an economic scheme of ESA, named econo-ESA. We investigate two aspects of this proposal: dimensional reduction and experiments with various data. We use eight recycled test collections for semantic text similarity. The experimental results show that both the dimensional reduction and the test collection characteristics can influence the results. They also show that an appropriate concept reduction in econo-ESA can decrease the cost with only minor differences from the results of the original ESA.
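A small sketch of the interpreter and its "econo" truncation, assuming a random sparse surrogate in place of the Wikipedia index matrix and arbitrary term vectors.

```python
# Sketch: the ESA interpreter multiplies a (sparse) term-by-concept index matrix
# by a term vector to obtain a concept vector; text similarity is the cosine of
# two concept vectors. The "econo" variant truncates the concept dimension.
# A random sparse surrogate replaces the Wikipedia index matrix.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(5)
n_terms, n_concepts = 5000, 10000
index = sparse.random(n_terms, n_concepts, density=0.001, format="csr", random_state=0)

def concept_vector(term_vec, keep=None):
    v = index.T.dot(term_vec)             # (n_concepts,) concept vector
    if keep is not None:                  # econo-ESA: retain only the strongest concepts
        v = v.copy()
        v[np.argsort(v)[:-keep]] = 0.0
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

t1, t2 = rng.random(n_terms), rng.random(n_terms)
print("full ESA :", cosine(concept_vector(t1), concept_vector(t2)))
print("econo-ESA:", cosine(concept_vector(t1, keep=500), concept_vector(t2, keep=500)))
```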
Xue, Hairong; Wang, Tao; Gong, Hao; Guo, Hu; Fan, Xiaoli; Gao, Bin; Feng, Yaya; Meng, Xianguang; Huang, Xianli; He, Jianping
2018-03-02
As a typical photocatalyst for CO2 reduction, practical applications of TiO2 still suffer from low photocatalytic efficiency and limited visible-light absorption. Herein, a novel Au-nanoparticle (NP)-decorated ordered mesoporous TiO2 (OMT) composite (OMT-Au) was successfully fabricated, in which Au NPs were uniformly dispersed on the OMT. Due to the surface plasmon resonance (SPR) effect derived from the excited Au NPs, the TiO2 shows high photocatalytic performance for CO2 reduction under visible light. The ordered mesoporous TiO2 exhibits superior material and structural properties, with a high surface area that offers more catalytically active sites. More importantly, the three-dimensional transport channels ensure the smooth flow of gas molecules, highly efficient CO2 adsorption, and the fast and steady transmission of hot electrons excited from the Au NPs, which lead to a further improvement in the photocatalytic performance. These results highlight the possibility of improving photocatalytic CO2 reduction under visible light by constructing OMT-based Au-SPR-induced photocatalysts. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tufto, Jarle
2010-01-01
Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between the average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the effective distance by which the immigrants deviate from the local optimum is reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.
Wang, Chen; Xu, Gui-Jun; Han, Zhe; Jiang, Xuan; Zhang, Cheng-Bao; Dong, Qiang; Ma, Jian-Xiong; Ma, Xin-Long
2015-11-01
The aim of the study was to introduce a new method for measuring the residual displacement of the femoral head after internal fixation, explore the relationship between residual displacement and osteonecrosis of the femoral head, and evaluate the risk factors associated with osteonecrosis of the femoral head in patients with femoral neck fractures treated by closed reduction and percutaneous cannulated screw fixation. One hundred and fifty patients who sustained intracapsular femoral neck fractures between January 2011 and April 2013 were enrolled in the study. All were treated with closed reduction and percutaneous cannulated screw internal fixation. The residual displacement of the femoral head after surgery was measured by 3-dimensional reconstruction to evaluate the quality of the reduction. Other data that might affect prognosis were also obtained from outpatient follow-up, telephone calls, or case reviews. Multivariate logistic regression analysis was applied to assess the intrinsic relationship between the risk factors and osteonecrosis of the femoral head. Osteonecrosis of the femoral head occurred in 27 patients (18%). Significant differences were observed regarding the residual displacement of the femoral head and the preoperative Garden classification. Moreover, the new technique revealed some residual displacement of the femoral head in all patients whose reduction was judged to be of high quality on x-ray, and there was a close relationship between residual displacement and ONFH. X-ray alone is therefore limited for evaluating the quality of reduction, whereas three-dimensional reconstruction and digital measurement provide a more accurate assessment. Residual displacement of the femoral head and the preoperative Garden classification were risk factors for osteonecrosis of the femoral head, and high-quality reduction is necessary to avoid complications.
Higher-order gravity in higher dimensions: geometrical origins of four-dimensional cosmology?
NASA Astrophysics Data System (ADS)
Troisi, Antonio
2017-03-01
Determining the cosmological field equations is still very much debated and has led to a wide discussion around different theoretical proposals. A suitable conceptual scheme could be represented by gravity models that naturally generalize Einstein theory, like higher-order gravity theories and higher-dimensional ones. Both of these approaches allow one to define, at the effective level, Einstein field equations equipped with source-like energy-momentum tensors of geometrical origin. In this paper, we discuss the possibility of developing a five-dimensional fourth-order gravity model whose lower-dimensional reduction could provide an interpretation of cosmological four-dimensional matter-energy components. We describe the basic concepts of the model, the complete field equations formalism and the 5-D to 4-D reduction procedure. Five-dimensional f(R) field equations turn out to be equivalent, on the four-dimensional hypersurfaces orthogonal to the extra coordinate, to an Einstein-like cosmological model with three matter-energy tensors related to higher-derivative and higher-dimensional counter-terms. Considering the gravity model with f(R) = f_0 R^n, we investigate the possibility of obtaining five-dimensional power-law solutions. The effective four-dimensional picture and the behaviour of the geometrically induced sources are finally outlined for simple cases of such higher-dimensional solutions.
Detection of Answer Copying Based on the Structure of a High-Stakes Test
ERIC Educational Resources Information Center
Belov, Dmitry I.
2011-01-01
This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…
Locally linear embedding: dimension reduction of massive protostellar spectra
NASA Astrophysics Data System (ADS)
Ward, J. L.; Lumsden, S. L.
2016-09-01
We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in the classification of spectra based on the presence/absence of emission lines and provides a valuable tool for classification and analysis of large spectral data sets.
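A compact sketch of the comparison described above, on synthetic continuum-subtracted spectra with and without a narrow emission line; the wavelength grid, line profile, and neighbor counts are illustrative assumptions.

```python
# Sketch: compare PCA and LLE embeddings of synthetic continuum-subtracted
# spectra that either contain or lack a narrow emission line, mirroring the
# classification-by-line-presence experiment described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
wave = np.linspace(2.0, 2.4, 300)                        # wavelength grid (micron, arbitrary)
line = np.exp(-0.5 * ((wave - 2.166) / 0.002) ** 2)      # Br-gamma-like feature
labels = np.arange(400) % 2                              # 1 = emission line present
spectra = np.array([0.02 * rng.standard_normal(wave.size) + 0.3 * line * lab
                    for lab in labels])

for name, reducer in [("PCA", PCA(n_components=5)),
                      ("LLE", LocallyLinearEmbedding(n_components=5, n_neighbors=15))]:
    emb = reducer.fit_transform(spectra)
    acc = cross_val_score(KNeighborsClassifier(5), emb, labels, cv=5).mean()
    print(f"{name}: 5D embedding, k-NN accuracy = {acc:.2f}")
```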
Gao, Qian; Liu, Lu; Li, Hai-Mei; Tang, Yi-Lang; Wu, Zhao-Min; Chen, Yun; Wang, Yu-Feng; Qian, Qiu-Jin
2015-01-01
As candidate genes of attention-deficit/hyperactivity disorder (ADHD), monoamine oxidase A (MAOA) and synaptophysin (SYP) are both on the X chromosome, and have been suggested to be associated with the predominantly inattentive subtype (ADHD-I). The present study investigates the potential gene-gene interaction (G × G) between rs5905859 of MAOA and rs5906754 of SYP for ADHD in Chinese Han subjects. For the family-based association study, 177 female trios were included. For the case-control study, 1,462 probands and 807 normal controls were recruited. The ADHD Rating Scale-IV (ADHD-RS-IV) was used to evaluate ADHD symptoms. Pedigree-based generalized multifactor dimensionality reduction (PGMDR) for female ADHD trios indicated a significant gene interaction effect of rs5905859 and rs5906754. Generalized multifactor dimensionality reduction (GMDR) indicated potential gene-gene interplay on ADHD-RS-IV scores in female ADHD-I. No associations were observed in male subjects in the case-control analysis. In conclusion, our findings suggest that the interaction of MAOA and SYP may be involved in the genetic mechanism of the ADHD-I subtype and predict ADHD symptoms. © 2014 Wiley Periodicals, Inc.
Use of thermoacoustic excitation for control of turbulent flow over a wall-mounted hump
NASA Astrophysics Data System (ADS)
Yeh, Chi-An; Munday, Phillip; Taira, Kunihiko
2014-11-01
We numerically examine the effectiveness of high-frequency acoustic excitation for drag reduction control of turbulent flow over a wall-mounted hump at a free stream Reynolds number of 500,000 and Mach number of 0.25. Actuation frequencies around Helmholtz number of 3 are considered based on the characteristics of recently developed graphene/carbon nanotube-based surface compliant loud speakers. The present study utilizes LES (CharLES) with an oscillatory heat flux boundary condition to produce high-intensity acoustic waves, which interact with the turbulent flow structures by introducing small-scale perturbations to the shear layer in the wake of the hump. With thermoacoustic control, the recirculation zone downstream of the hump becomes elongated with thinner shear layer profile compared to the uncontrolled case. This change in the flow shifts the low-pressure region of the wake further downstream and results in reduction in drag by 10% for two-dimensional and 15% for three-dimensional flows. The influence of actuation frequency and amplitude is also examined. This work is supported by the US Army Research Office (W911NF-13-1-0062, W911NF-14-1-0224).
Transition Manifolds of Complex Metastable Systems
NASA Astrophysics Data System (ADS)
Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof
2018-04-01
We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.
Evidence and mechanism of Hurricane Fran-Induced ocean cooling in the Charleston Trough
NASA Astrophysics Data System (ADS)
Xie, Lian; Pietrafesa, L. J.; Bohm, E.; Zhang, C.; Li, X.
Evidence of enhanced sea surface cooling during and following the passage of Hurricane Fran in September 1996 over an oceanic depression located on the ocean margin offshore of Charleston, South Carolina (referred to as the Charleston Trough) [Pietrafesa, 1983] is documented. Approximately 4°C of sea surface temperature (SST) reduction within the Charleston Trough following the passage of Hurricane Fran was estimated based on SST imagery from the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA-14 polar-orbiting satellite. Simulations using a three-dimensional coastal ocean model indicate that the largest SST reduction occurred within the Charleston Trough. This SST reduction can be explained by oceanic mixing due to storm-induced internal inertia-gravity waves.
Wake Management Strategies for Reduction of Turbomachinery Fan Noise
NASA Technical Reports Server (NTRS)
Waitz, Ian A.
1998-01-01
The primary objective of our work was to evaluate and test several wake management schemes for the reduction of turbomachinery fan noise. Throughout the course of this work we relied on several tools. These include 1) Two-dimensional steady boundary-layer and wake analyses using MISES (a thin-shear layer Navier-Stokes code), 2) Two-dimensional unsteady wake-stator interaction simulations using UNSFLO, 3) Three-dimensional, steady Navier-Stokes rotor simulations using NEWT, 4) Internal blade passage design using quasi-one-dimensional passage flow models developed at MIT, 5) Acoustic modeling using LINSUB, 6) Acoustic modeling using VO72, 7) Experiments in a low-speed cascade wind-tunnel, and 8) ADP fan rig tests in the MIT Blowdown Compressor.
Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren
2017-10-01
BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction and analysis of radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion chromatography coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
Jeong, Ji-Wook; Chae, Seung-Hoon; Chae, Eun Young; Kim, Hak Hee; Choi, Young-Wook; Lee, Sooyeul
2016-01-01
We propose a computer-aided detection (CADe) algorithm for microcalcification (MC) clusters in reconstructed digital breast tomosynthesis (DBT) images. The algorithm consists of prescreening, MC detection, clustering, and false-positive (FP) reduction steps. The DBT images containing the MC-like objects were enhanced by a multiscale Hessian-based three-dimensional (3D) objectness response function, and a connected-component segmentation method was applied to extract the cluster seed objects as potential clustering centers of MCs. Secondly, a signal-to-noise ratio (SNR) enhanced image was also generated to detect the individual MC candidates and prescreen the MC-like objects. Each cluster seed candidate was prescreened by counting neighboring individual MC candidates near the cluster seed object according to several microcalcification clustering criteria. As a second step, we introduced bounding boxes for the accepted seed candidates, clustered all the overlapping cubes, and examined them. After the FP reduction step, the average number of FPs per case was estimated to be 2.47 per DBT volume with a sensitivity of 83.3%.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
Epistasis or gene-gene interaction is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates like age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-the-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
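The maximization condition lends itself to a very short sketch: evaluate ω^T L ω with networkx and greedily accept edge rewirings that increase it. The graph size, frequencies, and the naive single-edge-swap move below are assumptions, not the authors' algorithm in detail.

```python
# Sketch: evaluate the synchrony functional omega^T L omega for a network and
# greedily rewire edges to increase it, a bare-bones version of the hill-climb
# rewiring idea described above. Small random graph and frequencies are assumed.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n, m = 30, 60
g = nx.gnm_random_graph(n, m, seed=0)
omega = rng.standard_normal(n)

def objective(graph):
    L = nx.laplacian_matrix(graph).toarray()
    return float(omega @ L @ omega)

best = objective(g)
for _ in range(2000):                          # naive hill climb: swap one edge at a time
    u, v = list(g.edges())[rng.integers(g.number_of_edges())]
    a, b = rng.integers(n, size=2)
    if a == b or g.has_edge(a, b):
        continue
    g.remove_edge(u, v); g.add_edge(a, b)      # propose a rewiring
    new = objective(g)
    if new > best:
        best = new                             # keep improving moves
    else:
        g.remove_edge(a, b); g.add_edge(u, v)  # revert otherwise
print("optimized omega^T L omega:", best)
```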
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smirnov, A. G., E-mail: smirnov@lpi.ru
2015-12-15
We develop a general technique for finding self-adjoint extensions of a symmetric operator that respects a given set of its symmetries. Problems of this type naturally arise when considering two- and three-dimensional Schrödinger operators with singular potentials. The approach is based on constructing a unitary transformation diagonalizing the symmetries and reducing the initial operator to the direct integral of a suitable family of partial operators. We prove that symmetry preserving self-adjoint extensions of the initial operator are in a one-to-one correspondence with measurable families of self-adjoint extensions of partial operators obtained by reduction. The general scheme is applied to themore » three-dimensional Aharonov-Bohm Hamiltonian describing the electron in the magnetic field of an infinitely thin solenoid. We construct all self-adjoint extensions of this Hamiltonian, invariant under translations along the solenoid and rotations around it, and explicitly find their eigenfunction expansions.« less
Efficient two-dimensional compressive sensing in MIMO radar
NASA Astrophysics Data System (ADS)
Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad
2017-12-01
Compressive sensing (CS) offers a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods, while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
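A bare-bones sketch of coherence-driven measurement matrix design: gradient descent on a smooth surrogate (the Frobenius distance of the Gram matrix from the identity), which is one common stand-in for minimizing mutual coherence. Dimensions, dictionary, step size, and iteration count are illustrative assumptions, not the 2D-MMDGD formulation itself.

```python
# Sketch: design a measurement matrix by gradient descent on a smooth surrogate
# of sensing-matrix coherence (||A^T A - I||_F^2 for A = Phi * Psi), in the
# spirit of the coherence-minimizing design described above.
import numpy as np

rng = np.random.default_rng(8)
m, n = 32, 128                                        # measurements x atoms
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))    # orthonormal sparsifying dictionary
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # measurement matrix to optimize

def mutual_coherence(A):
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

print("initial coherence:", round(mutual_coherence(Phi @ Psi), 3))
step = 1e-3
for _ in range(2000):
    A = Phi @ Psi
    grad_A = 4.0 * A @ (A.T @ A - np.eye(n))          # d/dA ||A^T A - I||_F^2
    Phi -= step * grad_A @ Psi.T                      # chain rule back onto Phi
print("optimized coherence:", round(mutual_coherence(Phi @ Psi), 3))
```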
Principles of three-dimensional printing and clinical applications within the abdomen and pelvis.
Bastawrous, Sarah; Wake, Nicole; Levin, Dmitry; Ripley, Beth
2018-04-04
Improvements in technology and reduction in costs have led to widespread interest in three-dimensional (3D) printing. 3D-printed anatomical models contribute to personalized medicine, surgical planning, and education across medical specialties, and these models are rapidly changing the landscape of clinical practice. A physical object that can be held in one's hands allows for significant advantages over standard two-dimensional (2D) or even 3D computer-based virtual models. Radiologists have the potential to play a significant role as consultants and educators across all specialties by providing 3D-printed models that enhance clinical care. This article reviews the basics of 3D printing, including how models are created from imaging data, clinical applications of 3D printing within the abdomen and pelvis, implications for education and training, limitations, and future directions.
Sahin, Ismail; Iskender, Salim; Ozturk, Serdar; Balaban, Birol; Isik, Selcuk
2013-06-01
Breast hypertrophy is a significant health burden with symptoms of back and shoulder pain, intertrigo, and shoulder grooving from the bra straps. Women often rely on surgery to relieve these symptoms, and they are mostly satisfied with the results. Satisfaction with surgery is usually evaluated by subjective measures, and objective evidence testing of the surgical outcomes is lacking. In this study, 10 women with breast hypertrophy underwent reduction mammaplasty. Their surgical outcomes were evaluated using three-dimensional gait analysis before surgery and 2 months afterward. A statistical difference was sought between the kinematic data of the spine, hip, knee, and ankle joints. The average maximum anterior pelvic tilt angles decreased 41%, and the average maximum spine anterior flexion angles decreased 30%. The difference between the pre- and postoperative values was statistically significant. The analysis of the kinematic data showed no significant difference in the hip, knee, or ankle joint angles postoperatively. The outcomes of breast reduction surgery have until recently been evaluated mostly by subjective means. As objective evidence of surgical benefit in the current study, reduction mammaplasty resulted in improved body posture when the patients walked. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Simplifying the representation of complex free-energy landscapes using sketch-map
Ceriotti, Michele; Tribello, Gareth A.; Parrinello, Michele
2011-01-01
A new scheme, sketch-map, for obtaining a low-dimensional representation of the region of phase space explored during an enhanced dynamics simulation is proposed. We show evidence, from an examination of the distribution of pairwise distances between frames, that some features of the free-energy surface are inherently high-dimensional. This makes dimensionality reduction problematic because the data do not satisfy the assumptions made in conventional manifold learning algorithms. We therefore propose that when dimensionality reduction is performed on trajectory data one should think of the resultant embedding as a quickly sketched set of directions rather than a road map. In other words, the embedding tells one about the connectivity between states but does not provide the vectors that correspond to the slow degrees of freedom. This realization informs the development of sketch-map, which endeavors to reproduce the proximity information from the high-dimensionality description in a space of lower dimensionality even when a faithful embedding is not possible. PMID:21730167
The semantic representation of prejudice and stereotypes.
Bhatia, Sudeep
2017-07-01
We use a theory of semantic representation to study prejudice and stereotyping. Particularly, we consider large datasets of newspaper articles published in the United States, and apply latent semantic analysis (LSA), a prominent model of human semantic memory, to these datasets to learn representations for common male and female, White, African American, and Latino names. LSA performs a singular value decomposition on word distribution statistics in order to recover word vector representations, and we find that our recovered representations display the types of biases observed in human participants using tasks such as the implicit association test. Importantly, these biases are strongest for vector representations with moderate dimensionality, and weaken or disappear for representations with very high or very low dimensionality. Moderate dimensional LSA models are also the best at learning race, ethnicity, and gender-based categories, suggesting that social category knowledge, acquired through dimensionality reduction on word distribution statistics, can facilitate prejudiced and stereotyped associations. Copyright © 2017 Elsevier B.V. All rights reserved.
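A toy sketch of the LSA pipeline, truncated SVD of a word-by-document count matrix followed by cosine similarity between word vectors, on a four-sentence corpus; the corpus, the probe words, and the two-dimensional embedding are purely illustrative.

```python
# Sketch: recover LSA-style word vectors by truncated SVD of a word-by-document
# count matrix and probe an association, loosely following the procedure above.
# The tiny corpus and word lists are purely illustrative.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "engineer wrote code for the project",
    "nurse cared for the patient kindly",
    "john the engineer fixed the code",
    "mary the nurse helped the patient",
]
counts = CountVectorizer().fit(docs)
X = counts.transform(docs)                      # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X.T)                                    # terms x documents -> 2D term vectors
vecs = dict(zip(counts.get_feature_names_out(), svd.transform(X.T)))

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# association of two names with an attribute word, as a crude bias probe
print("john~engineer:", cos(vecs["john"], vecs["engineer"]))
print("mary~engineer:", cos(vecs["mary"], vecs["engineer"]))
```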
Restoration of dimensional reduction in the random-field Ising model at five dimensions
NASA Astrophysics Data System (ADS)
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
Dimensional reduction for a SIR type model
NASA Astrophysics Data System (ADS)
Cahyono, Edi; Soeharyadi, Yudi; Mukhsar
2018-03-01
Epidemic phenomena are often modeled in the form of dynamical systems. Such models have also been used to describe the spread of rumors, the spread of extreme ideology, and the dissemination of knowledge. Among the simplest is the SIR (susceptible, infected and recovered) model, which consists of three compartments and hence three variables. The variables are functions of time representing the sizes of the three subpopulations, namely the susceptible, infected and recovered. The sum of the three is assumed to be constant; hence, the model is actually two-dimensional, sitting in a three-dimensional ambient space. This paper deals with the reduction of a SIR type model to two variables in a two-dimensional ambient space in order to understand the geometry and dynamics better. The dynamics is studied, and the phase portrait is presented. The two-dimensional model preserves the equilibria and their stability. The model has been applied to knowledge dissemination, which has been of interest in knowledge management.
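The reduction itself is elementary: since S + I + R = N is conserved, R can be eliminated and the dynamics studied in the (S, I) plane. A short sketch with assumed parameter values follows.

```python
# Sketch: the SIR conservation S + I + R = N lets R be eliminated, reducing the
# model to a planar system in (S, I). Parameter values below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

N, beta, gamma = 1.0, 0.5, 0.1        # normalized population, contact, recovery rates

def reduced_sir(t, y):
    S, I = y                           # R = N - S - I is implicit
    return [-beta * S * I / N, beta * S * I / N - gamma * I]

sol = solve_ivp(reduced_sir, (0, 160), [0.99, 0.01], dense_output=True)
t = np.linspace(0, 160, 400)
S, I = sol.sol(t)
plt.plot(S, I)                         # phase portrait in the reduced (S, I) plane
plt.xlabel("S"); plt.ylabel("I"); plt.title("Reduced SIR phase portrait")
plt.show()
```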
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
Wang, Zuowei; Xia, Siqing; Xu, Xiaoyin; Wang, Chenhui
2016-02-01
In this study, a one-dimensional multispecies model (ODMSM) was utilized to simulate NO3(-)-N and ClO4(-) reduction performance in two kinds of H2-based membrane-aeration biofilm reactors (H2-MBfR) under different operating conditions (e.g., NO3(-)-N/ClO4(-) loading rates, H2 partial pressure, etc.). Before the simulation process, we conducted a sensitivity analysis of key parameters that would fluctuate under different environmental conditions; we then used the experimental data to calibrate the more sensitive parameters μ1 and μ2 (maximum specific growth rates of denitrification bacteria and perchlorate reduction bacteria) in the two H2-MBfRs. The diversity of the two key parameters' values in the two types of reactors may result from the different carbon sources fed to the reactors. From the simulation results of six different operating conditions (four in H2-MBfR 1 and two in H2-MBfR 2), the applicability of the model was confirmed, and the variation of the removal tendency under different operating conditions could be well simulated. Besides, the rationality of operating parameters (H2 partial pressure, etc.) could be judged, especially under conditions of high nutrient loading rates. To a certain degree, the model can provide theoretical guidance for determining the operating parameters under specific conditions in practical applications.
Characteristic-based algorithms for flows in thermo-chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David
1990-01-01
A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes including explicit m-stage Runge-Kutta, implicit approximate-factorization, relaxation and LU decomposition are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.
Katagiri, Fumiaki; Glazebrook, Jane
2003-01-01
A major task in computational analysis of mRNA expression profiles is definition of relationships among profiles on the basis of similarities among them. This is generally achieved by pattern recognition in the distribution of data points representing each profile in a high-dimensional space. Some drawbacks of commonly used pattern recognition algorithms stem from their use of a globally linear space and/or limited degrees of freedom. A pattern recognition method called Local Context Finder (LCF) is described here. LCF uses nonlinear dimensionality reduction for pattern recognition. Then it builds a network of profiles based on the nonlinear dimensionality reduction results. LCF was used to analyze mRNA expression profiles of the plant host Arabidopsis interacting with the bacterial pathogen Pseudomonas syringae. In one case, LCF revealed two dimensions essential to explain the effects of the NahG transgene and the ndr1 mutation on resistant and susceptible responses. In another case, plant mutants deficient in responses to pathogen infection were classified on the basis of LCF analysis of their profiles. The classification by LCF was consistent with the results of biological characterization of the mutants. Thus, LCF is a powerful method for extracting information from expression profile data. PMID:12960373
PCA based clustering for brain tumor segmentation of T1w MRI images.
Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay
2017-03-01
Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering of T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two clustering methods. The five most common PCA algorithms, namely the conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumors were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all the methods. Furthermore, the EM-PCA and the PPCA helped the K-Means algorithm to achieve the best clustering performance in most cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
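A minimal sketch of the baseline PCA-plus-K-Means combination evaluated above, applied to patches of a surrogate 512 × 512 slice; the patch size, number of components, and number of clusters are arbitrary choices for illustration.

```python
# Sketch: PCA-based dimensionality reduction followed by K-Means, the baseline
# combination among the variants compared above. A random image stands in for a
# T1-weighted MRI slice; patches are clustered rather than diagnosed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
image = rng.random((512, 512))                      # surrogate T1w slice

# split the slice into 8x8 patches and treat each patch as a 64-D sample
patches = image.reshape(64, 8, 64, 8).transpose(0, 2, 1, 3).reshape(-1, 64)

pca = PCA(n_components=10)
reduced = pca.fit_transform(patches)                # 4096 samples x 10 components
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
segmentation = labels.reshape(64, 64)               # coarse patch-level label map
print("patch label counts:", np.bincount(labels))
```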
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
Slaying Hydra: A Python-Based Reduction Pipeline for the Hydra Multi-Object Spectrograph
NASA Astrophysics Data System (ADS)
Seifert, Richard; Mann, Andrew
2018-01-01
We present a Python-based data reduction pipeline for the Hydra Multi-Object Spectrograph on the WIYN 3.5 m telescope, an instrument which enables simultaneous spectroscopy of up to 93 targets. The reduction steps carried out include flat-fielding, dynamic fiber tracing, wavelength calibration, optimal fiber extraction, and sky subtraction. The pipeline also supports the use of sky lines to correct for zero-point offsets between fibers. To account for the moving parts on the instrument and telescope, fiber positions and wavelength solutions are derived in real-time for each dataset. The end result is a one-dimensional spectrum for each target fiber. Quick and fully automated, the pipeline enables on-the-fly reduction while observing, and has been known to outperform the IRAF pipeline by more accurately reproducing known RVs. While Hydra has many configurations in both high- and low-resolution, the pipeline was developed and tested with only one high-resolution mode. In the future we plan to expand the pipeline to work in most commonly used modes.
Helical Channel Design and Technology for Cooling of Muon Beams
NASA Astrophysics Data System (ADS)
Yonehara, K.; Derbenev, Y. S.; Johnson, R. P.
2010-11-01
Novel magnetic helical channel designs for capture and cooling of bright muon beams are being developed using numerical simulations based on new inventions such as helical solenoid (HS) magnets and hydrogen-pressurized RF (HPRF) cavities. We are close to the factor of a million six-dimensional phase space (6D) reduction needed for muon colliders. Recent experimental and simulation results are presented.
High Performance Thermoelectric Cryocoolers Based on II-VI Low Dimensional Structures
2015-05-26
This report concerns high-performance thermoelectric cryocoolers based on II-VI low-dimensional structures for applications around 210-250 K, where noise reduction and improved signal resolution are crucial, such as infrared detectors. Reported work includes development of a TEC-integrated HOT MWIR detector for tactical applications and an Integrated Dewar-Detector Cooler Assembly (IDDCA) that incorporates the prototype TEC into a typical long-range thermal imager dewar package.
Kupinski, M. K.; Clarkson, E.
2015-01-01
We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. The optimum channels for three of the FOMs under the equal mean condition are shown to be the same. This result is critical since some of the FOMs are much easier to compute. Implementing the CQO requires a set of channels and the first- and second-order statistics of channelized image data from both classes. The dimensionality reduction from M measurements to L channels is a critical advantage of CQO since estimating image statistics from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. In a simulation study we compare the performance of ideal and Hotelling observers to CQO. The optimal CQO channels are calculated using both eigenanalysis and a new gradient-based algorithm for maximizing Jeffrey's divergence (J). Optimal channel selection without eigenanalysis makes the J-CQO on large-dimensional image data feasible. PMID:26366764
Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. Results: In the large U.S. data set, sample high-performance results include AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497
NASA Astrophysics Data System (ADS)
Wu, Yu-Liang; Jiang, Ze-Yi; Zhang, Xin-Xin; Xue, Qing-Guo; Yu, Ai-Bing; Shen, Yan-Song
2017-10-01
Metallurgical dusts can be recycled through direct reduction in rotary hearth furnaces (RHFs) via addition into carbon-based composite pellets. While iron in the dust is recycled, several heavy and alkali metal elements harmful for blast furnace operation, including Zn, Pb, K, and Na, can also be separated and then recycled. However, there is a lack of understanding on thermochemical behavior related to direct reduction in an industrial-scale RHF, especially removal behavior of Zn, Pb, K, and Na, leading to technical issues in industrial practice. In this work, an integrated model of the direct reduction process in an industrial-scale RHF is described. The integrated model includes three mathematical submodels and one physical model, specifically, a three-dimensional (3-D) CFD model of gas flow and heat transfer in an RHF chamber, a one-dimensional (1-D) CFD model of direct reduction inside a pellet, an energy/mass equilibrium model, and a reduction physical experiment using a Si-Mo furnace. The model is validated by comparing the simulation results with measurements in terms of furnace temperature, furnace pressure, and pellet indexes. The model is then used for describing in-furnace phenomena and pellet behavior in terms of heat transfer, direct reduction, and removal of a range of heavy and alkali metal elements under industrial-scale RHF conditions. The results show that the furnace temperature in the preheating section should be kept at a higher level in an industrial-scale RHF compared with that in a pilot-scale RHF. The removal rates of heavy and alkali metal elements inside the composite pellet are all faster than iron metallization, specifically in the order of Pb, Zn, K, and Na.
Ephaptic conduction in a cardiac strand model with 3D electrodiffusion
Mori, Yoichiro; Fishman, Glenn I.; Peskin, Charles S.
2008-01-01
We study cardiac action potential propagation under severe reduction in gap junction conductance. We use a mathematical model of cellular electrical activity that takes into account both three-dimensional geometry and ionic concentration effects. Certain anatomical and biophysical parameters are varied to see their impact on cardiac action potential conduction velocity. This study uncovers quantitative features of ephaptic propagation that differ from previous studies based on one-dimensional models. We also identify a mode of cardiac action potential propagation in which the ephaptic and gap-junction-mediated mechanisms alternate. Our study demonstrates the usefulness of this modeling approach for electrophysiological systems especially when detailed membrane geometry plays an important role. PMID:18434544
Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics
NASA Astrophysics Data System (ADS)
Wehmeyer, Christoph; Noé, Frank
2018-06-01
Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
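A minimal sketch of the idea, assuming PyTorch is available: an autoencoder is trained to reconstruct the time-lagged frame x(t+τ) from x(t), so the bottleneck is steered toward slow collective variables. The toy trajectory, lag time, and network sizes are illustrative and unrelated to the paper's systems.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy trajectory: one slow 1-D process embedded in 10 noisy dimensions.
T, dim, lag, latent = 5000, 10, 10, 1
slow = torch.cumsum(0.01 * torch.randn(T), 0)          # slow random walk
X = slow[:, None] * torch.randn(1, dim) + 0.1 * torch.randn(T, dim)
x_t, x_lag = X[:-lag], X[lag:]                          # pairs (x(t), x(t+tau))

encoder = nn.Sequential(nn.Linear(dim, 32), nn.ELU(), nn.Linear(32, latent))
decoder = nn.Sequential(nn.Linear(latent, 32), nn.ELU(), nn.Linear(32, dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    # Time-lagged reconstruction: encode x(t), decode towards x(t+tau).
    loss = loss_fn(decoder(encoder(x_t)), x_lag)
    loss.backward()
    opt.step()

print("final time-lagged reconstruction loss:", loss.item())
z = encoder(X).detach()   # low-dimensional embedding of the slow dynamics
```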
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a significant challenge to extract optimal features that improve classification while simultaneously decreasing the feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To directly extract nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into a simple K-nearest neighbor (KNN) classifier to recognize the different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms other conventional approaches.
Compilation of load spectrum of loader drive axle
NASA Astrophysics Data System (ADS)
Wei, Yongxiang; Zhu, Haoyue; Tang, Heng; Yuan, Qunwei
2018-03-01
In order to study the preparation of a gear fatigue load spectrum for loaders, load signals are collected for four typical working conditions of the loader. A signal reflecting the pattern of load variation is obtained by preprocessing the original signal. The torque cycles of the drive axle are counted using the rain-flow counting method. According to the operating-time ratio of each working condition, a two-dimensional load spectrum based on the real working conditions of the loader drive axle is established by cycle extrapolation and synthesis. The two-dimensional load spectrum is then converted into a one-dimensional load spectrum by means of an equal-damage method applied to the mean torque. The torque amplification step incorporates the maximum load torque of the main reduction gear, and, based on equal-damage theory, the accelerated test cycles are calculated. In this way, a load spectrum of the drive axle is prepared that reflects the loading conditions of the loader. The load spectrum can serve as a reference for fatigue life testing and life prediction of the loader drive axle.
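The equal-damage conversion and test acceleration mentioned above follow standard Miner-rule reasoning: cycles at different torque levels are replaced by an equivalent torque (or an equivalent number of cycles at a higher test torque) that produces the same cumulative damage for a Basquin-type S-N exponent. The sketch below uses made-up cycle counts and an assumed exponent purely for illustration.

```python
import numpy as np

# Hypothetical 1-D load spectrum: torque level (N*m) and counted cycles.
torque = np.array([800.0, 1200.0, 1600.0, 2000.0])
cycles = np.array([5.0e5, 2.0e5, 5.0e4, 5.0e3])
m = 6.0   # assumed Basquin S-N exponent for the gear material

# Damage-equivalent torque over the same total number of cycles
# (Miner's rule: damage ~ n * T**m).
n_total = cycles.sum()
T_eq = (np.sum(cycles * torque**m) / n_total) ** (1.0 / m)

# Accelerated test: number of cycles at an elevated torque level that
# produces the same cumulative damage as the full spectrum.
T_test = 2200.0
n_accelerated = np.sum(cycles * (torque / T_test) ** m)

print(f"equivalent torque: {T_eq:.0f} N*m over {n_total:.2e} cycles")
print(f"accelerated cycles at {T_test:.0f} N*m: {n_accelerated:.2e}")
```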
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua
2017-07-11
The development of active, durable, and low-cost catalysts to replace noble metal-based materials is highly desirable to promote the sluggish oxygen reduction reaction in fuel cells. Herein, nitrogen- and fluorine-codoped three-dimensional carbon nanowire aerogels, composed of interconnected carbon nanowires, were synthesized for the first time by a hydrothermal carbonization process. Owing to their porous nanostructures and heteroatom doping, the as-prepared carbon nanowire aerogels, with optimized composition, present excellent electrocatalytic activity that is comparable to commercial Pt/C. Remarkably, the aerogels also exhibit superior stability and methanol tolerance. This synthesis procedure paves a new way to design novel heteroatom-doped catalysts.
NASA Astrophysics Data System (ADS)
Campoamor-Stursberg, R.
2018-03-01
A procedure for the construction of nonlinear realizations of Lie algebras in the context of Vessiot-Guldberg-Lie algebras of first-order systems of ordinary differential equations (ODEs) is proposed. The method is based on the reduction of invariants and projection of lowest-dimensional (irreducible) representations of Lie algebras. Applications to the description of parameterized first-order systems of ODEs related by contraction of Lie algebras are given. In particular, the kinematical Lie algebras in (2 + 1)- and (3 + 1)-dimensions are realized simultaneously as Vessiot-Guldberg-Lie algebras of parameterized nonlinear systems in R3 and R4, respectively.
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.
2017-08-01
This paper proposes an improved input shaping scheme for efficient sway control of a nonlinear three-dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. Using this approach, a higher payload sway reduction is obtained because the input shaper is designed based on a complete nonlinear model, as compared to the analytical input shaping scheme derived using a linear second-order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performance is evaluated based on the maximum residual payload sway and Integrated Absolute Error (IAE) values, which indicate the total payload sway of the crane. In the experiments, the superiority of the proposed approach over the analytical design is shown by 30-50% reductions of the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is concluded that the proposed PSO-based input shaping design provides higher payload sway reductions for a 3D overhead crane with friction compared to conventionally designed input shapers.
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction.
Program summary:
Program title: MSSMdreg2dred.mod
Catalogue identifier: AEKR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: LGPL-License [1]
No. of lines in distributed program, including test data, etc.: 7600
No. of bytes in distributed program, including test data, etc.: 197 629
Distribution format: tar.gz
Programming language: Mathematica, FeynArts
Computer: Any, capable of running Mathematica and FeynArts
Operating system: Any, with running Mathematica, FeynArts installation
Classification: 4.4, 5, 11.1
Subprograms used: ADOW_v1_0 (FeynArts), CPC 140 (2001) 418
Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
Running time: A few seconds to generate typical Feynman graphs with FeynArts.
Using Betweenness Centrality to Identify Manifold Shortcuts
Cukierski, William J.; Foran, David J.
2010-01-01
High-dimensional data presents a challenge to tasks of pattern recognition and machine learning. Dimensionality reduction (DR) methods remove the unwanted variance and make these tasks tractable. Several nonlinear DR methods, such as the well-known ISOMAP algorithm, rely on a neighborhood graph to compute geodesic distances between data points. These graphs can contain unwanted edges which connect disparate regions of one or more manifolds. This topological sensitivity is well known [1], [2], [3], yet handling high-dimensional, noisy data in the absence of a priori manifold knowledge remains an open and difficult problem. This work introduces a divisive, edge-removal method based on graph betweenness centrality which can robustly identify manifold-shorting edges. The problem of graph construction in high dimension is discussed and the proposed algorithm is fitted into the ISOMAP workflow. ROC analysis is performed and the performance is tested on synthetic and real datasets. PMID:20607142
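The core step, scoring neighborhood-graph edges by betweenness centrality and removing the highest-scoring ones as suspected shortcuts, can be sketched with NetworkX (version 2.7 or later assumed) and scikit-learn. The k-NN construction, percentile threshold, and noisy manifold below are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
# Noisy "Swiss roll"-like 2-D manifold embedded in 3-D.
t = 3 * np.pi * (1 + 2 * rng.random(400)) / 2
X = np.c_[t * np.cos(t), 20 * rng.random(400), t * np.sin(t)]
X += 0.5 * rng.standard_normal(X.shape)

# Symmetric k-NN neighborhood graph with Euclidean edge weights.
A = kneighbors_graph(X, n_neighbors=8, mode='distance')
G = nx.from_scipy_sparse_array(A)

# Edge betweenness: shortcut edges funnel many shortest paths through them.
ebc = nx.edge_betweenness_centrality(G, weight='weight')
threshold = np.quantile(list(ebc.values()), 0.99)
shortcuts = [e for e, c in ebc.items() if c > threshold]
G.remove_edges_from(shortcuts)
print(f"removed {len(shortcuts)} candidate shortcut edges")
# The pruned graph can then be fed to the geodesic-distance step of ISOMAP.
```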
Excitation basis for (3+1)d topological phases
NASA Astrophysics Data System (ADS)
Delcamp, Clement
2017-12-01
We consider an exactly solvable model in 3+1 dimensions, based on a finite group, which is a natural generalization of Kitaev's quantum double model. The corresponding lattice Hamiltonian yields excitations located at torus-boundaries. By cutting open the three-torus, we obtain a manifold bounded by two tori which supports states satisfying a higher-dimensional version of Ocneanu's tube algebra. This defines an algebraic structure extending the Drinfel'd double. Its irreducible representations, labeled by two fluxes and one charge, characterize the torus-excitations. The tensor product of such representations is introduced in order to construct a basis for (3+1)d gauge models which relies upon the fusion of the defect excitations. This basis is defined on manifolds of the form Σ × S_1 , with Σ a two-dimensional Riemann surface. As such, our construction is closely related to dimensional reduction from (3+1)d to (2+1)d topological orders.
Mass reduction patterning of silicon-on-oxide-based micromirrors
NASA Astrophysics Data System (ADS)
Hall, Harris J.; Green, Andrew; Dooley, Sarah; Schmidt, Jason D.; Starman, LaVern A.; Langley, Derrick; Coutu, Ronald A.
2016-10-01
It has long been recognized in the design of micromirror-based optical systems that balancing static flatness of the mirror surface through structural design with the system's mechanical dynamic response is challenging. Although a variety of mass reduction approaches have been presented in the literature to address this performance trade-off, there has been little quantifiable comparison reported. In this work, different mass reduction approaches, some unique to the work, are quantifiably compared with solid plate thinning in both curvature and mass using commercial finite element simulation of a specific square silicon-on-insulator-based micromirror geometry. Other important considerations for micromirror surfaces, including surface profile and smoothness, are also discussed. Fabrication of one of these geometries, a two-dimensional tessellated square pattern, was performed in the presence of a 400-μm-tall central post structure using a simple single-mask process. Limited experimental curvature measurements of fabricated samples are shown to correspond well with properly characterized simulation results and indicate a ~67% improvement in radius of curvature in comparison to a solid plate design of equivalent mass.
Tsukagoshi, Yuta; Kamada, Hiroshi; Kamegaya, Makoto; Takeuchi, Ryoko; Nakagawa, Shogo; Tomaru, Yohei; Tanaka, Kenta; Onishi, Mio; Nishino, Tomofumi; Yamazaki, Masashi
2018-05-02
Previous reports on patients with developmental dysplasia of the hip (DDH) showed that the prereduced femoral head was notably smaller and more nonspherical than the intact head, with growth failure observed at the proximal posteromedial area. We evaluated the shape of the femoral head cartilage in patients with DDH before and after reduction, with size and sphericity assessed using 3-dimensional (3D) magnetic resonance imaging (MRI). We studied 10 patients with unilateral DDH (all female) who underwent closed reduction. Patients with avascular necrosis of the femoral head on the plain radiograph 1 year after reduction were excluded. 3D MRI was performed before reduction and after reduction, at 2 years of age. 3D-image analysis software was used to reconstruct the multiplanes. After setting the axial, coronal, and sagittal planes in the software (based on the femoral shaft and neck axes), the smallest sphere that included the femoral head cartilage was drawn, the diameter was measured, and the center of the sphere was defined as the femoral head center. We measured the distance between the center and cartilage surface every 30 degrees on the 3 reconstructed planes. Sphericity of the femoral head was calculated using a ratio (the distance divided by each radius) and compared between prereduction and postreduction. The mean patient age was 7±3 and 26±3 months at the first and second MRI, respectively. The mean duration between the reduction and second MRI was 18±3 months. The femoral head diameter was 26.7±1.5 and 26.0±1.6 mm on the diseased and intact sides, respectively (P=0.069). The ratios of the posteromedial area on the axial plane and the proximoposterior area on the sagittal plane after reduction were significantly larger than before reduction (P<0.01). We demonstrated that the size of the reduced femoral head was nearly equal to that of the intact femoral head and that the growth failure area of the head before reduction, in the proximal posteromedial area, was remodeled after reduction. Level IV-case series.
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
NASA Astrophysics Data System (ADS)
Hyman, J.; Hagberg, A.; Srinivasan, G.; Mohd-Yusof, J.; Viswanathan, H. S.
2017-12-01
We present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. Accurate estimates of first passage times are obtained with an order of magnitude reduction of CPU time and mesh size using the proposed method.
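A toy version of the graph reduction, taking the union of the k shortest inflow-to-outflow paths in a weighted graph and keeping only that subgraph, is shown below with NetworkX. The random graph, edge weights, and value of k are placeholders; actual DFN graphs are derived from fracture topology and flow boundary conditions.

```python
import itertools
import networkx as nx

# Placeholder weighted graph standing in for a DFN-derived graph.
G = nx.connected_watts_strogatz_graph(200, 6, 0.3, seed=1)
for u, v in G.edges:
    G[u][v]['weight'] = 1.0 + abs(u - v) / 200.0   # arbitrary edge cost
source, target = 0, 199                             # inflow / outflow nodes

# Union of the k shortest simple paths between inflow and outflow nodes.
k = 10
paths = itertools.islice(
    nx.shortest_simple_paths(G, source, target, weight='weight'), k)
sub_nodes = set()
for p in paths:
    sub_nodes.update(p)
H = G.subgraph(sub_nodes)

print(f"full graph: {G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
print(f"k-shortest-path subnetwork: {H.number_of_nodes()} nodes, "
      f"{H.number_of_edges()} edges")
# Transport would then be simulated only on the subnetwork corresponding to H.
```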
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Anibal Boscoboinik; Zhong, Jian -Qiang; Kestell, John
2016-03-23
The oxidation and reduction of Ru(0001) surfaces at the confined space between two-dimensional nanoporous silica frameworks and Ru(0001) have been investigated using synchrotron-based ambient pressure X-ray photoelectron spectroscopy (AP-XPS). The porous nature of the frameworks and the weak interaction between the silica and the ruthenium substrate allow oxygen and hydrogen molecules to go through the nanopores and react with the metal at the interface between the silica framework and the metal surface. In this work, three types of two-dimensional silica frameworks have been used to study their influence on the oxidation and reduction of the ruthenium surface at elevated pressures and temperatures. These frameworks are bilayer silica (0.5 nm thick), bilayer aluminosilicate (0.5 nm thick), and zeolite MFI nanosheets (3 nm thick). It is found that the silica frameworks stay essentially intact under these conditions, but they strongly affect the oxidation of ruthenium, with the 0.5 nm thick aluminosilicate bilayer completely inhibiting the oxidation. Furthermore, the latter is believed to be related to the lower chemisorbed oxygen content arising from electrostatic interactions between the negatively charged aluminosilicate framework and the Ru(0001) substrate.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
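One textbook way to reduce a large linear state-space model while preserving its dominant input-output behavior is balanced truncation. The sketch below implements it with SciPy on a small random stable system; it is a generic illustration, not the NASA Lewis/University of Akron software, and real CFD-based models would additionally face the sparse, ill-conditioned matrices noted above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def gramian_factor(W):
    """Square-root factor L with L @ L.T = W (robust to tiny negative eigs)."""
    U, s, _ = svd(W)
    return U * np.sqrt(np.maximum(s, 0.0))

def balanced_truncation(A, B, C, r):
    """Reduce dx/dt = A x + B u, y = C x to order r by balanced truncation."""
    # Controllability and observability Gramians:
    #   A Wc + Wc A^T + B B^T = 0,   A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc, Lo = gramian_factor(Wc), gramian_factor(Wo)
    U, hsv, Vt = svd(Lo.T @ Lc)          # Hankel singular values in hsv
    S = np.diag(hsv[:r] ** -0.5)
    Tr = Lc @ Vt[:r].T @ S               # right (reconstruction) transform
    Tl = S @ U[:, :r].T @ Lo.T           # left (projection) transform
    return Tl @ A @ Tr, Tl @ B, C @ Tr, hsv

rng = np.random.default_rng(0)
n, m, p = 40, 6, 6                        # toy sizes; CFD models are far larger
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)   # force stability
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))

Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=8)
print("leading Hankel singular values:", np.round(hsv[:10], 4))
print("reduced model order:", Ar.shape[0])
```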
Upon Generating (2+1)-dimensional Dynamical Systems
NASA Astrophysics Data System (ADS)
Zhang, Yufeng; Bai, Yang; Wu, Lixin
2016-06-01
Under the framework of the Adler-Gel'fand-Dikii (AGD) scheme, we first propose two Hamiltonian operator pairs over a noncommutative ring with which we construct a new dynamical system in 2+1 dimensions, and then obtain a generalized special Novikov-Veselov (NV) equation via the Manakov triple. Then, with the aid of a special symmetric Lie algebra of a reductive homogeneous group G, we adopt the Tu-Andrushkiw-Huang (TAH) scheme to generate a new integrable (2+1)-dimensional dynamical system and its Hamiltonian structure, which can reduce to the well-known (2+1)-dimensional Davey-Stewartson (DS) hierarchy. Furthermore, we extend the binomial residue representation (BRR) scheme to super higher-dimensional integrable hierarchies with the help of a super subalgebra of the super Lie algebra sl(2/1), which is also a kind of symmetric Lie algebra of the reductive homogeneous group G. As applications, we obtain a super (2+1)-dimensional MKdV hierarchy which can be reduced to a super (2+1)-dimensional generalized AKNS equation. Finally, we compare the advantages and shortcomings of the three schemes for generating integrable dynamical systems.
NASA Technical Reports Server (NTRS)
Chevallier, J. P.; Vaucheret, X.
1986-01-01
A synthesis of current trends in the reduction and computation of wall effects is presented. Some of the points discussed include: (1) for the two-dimensional, transonic tests, various boundary-condition control techniques are used, with adaptive walls offering high precision in determining reference conditions and residual corrections; a reduction in the boundary layer effects of the lateral walls is obtained at T2; (2) for the three-dimensional tests, the methods for the reduction of wall effects are still seldom applied, due to a lesser need and to their complexity; (3) the supports holding the model and the probes have to be taken into account in the estimation of perturbation effects.
Wideband radar cross section reduction using two-dimensional phase gradient metasurfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yongfeng; Qu, Shaobo; Wang, Jiafu
2014-06-02
Phase gradient metasurfaces (PGMs) are artificial surfaces that can provide pre-defined in-plane wave-vectors to manipulate the directions of refracted/reflected waves. In this Letter, we propose to achieve wideband radar cross section (RCS) reduction using two-dimensional (2D) PGMs. A 2D PGM was designed using a square combination of 49 split-ring sub-unit cells. The PGM can provide additional wave-vectors along the two in-plane directions simultaneously, leading to either surface wave conversion, deflected reflection, or diffuse reflection. Both the simulation and experimental results verified the wideband, polarization-independent, high-efficiency RCS reduction induced by the 2D PGM.
Wire EDM for Refractory Materials
NASA Technical Reports Server (NTRS)
Zellars, G. R.; Harris, F. E.; Lowell, C. E.; Pollman, W. M.; Rys, V. J.; Wills, R. J.
1982-01-01
In an attempt to reduce fabrication time and costs, the Wire Electrical Discharge Machining (Wire EDM) method was investigated as a tool for fabricating matched blade roots and disk slots. Eight high-strength nickel-base superalloys were used. The computer-controlled Wire EDM technique provided high-quality surfaces with excellent dimensional tolerances. The Wire EDM method offers potential for substantial reductions in fabrication costs for "hard-to-machine" alloys and electrically conductive materials in specific high-precision applications.
Symmetry reduction and exact solutions of two higher-dimensional nonlinear evolution equations.
Gu, Yongyi; Qi, Jianming
2017-01-01
In this paper, symmetries and symmetry reduction of two higher-dimensional nonlinear evolution equations (NLEEs) are obtained by Lie group method. These NLEEs play an important role in nonlinear sciences. We derive exact solutions to these NLEEs via the [Formula: see text]-expansion method and complex method. Five types of explicit function solutions are constructed, which are rational, exponential, trigonometric, hyperbolic and elliptic function solutions of the variables in the considered equations.
Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S
2012-12-01
In this work, the problem of sub-manifold-projections-based semi-supervised dimensionality reduction (DR) learning from partially constrained data is discussed. Two semi-supervised DR algorithms, termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP), are proposed. MS³MP in the singular case is also discussed. We also present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and the discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC-guided methods, exploring and selecting the informative constraints is challenging, and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms is evaluated on benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.
Dimensionality reduction for the quantitative evaluation of a smartphone-based Timed Up and Go test.
Palmerini, Luca; Mellone, Sabato; Rocchi, Laura; Chiari, Lorenzo
2011-01-01
The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Lately, instrumented versions of the test have been considered, in which inertial sensors assess motion. To improve the pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to the locomotor performance, a dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components) which are non-redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (e.g., healthy vs. Parkinson's disease, fallers vs. non-fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone.
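The two-stage procedure described here, PCA on the full parameter set followed by selection of the original parameters most correlated with the leading principal components, can be sketched as follows; the synthetic data, variance threshold, and selection rule are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_params = 49, 20
# Synthetic stand-in for correlated accelerometer-derived TUG parameters.
latent = rng.standard_normal((n_subjects, 3))
X = latent @ rng.standard_normal((3, n_params)) \
    + 0.3 * rng.standard_normal((n_subjects, n_params))

# PCA on standardized parameters; keep components explaining ~90% variance.
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.90).fit(Xs)
scores = pca.transform(Xs)
print("components retained:", pca.n_components_)

# Select, for each principal component, the original parameter most
# correlated with it (a simple correlation-based reduced parameter set).
selected = []
for j in range(scores.shape[1]):
    corr = [abs(np.corrcoef(Xs[:, i], scores[:, j])[0, 1])
            for i in range(n_params)]
    selected.append(int(np.argmax(corr)))
print("indices of selected original parameters:", sorted(set(selected)))
```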
NASA Astrophysics Data System (ADS)
Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.
2016-10-01
X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
Graph embedding and extensions: a general framework for dimensionality reduction.
Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen
2007-01-01
Over the past few decades, a large family of algorithms, supervised or unsupervised and stemming from statistics or geometry theory, has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis (LDA) algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
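In the graph embedding formulation, a linear projection is obtained from a generalized eigenproblem built from an intrinsic graph (intraclass compactness) and a penalty graph (interclass separability). The sketch below sets up a small MFA-style projection on the Iris data; the neighborhood sizes, ridge term, and dataset are arbitrary illustrative choices rather than the authors' configuration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = X - X.mean(axis=0)
n = len(y)

def knn_adjacency(D, candidates, k):
    """Adjacency linking each sample to its k nearest candidate samples."""
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in candidates(i) if j != i]
        nearest = sorted(idx, key=lambda j: D[i, j])[:k]
        W[i, nearest] = 1.0
    return np.maximum(W, W.T)      # symmetrize

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
# Intrinsic graph: each point linked to nearest neighbors of the SAME class.
Wi = knn_adjacency(D, lambda i: np.where(y == y[i])[0], k=5)
# Penalty graph: each point linked to nearest neighbors of OTHER classes.
Wp = knn_adjacency(D, lambda i: np.where(y != y[i])[0], k=15)

Li = np.diag(Wi.sum(1)) - Wi       # graph Laplacians
Lp = np.diag(Wp.sum(1)) - Wp

# MFA-style objective: minimize intraclass scatter relative to the penalty
# scatter -> smallest generalized eigenvectors of (X' Li X) v = lam (X' Lp X) v.
A = X.T @ Li @ X
B = X.T @ Lp @ X + 1e-6 * np.eye(X.shape[1])   # small ridge for stability
vals, vecs = eigh(A, B)
W_proj = vecs[:, :2]               # two embedding directions
Z = X @ W_proj
print("embedded data shape:", Z.shape)
```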
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once in 2 or 3 dimensions the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
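A simplified analogue of the visualization task, embedding a precomputed tree-to-tree distance matrix into 2 or 3 dimensions and comparing the stress of the fits, can be done with metric MDS; CCA and the SGD optimizer used in the paper are not available in scikit-learn, so MDS and a synthetic distance matrix serve purely as stand-ins.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Placeholder: a symmetric "tree-to-tree" distance matrix for 120 trees
# (in practice these would be Robinson-Foulds or branch-score distances).
P = rng.standard_normal((120, 5))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)

for dim in (2, 3):
    mds = MDS(n_components=dim, dissimilarity='precomputed', random_state=0)
    Y = mds.fit_transform(D)
    print(f"{dim}-D embedding stress: {mds.stress_:.2f}")
# A lower stress for the 3-D projection mirrors the paper's observation that
# 3-D projections capture the tree-to-tree relationships better.
```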
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive in terms of both classification accuracy and computational costs/times with classical LDA. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. From a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
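Classical LDA feature projection followed by a linear discriminant classifier, the reference configuration in this comparison, can be sketched with scikit-learn (the extended LDA variants evaluated in the paper are not part of scikit-learn). Synthetic data replaces the EMG feature sets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a multi-channel EMG feature set (6 movement classes).
X, y = make_classification(n_samples=1200, n_features=28, n_informative=10,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

# LDA as a feature projection: at most (n_classes - 1) discriminant axes.
lda_proj = LinearDiscriminantAnalysis(n_components=5).fit(X_tr, y_tr)
Z_tr, Z_te = lda_proj.transform(X_tr), lda_proj.transform(X_te)

# Linear discriminant classifier on the projected features.
clf = LinearDiscriminantAnalysis().fit(Z_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(Z_te))
print(f"accuracy after LDA projection: {acc:.3f}")
```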
Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A
2007-01-01
The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
Ding, Jiarui; Condon, Anne; Shah, Sohrab P
2018-05-21
Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-01-01
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347
Internal Kinematics of the Tongue Following Volume Reduction
SHCHERBATYY, VOLODYMYR; PERKINS, JONATHAN A.; LIU, ZI-JUN
2008-01-01
This study was undertaken to determine the functional consequences following tongue volume reduction on tongue internal kinematics during mastication and neuromuscular stimulation in a pig model. Six ultrasonic-crystals were implanted into the tongue body in a wedge-shaped configuration which allows recording distance changes in the bilateral length (LENG) and posterior thickness (THICK), as well as anterior (AW), posterior dorsal (PDW), and ventral (PVW) widths in 12 Yucatan-minipigs. Six animals received a uniform mid-sagittal tongue volume reduction surgery (reduction), and the other six had identical incisions without tissue removal (sham). The initial-distances among each crystal-pairs were recorded before, and immediately after surgery to calculate the dimensional losses. Referring to the initial-distance there were 3−66% and 1−4% tongue dimensional losses by the reduction and sham surgeries, respectively. The largest deformation in sham animals during mastication was in AW, significantly larger than LENG, PDW, PVW, and THICK (P < 0.01−0.001). In reduction animals, however, these deformational changes significantly diminished and enhanced in the anterior and posterior tongue, respectively (P < 0.05−0.001). In both groups, neuromuscular stimulation produced deformational ranges that were 2−4 times smaller than those occurred during chewing. Furthermore, reduction animals showed significantly decreased ranges of deformation in PVW, LENG, and THICK (P < 0.05−0.01). These results indicate that tongue volume reduction alters the tongue internal kinematics, and the dimensional losses in the anterior tongue caused by volume reduction can be compensated by increased deformations in the posterior tongue during mastication. This compensatory effect, however, diminishes during stimulation of the hypoglossal nerve and individual tongue muscles. PMID:18484603
Effects of Fragmented Fe Intermetallic Compounds on Ductility in Al-Si-Mg Alloys.
Kim, JaeHwang; Kim, DaeHwan
2018-03-01
Fe is intentionally added in order to form Fe intermetallic compounds (Fe-IMCs) during casting. Field emission scanning electron microscopy with energy-dispersive spectrometry (EDS) was conducted to characterize microstructural changes and chemical compositions. The needle-like Fe-IMCs, hundreds of micrometers in size in two-dimensional observations, are modified into fragmented particles with a minimum size of 300 nm through cold rolling with 80% thickness reduction. The Fe:Si ratio of the fragmented Fe-IMCs after 80% reduction is close to 1:1, corresponding to β-Al5FeSi. The yield and tensile strengths increase with increasing reduction rate. On the other hand, the elongation decreases with 40% reduction but increases slightly with 60% reduction. The elongation of the 80%-reduction specimen is more than double that of the as-cast specimen. Fracture behavior is strongly affected by the morphology and size of the Fe-IMCs, and the fracture mode changes from brittle to ductile with this microstructural modification of the Fe-IMCs.
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Water linked 3D coordination polymers: Syntheses, structures and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Suryabhan, E-mail: sbs.bhu@gmail.com; Bhim, Anupam
2016-12-15
Three new coordination polymers (CPs) based on Cd and Pb, [Cd(OBA)(μ-H₂O)(H₂O)]n (1), [Pb(OBA)(μ-H₂O)]n (2) [where OBA = 4,4’-oxybis(benzoate)], and [Pb(SDBA)(H₂O)]n·1/4DMF (3) (SDBA = 4,4’-sulfonyldibenzoate), have been synthesized and characterized. The single-crystal structural studies reveal that CPs 1 and 2 have three-dimensional structures. A water molecule bridges two metal centres, which appears to be responsible for the dimensionality increase from 2D to 3D. Compound 3 has a supramolecular 3D structure involving water molecules and hydrogen bonds. A structural transformation is observed when 3 is heated at 100 °C or kept in methanol, forming [Pb(SDBA)]n (4). Compound 4 is used as a supporting matrix for palladium nanoparticles, PdNPs@4. The PdNPs@4 exhibits good catalytic activity toward the reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of NaBH₄ at room temperature. Luminescence studies revealed that all CPs could be effective sensors for nitroaromatic explosives. - Graphical abstract: Three new CPs based on Cd and Pb have been synthesized and characterized. A water molecule bridges two metal centres, which appears to be responsible for the dimensionality increase from 2D to 3D. One of the CPs is used as a supporting matrix for palladium nanoparticles, PdNPs@4. The PdNPs@4 exhibits good catalytic activity toward the reduction of 4-nitrophenol. Luminescence studies showed that all CPs could be effective sensors for nitroaromatic explosives. - Highlights: • Three new CPs based on Cd and Pb have been synthesized and characterized. • A water molecule bridges two metal centres, which appears to be responsible for the dimensionality increase from 2D to 3D. • One of the CPs is used as a supporting matrix for palladium nanoparticles, PdNPs@4. • Luminescence studies showed that all CPs could be effective sensors for nitroaromatic explosives.
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Markey, Mia K.
2015-03-01
It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in the state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building the channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
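Once channels are chosen, by KLFDA or any other method, building a channelized Hotelling observer is mechanical: channelize the images, estimate the channelized class means and pooled covariance, and apply the Hotelling template. The sketch below uses random Gaussian images and a random channel matrix as stand-ins for a real channelization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_chan, n_train, n_test = 32 * 32, 10, 500, 200

# Placeholder channel matrix (rows = channels); KLFDA, Laguerre-Gauss, etc.
# would normally supply these.
U = rng.standard_normal((n_chan, n_pix))

# Synthetic signal-absent / signal-present image ensembles.
signal = np.zeros(n_pix); signal[500:520] = 2.0
g0 = rng.standard_normal((n_train, n_pix))
g1 = rng.standard_normal((n_train, n_pix)) + signal

# Channelize the training data and estimate first/second-order statistics.
v0, v1 = g0 @ U.T, g1 @ U.T
mu0, mu1 = v0.mean(0), v1.mean(0)
S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))

# Hotelling template in channel space and test statistics for new images.
w = np.linalg.solve(S, mu1 - mu0)
t0 = (rng.standard_normal((n_test, n_pix)) @ U.T) @ w
t1 = ((rng.standard_normal((n_test, n_pix)) + signal) @ U.T) @ w
auc = (t1[:, None] > t0[None, :]).mean()   # empirical AUC estimate
print(f"empirical AUC of the channelized Hotelling observer: {auc:.3f}")
```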
A Review on Dimension Reduction
Ma, Yanyuan; Zhu, Liping
2013-01-01
Summary: Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with a minimum assumption on the specific regression form, it enjoys attractive advantages as well as encounters unique challenges in comparison with the variable selection approach. We review the current literature of dimension reduction with an emphasis on the two most popular models, where the dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures at different levels of detail, with the intention of focusing on their underlying ideas rather than technicalities. We also discuss some unsolved problems in this area for potential future research. PMID:23794782
NASA Astrophysics Data System (ADS)
Khaimovich, I. N.
2017-10-01
The article provides calculation algorithms for blank design and die-forming tooling to produce compressor blades for aircraft engines. The design system proposed in the article allows drafts of trimming and reducing dies to be generated automatically, leading to a significant reduction in production preparation time. A detailed analysis of the structural features of the blade elements was carried out; the adopted limitations and technological solutions made it possible to form generalized algorithms for shaping the die parting face over the entire contour of the engraving for different configurations of die forgings. The author worked out algorithms and programs to calculate the three-dimensional point locations describing the configuration of the die cavity. As a result, the author obtained a generic mathematical model of the final die block in the form of a three-dimensional array of base points. This model is the basis for creating the engineering documentation of the tooling and the means of its control.
Three New (2+1)-dimensional Integrable Systems and Some Related Darboux Transformations
NASA Astrophysics Data System (ADS)
Guo, Xiu-Rong
2016-06-01
We introduce two operator commutators by using different-degree loop algebras of the Lie algebra A1, then under the framework of zero curvature equations we generate two (2+1)-dimensional integrable hierarchies, including the (2+1)-dimensional shallow water wave (SWW) hierarchy and the (2+1)-dimensional Kaup-Newell (KN) hierarchy. Through reduction of the (2+1)-dimensional hierarchies, we get a (2+1)-dimensional SWW equation and a (2+1)-dimensional KN equation. Furthermore, we obtain two Darboux transformations of the (2+1)-dimensional SWW equation. Similarly, the Darboux transformations of the (2+1)-dimensional KN equation could be deduced. Finally, with the help of the spatial spectral matrix of SWW hierarchy, we generate a (2+1) heat equation and a (2+1) nonlinear generalized SWW system containing inverse operators with respect to the variables x and y by using a reduction spectral problem from the self-dual Yang-Mills equations. Supported by the National Natural Science Foundation of China under Grant No. 11371361, the Shandong Provincial Natural Science Foundation of China under Grant Nos. ZR2012AQ011, ZR2013AL016, ZR2015EM042, National Social Science Foundation of China under Grant No. 13BJY026, the Development of Science and Technology Project under Grant No. 2015NS1048 and A Project of Shandong Province Higher Educational Science and Technology Program under Grant No. J14LI58
3D-Hydrogel Based Polymeric Nanoreactors for Silver Nano-Antimicrobial Composites Generation.
Soto-Quintero, Albanelly; Romo-Uribe, Ángel; Bermúdez-Morales, Víctor H; Quijada-Garrido, Isabel; Guarrotxena, Nekane
2017-08-01
This study underscores the development of Ag hydrogel nanocomposites, as smart substrates for antibacterial uses, via innovative in situ reactive and reduction pathways. To this end, two different synthetic strategies were used. Firstly, thiol-acrylate (PSA)-based hydrogels were obtained via thiol-ene and radical polymerization of polyethylene glycol (PEG) and polycaprolactone (PCL). As a second approach, polyurethane (PU)-based hydrogels were achieved by condensation polymerization from diisocyanates and PCL and PEG diols. These syntheses rendered active three-dimensional (3D) hydrogel matrices that were used as nanoreactors for in situ reduction of AgNO₃ to silver nanoparticles. Redox chemistry of the stannous catalyst in the PU hydrogel yielded spherical AgNP formation even at 4 °C in the absence of an external reductant, and an appropriately thiol-functionalized polymeric network promoted spherical AgNPs well dispersed throughout the PSA hydrogel network after heating the swollen hydrogel at 103 °C in the presence of a citrate reductant. Optical and swelling behaviors of both series of hydrogel nanocomposites were investigated as key factors involved in their antimicrobial efficacy over time. Lastly, in vitro antibacterial activity of Ag-loaded hydrogels exposed to Pseudomonas aeruginosa and Escherichia coli strains indicated a noticeable sustained inhibitory effect, especially for Ag-PU hydrogel nanocomposites, with bacterial growth inhibition capabilities up to 120 h of cultivation.
Advanced Fluid Reduced Order Models for Compressible Flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tezaur, Irina Kalashnikova; Fike, Jeffrey A.; Carlberg, Kevin Thomas
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
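The POD/LSPG method and the SPARC solver are not reproduced here; as a point of reference, the sketch below shows how a POD basis is typically extracted from solution snapshots and how a toy linear system is projected onto it (a plain Galerkin projection, which LSPG refines by minimizing the residual in a least-squares sense). All sizes and variable names are illustrative.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a full-order solution state
# collected from a handful of high-fidelity runs.
n_dof, n_snapshots = 2000, 30
rng = np.random.default_rng(0)
S = rng.standard_normal((n_dof, n_snapshots))

# POD basis: left singular vectors of the snapshot matrix, truncated so the
# retained singular values capture (say) 99.9% of the snapshot energy.
U, sig, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sig**2) / np.sum(sig**2)
k = int(np.searchsorted(energy, 0.999)) + 1
Phi = U[:, :k]                      # n_dof x k reduced basis

# Galerkin projection of a toy full-order linear problem A x = b.
A = np.diag(np.linspace(1.0, 10.0, n_dof))
b = rng.standard_normal(n_dof)
A_r = Phi.T @ A @ Phi               # k x k reduced operator
b_r = Phi.T @ b
x_r = np.linalg.solve(A_r, b_r)     # reduced coordinates
x_approx = Phi @ x_r                # lift back to the full space
```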
Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.
Bloom, David J; Lee, Soo-Yeun
2016-09-01
Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
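Not from the paper; a sketch of how d' can be recovered from an observed proportion of correct responses in the 3-AFC task under the standard equal-variance Thurstonian model (the triangle-method link function is more involved and omitted here). The observed proportion used below is illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def pc_3afc(d_prime):
    """P(correct) in 3-AFC: the signal sample exceeds both noise samples."""
    integrand = lambda z: norm.pdf(z - d_prime) * norm.cdf(z) ** 2
    pc, _ = quad(integrand, -np.inf, np.inf)
    return pc

def dprime_from_pc(pc_observed):
    """Invert the 3-AFC psychometric function numerically."""
    return brentq(lambda d: pc_3afc(d) - pc_observed, 1e-6, 10.0)

print(pc_3afc(0.0))          # ~0.333, chance level for 3 alternatives
print(dprime_from_pc(0.60))  # d' consistent with 60% correct responses
```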
Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches
NASA Astrophysics Data System (ADS)
H, Vathsala; Koolagudi, Shashidhar G.
2017-10-01
This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed itemset generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. The application to classifying rainfall into flood, excess, normal, deficit, and drought categories based on 36 predictors consisting of land and ocean variables is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).
Kernel PLS-SVC for Linear and Nonlinear Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan
2003-01-01
A new methodology for discrimination is proposed, based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS to Fisher's approach to linear discrimination, or equivalently to canonical correlation analysis, is described. This gives preference to using orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger movement periods versus non-movement periods based on electroencephalogram recordings.
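Not the authors' kernel orthonormalized PLS; a simplified linear analogue with scikit-learn in which PLS components, computed against the class labels, feed a support vector classifier. The dataset and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Supervised dimensionality reduction: PLS scores computed against the labels.
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
T_tr = pls.transform(X_tr)
T_te = pls.transform(X_te)

# Discrimination on the reduced scores with a support vector machine.
svc = SVC(kernel="rbf", C=1.0).fit(T_tr, y_tr)
print(accuracy_score(y_te, svc.predict(T_te)))
```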
Active Subspaces for Wind Plant Surrogate Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Ryan N; Quick, Julian; Dykes, Katherine L
Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
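Not the wind plant model from the paper; a sketch of the active-subspace construction on a toy function: the dominant eigenvector of the averaged outer product of gradients defines a single active direction along which a low-order surrogate is fit. The toy model, dimensions, and sample counts are assumptions.

```python
import numpy as np

# Toy model: "plant power" as a function of m turbine induction factors.
# In practice gradients come from an adjoint solver or finite differences.
rng = np.random.default_rng(1)
m, n_samples = 20, 200
w = rng.standard_normal(m)                       # hidden dominant direction
f = lambda a: np.sin(a @ w) + 0.01 * a.sum()
grad_f = lambda a: np.cos(a @ w) * w + 0.01

A = rng.uniform(-1, 1, size=(n_samples, m))      # sampled induction factors
G = np.array([grad_f(a) for a in A])             # gradient samples

# Active subspace: eigenvectors of the average outer product of gradients.
C = G.T @ G / n_samples
eigvals, eigvecs = np.linalg.eigh(C)
w1 = eigvecs[:, -1]                              # dominant active direction

# One-dimensional quadratic surrogate along the active variable.
s = A @ w1
coeffs = np.polyfit(s, [f(a) for a in A], deg=2)
print(eigvals[-1] / eigvals.sum())               # fraction of variability captured
```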
Li, Wei Zhong; Zhang, Mei Chao; Li, Shao Ping; Zhang, Lei Tao; Huang, Yu
2009-06-01
With the advent of CAD/CAM and rapid prototyping (RP), a technical revolution in oral and maxillofacial trauma was promoted to benefit treatment, repair of maxillofacial fractures and reconstruction of maxillofacial defects. For a patient with zygomatico-facial collapse deformity resulting from a zygomatico-orbito-maxillary complex (ZOMC) fracture, CT scan data were processed by using Mimics 10.0 for three-dimensional (3D) reconstruction. The reduction design was aided by 3D virtual imaging and the 3D skull model was reproduced using the RP technique. In line with the design by Mimics, presurgery was performed on the 3D skull model and the semi-coronal incision was taken for reduction of ZOMC fracture, based on the outcome from the presurgery. Postoperative CT and images revealed significantly modified zygomatic collapse and zygomatic arch rise and well-modified facial symmetry. The CAD/CAM and RP technique is a relatively useful tool that can assist surgeons with reconstruction of the maxillofacial skeleton, especially in repairs of ZOMC fracture.
NASA Astrophysics Data System (ADS)
Ferng, Yi-Cherng; Chang, Liann-Be; Das, Atanu; Lin, Ching-Chi; Cheng, Chun-Yu; Kuei, Ping-Yu; Chow, Lee
2012-12-01
In this paper, a varactor with metal-semiconductor-metal diodes on top of the (NH4)2S/P2S5-treated AlGaN/GaN two-dimensional electron gas epitaxial structure (MSM-2DEG) is proposed for surge protection for the first time. The sulfur-treated MSM-2DEG varactor properties, including current-voltage (I-V), capacitance-voltage (C-V), and frequency response of the proposed surge protection circuit, are presented. To verify its capability of surge protection, we replace the metal oxide varistor (MOV) and resistor (R) in a state-of-the-art surge protection circuit with the sulfur-treated MSM-2DEG varactor under the application conditions of system-level surge tests. The measured results show that the proposed surge protection circuit, consisting of a gas discharge arrester (GDA) and a sulfur-treated MSM-2DEG varactor, can suppress an electromagnetic pulse (EMP) voltage from 4000 V down to 360 V, a reduction of 91%, whereas suppression reaches only 1780 V, a reduction of 55%, when a GDA alone is used.
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
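A minimal sketch of the recycling idea (not the paper's three-stage hybrid method): solutions of earlier systems are compressed with POD, each new right-hand side is first solved in that small subspace, and the result seeds a conjugate-gradient solve on the full system. Function names and problem sizes are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import cg

def solve_with_pod_recycling(A, b_new, prior_solutions, k=5):
    """Sketch: POD of prior solutions supplies a subspace correction that
    serves as the starting guess for conjugate gradients on a new RHS."""
    U, _, _ = np.linalg.svd(np.column_stack(prior_solutions), full_matrices=False)
    Phi = U[:, :k]                              # POD basis of past solutions
    A_r = Phi.T @ (A @ Phi)                     # small projected system
    x0 = Phi @ np.linalg.solve(A_r, Phi.T @ b_new)
    return cg(A, b_new, x0=x0)

# Usage sketch: symmetric positive-definite A, several prior solves available.
n = 500
A = np.diag(np.linspace(1.0, 100.0, n))
rng = np.random.default_rng(0)
prior = [np.linalg.solve(A, rng.standard_normal(n)) for _ in range(8)]
x, info = solve_with_pod_recycling(A, rng.standard_normal(n), prior)
print(info)   # 0 indicates the iterative solve converged
```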
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
2016-01-01
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM. PMID:27143958
Online Sequential Projection Vector Machine with Adaptive Data Mean Update.
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM.
Tighe, Patrick J; Lucas, Stephen D; Edwards, David A; Boezaart, André P; Aytug, Haldun; Bihorac, Azra
2012-10-01
The purpose of this project was to determine whether machine-learning classifiers could predict which patients would require a preoperative acute pain service (APS) consultation. Retrospective cohort. University teaching hospital. The records of 9,860 surgical patients posted between January 1 and June 30, 2010 were reviewed. Request for APS consultation. A cohort of machine-learning classifiers was compared according to its ability or inability to classify surgical cases as requiring a request for a preoperative APS consultation. Classifiers were then optimized utilizing ensemble techniques. Computational efficiency was measured with the central processing unit processing times required for model training. Classifiers were tested using the full feature set, as well as the reduced feature set that was optimized using a merit-based dimensional reduction strategy. Machine-learning classifiers correctly predicted preoperative requests for APS consultations in 92.3% (95% confidence intervals [CI], 91.8-92.8) of all surgical cases. Bayesian methods yielded the highest area under the receiver operating curve (0.87, 95% CI 0.84-0.89) and lowest training times (0.0018 seconds, 95% CI, 0.0017-0.0019 for the NaiveBayesUpdateable algorithm). An ensemble of high-performing machine-learning classifiers did not yield a higher area under the receiver operating curve than its component classifiers. Dimensional reduction decreased the computational requirements for multiple classifiers, but did not adversely affect classification performance. Using historical data, machine-learning classifiers can predict which surgical cases should prompt a preoperative request for an APS consultation. Dimensional reduction improved computational efficiency and preserved predictive performance. Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting-both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
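Not the paper's LMCMC coupling; a sketch of the KPCA piece with scikit-learn: prior realizations of a high-dimensional parameter field are compressed to a low-dimensional feature space, and the approximate pre-image map carries proposed feature vectors back to parameter space. Ensemble size, field dimension, and kernel settings are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)

# Hypothetical prior ensemble: 500 realizations of a 1024-dimensional field.
prior_samples = rng.standard_normal((500, 1024))

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)     # learn an approximate pre-image map
Z = kpca.fit_transform(prior_samples)            # low-dimensional features

# An MCMC sampler would propose moves in the 10-dimensional feature space;
# each proposal is mapped back to the full field before the forward model runs.
z_proposal = Z[0] + 0.1 * rng.standard_normal(10)
field_proposal = kpca.inverse_transform(z_proposal.reshape(1, -1))
print(field_proposal.shape)                      # (1, 1024)
```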
Two-dimensional materials as catalysts for energy conversion
Siahrostami, Samira; Tsai, Charlie; Karamad, Mohammadreza; ...
2016-08-24
Although large efforts have been dedicated to studying two-dimensional materials for catalysis, a rationalization of the associated trends in their intrinsic activity has so far been elusive. In the present work we employ density functional theory to examine a variety of two-dimensional materials, including carbon-based materials, hexagonal boron nitride (h-BN), transition metal dichalcogenides (e.g. MoS2, MoSe2) and layered oxides, to give an overview of the trends in adsorption energies. By examining key reaction intermediates relevant to the oxygen reduction and oxygen evolution reactions, we find that binding energies largely follow the linear scaling relationships observed for pure metals. This observation is important as it suggests that the same simplifying assumptions made to correlate descriptors with reaction rates in transition metal catalysts are also valid for the studied two-dimensional materials. By means of these scaling relations, for each reaction we also identify several promising candidates that are predicted to exhibit a comparable activity to the state-of-the-art catalysts.
Shi, Chaoyang; Tercero, Carlos; Ikeda, Seiichi; Ooe, Katsutoshi; Fukuda, Toshio; Komori, Kimihiro; Yamamoto, Kiyohito
2012-09-01
It is desirable to reduce aortic stent graft installation time and the amount of contrast media used for this process. Guidance with augmented reality can achieve this by facilitating alignment of the stent graft with the renal and mesenteric arteries. For this purpose, a sensor fusion is proposed between intravascular ultrasound (IVUS) and magnetic trackers to construct three-dimensional virtual reality models of the blood vessels, as well as improvements to the gradient vector flow snake for boundary detection in ultrasound images. In vitro vasculature imaging experiments were done with hybrid probe and silicone models of the vasculature. The dispersion of samples for the magnetic tracker in the hybrid probe increased less than 1 mm when the IVUS was activated. Three-dimensional models of the descending thoracic aorta, with cross-section radius average error of 0.94 mm, were built from the data fusion. The development of this technology will enable reduction in the amount of contrast media required for in vivo and real-time three-dimensional blood vessel imaging. Copyright © 2012 John Wiley & Sons, Ltd.
Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data
NASA Astrophysics Data System (ADS)
Palumbo, Francesco; D'Enza, Alfonso Iodice
The attention towards binary data coding increased consistently in the last decade due to several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with the empirical evidence.
Akram, M Nadeem; Tong, Zhaomin; Ouyang, Guangmin; Chen, Xuyuan; Kartashov, Vladimir
2010-06-10
We utilize spatial and angular diversity to achieve speckle reduction in laser illumination. Both free-space and imaging geometry configurations are considered. A fast two-dimensional scanning micromirror is employed to steer the laser beam. A simple experimental setup is built to demonstrate the application of our technique in a two-dimensional laser picture projection. Experimental results show that the speckle contrast factor can be reduced down to 5% within the integration time of the detector.
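Not from the paper; a sketch of the standard speckle-contrast metric (C = sigma/mean of the intensity image) and of the roughly 1/sqrt(N) reduction expected when N independent speckle patterns are averaged within the detector integration time. The synthetic patterns and N are illustrative.

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = standard deviation / mean of the intensity image."""
    return intensity.std() / intensity.mean()

rng = np.random.default_rng(3)
shape = (256, 256)

# Fully developed speckle has exponentially distributed intensity, so C ~ 1.
single_pattern = rng.exponential(scale=1.0, size=shape)
print(speckle_contrast(single_pattern))            # close to 1.0

# Averaging N independent patterns (e.g., via angular/spatial diversity from a
# scanning micromirror) reduces the contrast roughly as 1/sqrt(N).
N = 400
accumulated = np.zeros(shape)
for _ in range(N):
    accumulated += rng.exponential(scale=1.0, size=shape)
print(speckle_contrast(accumulated / N), 1 / np.sqrt(N))   # both around 0.05
```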
Argyres–Douglas theories, S 1 reductions, and topological symmetries
Buican, Matthew; Nishinaka, Takahiro
2015-12-21
In a recent paper, we proposed closed-form expressions for the superconformal indices of the (A(1), A(2n-3)) and (A(1), D-2n) Argyres-Douglas (AD) superconformal field theories (SCFTs) in the Schur limit. Following up on our results, we turn our attention to the small S-1 regime of these indices. As expected on general grounds, our study reproduces the S-3 partition functions of the resulting dimensionally reduced theories. However, we show that in all cases, with the exception of the reduction of the (A(1), D-4) SCFT, certain imaginary partners of real mass terms are turned on in the corresponding mirror theories. We interpret these deformations as R symmetry mixing with the topological symmetries of the direct S-1 reductions. Moreover, we argue that these shifts occur in any of our theories whose four-dimensional N = 2 superconformal U(1)(R) symmetry does not obey an SU(2) quantization condition. We then use our R symmetry map to find the four-dimensional ancestors of certain three-dimensional operators. Somewhat surprisingly, this picture turns out to imply that the scaling dimensions of many of the chiral operators of the four-dimensional theory are encoded in accidental symmetries of the three-dimensional theory. We also comment on the implications of our work on the space of general N = 2 SCFTs.
Argyres–Douglas theories, S 1 reductions, and topological symmetries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buican, Matthew; Nishinaka, Takahiro
In a recent paper, we proposed closed-form expressions for the superconformal indices of the (A(1), A(2n-3)) and (A(1), D-2n) Argyres-Douglas (AD) superconformal field theories (SCFTs) in the Schur limit. Following up on our results, we turn our attention to the small S-1 regime of these indices. As expected on general grounds, our study reproduces the S-3 partition functions of the resulting dimensionally reduced theories. However, we show that in all cases, with the exception of the reduction of the (A(1), D-4) SCFT, certain imaginary partners of real mass terms are turned on in the corresponding mirror theories. We interpret these deformations as R symmetry mixing with the topological symmetries of the direct S-1 reductions. Moreover, we argue that these shifts occur in any of our theories whose four-dimensional N = 2 superconformal U(1)(R) symmetry does not obey an SU(2) quantization condition. We then use our R symmetry map to find the four-dimensional ancestors of certain three-dimensional operators. Somewhat surprisingly, this picture turns out to imply that the scaling dimensions of many of the chiral operators of the four-dimensional theory are encoded in accidental symmetries of the three-dimensional theory. We also comment on the implications of our work on the space of general N = 2 SCFTs.
NASA Astrophysics Data System (ADS)
Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik
2017-07-01
Decision making for competitive production in high-wage countries is a daily challenge in which both rational and irrational methods are used. The design of decision-making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the usage of rational choice theory. In Benjamin Franklin's rule for decision making, formulated in London in 1772 and called "Prudential Algebra" in the sense of prudential reasons, one of the major ingredients of Meta-Modelling can be identified: a single algebraic value that labels the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criteria optimization by identifying the persistence level of the corresponding Morse-Smale complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction. Reduced Models are derived to avoid any unnecessary complexity. Both model reduction and analysis of the multi-dimensional parameter space are used to enable interactive communication between Discovery Finders and Invention Makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and improving the skills of both the developer and the operator.
NASA Astrophysics Data System (ADS)
So, Hongyun; Senesky, Debbie G.
2016-01-01
In this letter, three-dimensional gateless AlGaN/GaN high electron mobility transistors (HEMTs) were demonstrated with 54% reduction in electrical resistance and 73% increase in surface area compared with conventional gateless HEMTs on planar substrates. Inverted pyramidal AlGaN/GaN surfaces were microfabricated using potassium hydroxide etched silicon with exposed (111) surfaces and metal-organic chemical vapor deposition of coherent AlGaN/GaN thin films. In addition, electrical characterization of the devices showed that a combination of series and parallel connections of the highly conductive two-dimensional electron gas along the pyramidal geometry resulted in a significant reduction in electrical resistance at both room and high temperatures (up to 300 °C). This three-dimensional HEMT architecture can be leveraged to realize low-power and reliable power electronics, as well as harsh environment sensors with increased surface area.
Kondo, Atsushi; Suzuki, Takayuki; Kotani, Ryosuke; Maeda, Kazuyuki
2017-05-23
A new 3D metal-organic framework (MOF), in which 2D layers are interlaced to form a 3D architecture, was synthesized by a reaction of Cu(BF4)2 and 1,3-bis(4-pyridyl)propane (bpp) in a water/1-hexanol solvent system, and the crystal structure of the MOF was successfully solved. The MOF is reversibly transformed to a 1D chain MOF, which shows gate adsorption properties. The dynamic transformation gives crystal size reduction resulting in a slight change in CO2 adsorption isotherms. The 1D MOF shows selective adsorption/separation properties on benzene and its analogues with similar sizes and shapes (benzene, toluene, and cyclohexane).
NASA Astrophysics Data System (ADS)
Hunziker, Jürg; Laloy, Eric; Linde, Niklas
2016-04-01
Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM_(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
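Not the authors' 257-parameter spectral parameterization; a sketch of the same general idea: a 2D multi-Gaussian field is expressed through a modest number of random coefficients weighting smooth cosine modes, so an MCMC sampler only explores the coefficient vector rather than every grid cell. The mode count and the Gaussian spectral weighting are assumptions.

```python
import numpy as np

def reduced_spectral_field(theta, n=64, n_modes=11, correlation_length=0.2):
    """Sketch: build an n x n field from a short vector theta of random
    coefficients (length n_modes**2) weighting smooth cosine modes."""
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    field = np.zeros((n, n))
    idx = 0
    for p in range(n_modes):
        for q in range(n_modes):
            # Assumed Gaussian spectral weight; damps high-frequency modes.
            weight = np.exp(-0.5 * (p**2 + q**2) * (np.pi * correlation_length) ** 2)
            field += weight * theta[idx] * np.cos(np.pi * p * X) * np.cos(np.pi * q * Y)
            idx += 1
    return field

rng = np.random.default_rng(0)
theta = rng.standard_normal(11 * 11)          # ~121 spectral unknowns instead of 64*64 cells
print(reduced_spectral_field(theta).shape)    # (64, 64)
```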
Analytic integration of real-virtual counterterms in NNLO jet cross sections I
NASA Astrophysics Data System (ADS)
Aglietti, Ugo; Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Trócsányi, Zoltán
2008-09-01
We present analytic evaluations of some integrals needed to give explicitly the integrated real-virtual counterterms, based on a recently proposed subtraction scheme for next-to-next-to-leading order (NNLO) jet cross sections. After an algebraic reduction of the integrals, integration-by-parts identities are used for the reduction to master integrals and for the computation of the master integrals themselves by means of differential equations. The results are written in terms of one- and two-dimensional harmonic polylogarithms, once an extension of the standard basis is made. We expect that the techniques described here will be useful in computing other integrals emerging in calculations in perturbative quantum field theories.
A data reduction package for multiple object spectroscopy
NASA Technical Reports Server (NTRS)
Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.
1986-01-01
Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects that must in turn be matched by data reduction capability increases. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.
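Not the Medusa package itself; a sketch of a simple ridge-finding step with SciPy: collapse a strip of CCD columns, locate the intensity peaks that mark individual fibre spectra, and record their row positions so each one-dimensional spectrum can be extracted. The synthetic frame and thresholds are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def locate_spectra(ccd_frame, min_separation=8, rel_height=0.2):
    """Sketch: find the row positions of fibre spectra in a 2D CCD frame by
    peak detection on the column-collapsed cross-dispersion profile."""
    profile = ccd_frame.sum(axis=1)
    peaks, _ = find_peaks(profile,
                          distance=min_separation,
                          height=rel_height * profile.max())
    return peaks

# Toy frame: 3 synthetic spectra (Gaussian ridges) plus noise.
rng = np.random.default_rng(4)
rows = np.arange(200)[:, None]
frame = sum(np.exp(-0.5 * ((rows - c) / 2.0) ** 2) for c in (50, 100, 150))
frame = frame * np.ones((1, 500)) + 0.05 * rng.standard_normal((200, 500))
print(locate_spectra(frame))    # approximately [50, 100, 150]
```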
The concept and evolution of involved site radiation therapy for lymphoma.
Specht, Lena; Yahalom, Joachim
2015-10-01
We describe the development of radiation therapy for lymphoma from extended field radiotherapy of the past to modern conformal treatment with involved site radiation therapy based on advanced imaging, three-dimensional treatment planning and advanced treatment delivery techniques. Today, radiation therapy is part of the multimodality treatment of lymphoma, and the irradiated tissue volume is much smaller than before, leading to highly significant reductions in the risks of long-term complications.
Teshima, Tara Lynn; Patel, Vaibhav; Mainprize, James G; Edwards, Glenn; Antonyshyn, Oleh M
2015-07-01
The utilization of three-dimensional modeling technology in craniomaxillofacial surgery has grown exponentially during the last decade. Future development, however, is hindered by the lack of a normative three-dimensional anatomic dataset and a statistical mean three-dimensional virtual model. The purpose of this study is to develop and validate a protocol to generate a statistical three-dimensional virtual model based on a normative dataset of adult skulls. Two hundred adult skull CT images were reviewed. The average three-dimensional skull was computed by processing each CT image in the series using thin-plate spline geometric morphometric protocol. Our statistical average three-dimensional skull was validated by reconstructing patient-specific topography in cranial defects. The experiment was repeated 4 times. In each case, computer-generated cranioplasties were compared directly to the original intact skull. The errors describing the difference between the prediction and the original were calculated. A normative database of 33 adult human skulls was collected. Using 21 anthropometric landmark points, a protocol for three-dimensional skull landmarking and data reduction was developed and a statistical average three-dimensional skull was generated. Our results show the root mean square error (RMSE) for restoration of a known defect using the native best match skull, our statistical average skull, and worst match skull was 0.58, 0.74, and 4.4 mm, respectively. The ability to statistically average craniofacial surface topography will be a valuable instrument for deriving missing anatomy in complex craniofacial defects and deficiencies as well as in evaluating morphologic results of surgery.
ERIC Educational Resources Information Center
Hoko, J. Aaron; LeBlanc, Judith M.
1988-01-01
Because disabled learners may profit from procedures using gradual stimulus change, this study utilized a microcomputer to investigate the effectiveness of stimulus equalization, an error reduction procedure involving an abrupt but temporary reduction of dimensional complexity. The procedure was found to be generally effective and implications for…
A two-dimensional lattice equation as an extension of the Heideman-Hogan recurrence
NASA Astrophysics Data System (ADS)
Kamiya, Ryo; Kanki, Masataka; Mase, Takafumi; Tokihiro, Tetsuji
2018-03-01
We consider a two dimensional extension of the so-called linearizable mappings. In particular, we start from the Heideman-Hogan recurrence, which is known as one of the linearizable Somos-like recurrences, and introduce one of its two dimensional extensions. The two dimensional lattice equation we present is linearizable in both directions, and has the Laurent and the coprimeness properties. Moreover, its reduction produces a generalized family of the Heideman-Hogan recurrence. Higher order examples of two dimensional linearizable lattice equations related to the Dana Scott recurrence are also discussed.
Transferring of speech movements from video to 3D face space.
Pei, Yuru; Zha, Hongbin
2007-01-01
We present a novel method for transferring speech animation recorded in low quality videos to high resolution 3D face models. The basic idea is to synthesize the animated faces by an interpolation based on a small set of 3D key face shapes which span a 3D face space. The 3D key shapes are extracted by an unsupervised learning process in 2D video space to form a set of 2D visemes which are then mapped to the 3D face space. The learning process consists of two main phases: 1) Isomap-based nonlinear dimensionality reduction to embed the video speech movements into a low-dimensional manifold and 2) K-means clustering in the low-dimensional space to extract 2D key viseme frames. Our main contribution is that we use the Isomap-based learning method to extract intrinsic geometry of the speech video space and thus to make it possible to define the 3D key viseme shapes. To do so, we need only to capture a limited number of 3D key face models by using a general 3D scanner. Moreover, we also develop a skull movement recovery method based on simple anatomical structures to enhance 3D realism in local mouth movements. Experimental results show that our method can achieve realistic 3D animation effects with a small number of 3D key face models.
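Not the authors' implementation; a sketch of the two learning phases with scikit-learn: Isomap embeds per-frame mouth features into a low-dimensional manifold, and K-means in that space picks representative key frames (2D visemes). The feature dimensions, neighbour count, and cluster number are illustrative.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

# Hypothetical mouth-region feature vectors, one row per video frame.
rng = np.random.default_rng(5)
frames = rng.standard_normal((1500, 400))

# 1) Nonlinear dimensionality reduction: embed frames into a low-dimensional manifold.
embedding = Isomap(n_neighbors=12, n_components=3).fit_transform(frames)

# 2) K-means in the embedded space; the frames closest to the cluster centres
#    serve as 2D key visemes, which are then mapped to 3D key face shapes.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(embedding)
key_frame_ids = pairwise_distances_argmin(kmeans.cluster_centers_, embedding)
print(sorted(key_frame_ids))
```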
Performance and analysis of a three-dimensional nonorthogonal laser Doppler anemometer
NASA Technical Reports Server (NTRS)
Snyder, P. K.; Orloff, K. L.; Aoyagi, K.
1981-01-01
A three dimensional laser Doppler anemometer with a nonorthogonal third axis coupled by 14 deg was designed and tested. A highly three dimensional flow field of a jet in a crossflow was surveyed to test the three dimensional capability of the instrument. Sample data are presented demonstrating the ability of the 3D LDA to resolve three orthogonal velocity components. Modifications to the optics, signal processing electronics, and data reduction methods are suggested.
Anisotropic Structure of Rotating Homogeneous Turbulence at High Reynolds Numbers
NASA Technical Reports Server (NTRS)
Cambon, Claude; Mansour, Nagi N.; Squires, Kyle D.; Rai, Man Mohan (Technical Monitor)
1995-01-01
Large eddy simulation is used to investigate the development of anisotropies and the evolution towards a quasi two-dimensional state in rotating homogeneous turbulence at high Reynolds number. The present study demonstrates the existence of two transitions in the development of anisotropy. The first transition marks the onset of anisotropy and occurs when a macro-Rossby number (based on a longitudinal integral lengthscale) has decreased to near unity while the second transition occurs when a micro-Rossby number (defined in this work as the ratio of the rms fluctuating vorticity to background vorticity) has decreased to unity. The anisotropy marked by the first transition corresponds to a reduction in dimensionality while the second transition corresponds to a polarization of the flow, i.e., relative dominance of the velocity components in the plane normal to the rotation axis. Polarization is reflected by emergence of anisotropy measures based on the two-dimensional component of the turbulence. Investigation of the vorticity structure shows that the second transition is also characterized by an increasing tendency for alignment between the fluctuating vorticity vector and the background angular velocity vector with a preference for corrotative vorticity.
Kim, Songkil; Russell, Michael; Kulkarni, Dhaval D; Henry, Mathias; Kim, Steve; Naik, Rajesh R; Voevodin, Andrey A; Jang, Seung Soon; Tsukruk, Vladimir V; Fedorov, Andrei G
2016-01-26
Interfacial contact of two-dimensional graphene with three-dimensional metal electrodes is crucial to engineering high-performance graphene-based nanodevices with superior performance. Here, we report on the development of a rapid "nanowelding" method for enhancing properties of interface to graphene buried under metal electrodes using a focused electron beam induced deposition (FEBID). High energy electron irradiation activates two-dimensional graphene structure by generation of structural defects at the interface to metal contacts with subsequent strong bonding via FEBID of an atomically thin graphitic interlayer formed by low energy secondary electron-assisted dissociation of entrapped hydrocarbon contaminants. Comprehensive investigation is conducted to demonstrate formation of the FEBID graphitic interlayer and its impact on contact properties of graphene devices achieved via strong electromechanical coupling at graphene-metal interfaces. Reduction of the device electrical resistance by ∼50% at a Dirac point and by ∼30% at the gate voltage far from the Dirac point is obtained with concurrent improvement in thermomechanical reliability of the contact interface. Importantly, the process is rapid and has an excellent insertion potential into a conventional fabrication workflow of graphene-based nanodevices through single-step postprocessing modification of interfacial properties at the buried heterogeneous contact.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
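Not the Proteus implementation; a minimal two-grid sketch of the multigrid idea for the 1D Poisson model problem, with weighted-Jacobi smoothing, full-weighting restriction, a direct coarse solve, and linear prolongation. Grid sizes and cycle counts are illustrative.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2.0*u[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """One coarse-grid correction cycle: pre-smooth, restrict the residual,
    solve the coarse problem directly, prolongate the correction, post-smooth."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2.0*u[1:-1] - u[2:]) / (h*h)

    nc = (u.size - 1) // 2 + 1                  # coarse grid points
    rc = np.zeros(nc)
    fine = 2 * np.arange(1, nc - 1)             # full-weighting restriction
    rc[1:-1] = 0.25*r[fine-1] + 0.5*r[fine] + 0.25*r[fine+1]

    hc = 2.0 * h                                # coarse tridiagonal Poisson solve
    Ac = (2.0*np.eye(nc-2) - np.eye(nc-2, k=1) - np.eye(nc-2, k=-1)) / (hc*hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])

    u += np.interp(np.arange(u.size), 2*np.arange(nc), ec)   # linear prolongation
    return jacobi(u, f, h)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                # exact solution is sin(pi*x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))    # error settles at discretization level
```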
Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.
Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou
2016-01-01
For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.
Neuschulz, J; Schaefer, I; Scheer, M; Christ, H; Braumann, B
2013-07-01
In order to visualize and quantify the direction and extent of morphological upper-jaw changes in infants with unilateral cleft lip and palate (UCLP) during early orthodontic treatment, a three-dimensional method of cast analysis for routine application was developed. In the present investigation, this method was used to identify reaction patterns associated with specific cleft forms. The study included a cast series reflecting the upper-jaw situations of 46 infants with complete (n=27) or incomplete (n=19) UCLP during week 1 and months 3, 6, and 12 of life. Three-dimensional datasets were acquired and visualized with scanning software (DigiModel®; OrthoProof, The Netherlands). Following interactive identification of landmarks on the digitized surface relief, a defined set of representative linear parameters were three-dimensionally measured. At the same time, the three-dimensional surfaces of one patient series were superimposed based on a defined reference plane. Morphometric differences were statistically analyzed. Thanks to the user-friendly software, all landmarks could be identified quickly and reproducibly, thus, allowing for simultaneous three-dimensional measurement of all defined parameters. The measured values revealed that significant morphometric differences were present in all three planes of space between the two patient groups. Patients with complete UCLP underwent significantly larger reductions in cleft width (p<0.001), and sagittal growth in the complete UCLP group exceeded sagittal growth in the incomplete UCLP group by almost 50% within the first year of life. Based on patients with incomplete versus complete UCLP, different reaction patterns were identified that depended not on apparent severities of malformation but on cleft forms.
Extra-dimensional models on the lattice
Knechtli, Francesco; Rinaldi, Enrico
2016-08-05
In this paper we summarize the ongoing effort to study extra-dimensional gauge theories with lattice simulations. In these models the Higgs field is identified with extra-dimensional components of the gauge field. The Higgs potential is generated by quantum corrections and is protected from divergences by the higher dimensional gauge symmetry. Dimensional reduction to four dimensions can occur through compactification or localization. Gauge-Higgs unification models are often studied using perturbation theory. Numerical lattice simulations are used to go beyond these perturbative expectations and to include nonperturbative effects. We describe the known perturbative predictions and their fate in the strongly-coupled regime for various extra-dimensional models.
van Unen, Vincent; Höllt, Thomas; Pezzotti, Nicola; Li, Na; Reinders, Marcel J T; Eisemann, Elmar; Koning, Frits; Vilanova, Anna; Lelieveldt, Boudewijn P F
2017-11-23
Mass cytometry allows high-resolution dissection of the cellular composition of the immune system. However, the high-dimensionality, large size, and non-linear structure of the data poses considerable challenges for the data analysis. In particular, dimensionality reduction-based techniques like t-SNE offer single-cell resolution but are limited in the number of cells that can be analyzed. Here we introduce Hierarchical Stochastic Neighbor Embedding (HSNE) for the analysis of mass cytometry data sets. HSNE constructs a hierarchy of non-linear similarities that can be interactively explored with a stepwise increase in detail up to the single-cell level. We apply HSNE to a study on gastrointestinal disorders and three other available mass cytometry data sets. We find that HSNE efficiently replicates previous observations and identifies rare cell populations that were previously missed due to downsampling. Thus, HSNE removes the scalability limit of conventional t-SNE analysis, a feature that makes it highly suitable for the analysis of massive high-dimensional data sets.
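HSNE itself is not sketched here; as a point of reference, the conventional t-SNE analysis whose scalability limit the paper addresses can be run on a downsampled mass cytometry matrix with scikit-learn. The marker count, cell numbers, and subsampling size below are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(6)
# Hypothetical mass cytometry matrix: cells x markers (e.g., ~30 markers).
cells = rng.standard_normal((200_000, 30))

# Conventional practice: downsample before t-SNE, which is exactly how rare
# populations can be lost; HSNE's hierarchy of embeddings avoids this step.
subsample = cells[rng.choice(cells.shape[0], size=10_000, replace=False)]
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(subsample)
print(embedding.shape)   # (10000, 2)
```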
Simulation of springback and microstructural analysis of dual phase steels
NASA Astrophysics Data System (ADS)
Kalyan, T. Sri.; Wei, Xing; Mendiguren, Joseba; Rolfe, Bernard
2013-12-01
With increasing demand for weight reduction and better crashworthiness abilities in car development, advanced high strength Dual Phase (DP) steels have been progressively used when making automotive parts. The higher strength steels exhibit higher springback and lower dimensional accuracy after stamping. This has necessitated the use of simulation of each stamped component prior to production to estimate the part's dimensional accuracy. Understanding the micro-mechanical behaviour of AHSS sheet may provide more accuracy to stamping simulations. This work can be divided into two parts: first, modelling a standard channel forming process; second, modelling the microstructure of the process. The standard top hat channel forming process, benchmark NUMISHEET'93, is used for investigating the springback effect of WISCO Dual Phase steels. The second part of this work includes the finite element analysis of microstructures to understand the behaviour of the multi-phase steel at a more fundamental level. The outcomes of this work will help in the dimensional control of steels during the manufacturing stage based on the material's microstructure.
Cross-plane coherent acoustic phonons in two-dimensional organic-inorganic hybrid perovskites.
Guo, Peijun; Stoumpos, Constantinos C; Mao, Lingling; Sadasivam, Sridhar; Ketterson, John B; Darancet, Pierre; Kanatzidis, Mercouri G; Schaller, Richard D
2018-05-22
Two-dimensional Ruddlesden-Popper organic-inorganic hybrid layered perovskites (2D RPs) are solution-grown semiconductors with prospective applications in next-generation optoelectronics. The heat-carrying, low-energy acoustic phonons, which are important for heat management of 2D RP-based devices, have remained unexplored. Here we report on the generation and propagation of coherent longitudinal acoustic phonons along the cross-plane direction of 2D RPs, following separate characterizations of below-bandgap refractive indices. Through experiments on single crystals of systematically varied perovskite layer thickness, we demonstrate significant reduction in both group velocity and propagation length of acoustic phonons in 2D RPs as compared to the three-dimensional methylammonium lead iodide counterpart. As borne out by a minimal coarse-grained model, these vibrational properties arise from a large acoustic impedance mismatch between the alternating layers of perovskite sheets and bulky organic cations. Our results inform on thermal transport in highly impedance-mismatched crystal sub-lattices and provide insights towards design of materials that exhibit highly anisotropic thermal dissipation properties.
Enhanced superconductivity in atomically thin TaS2
Navarro-Moratalla, Efrén; Island, Joshua O.; Mañas-Valero, Samuel; Pinilla-Cienfuegos, Elena; Castellanos-Gomez, Andres; Quereda, Jorge; Rubio-Bollinger, Gabino; Chirolli, Luca; Silva-Guillén, Jose Angel; Agraït, Nicolás; Steele, Gary A.; Guinea, Francisco; van der Zant, Herre S. J.; Coronado, Eugenio
2016-01-01
The ability to exfoliate layered materials down to the single layer limit has presented the opportunity to understand how a gradual reduction in dimensionality affects the properties of bulk materials. Here we use this top–down approach to address the problem of superconductivity in the two-dimensional limit. The transport properties of electronic devices based on 2H tantalum disulfide flakes of different thicknesses are presented. We observe that superconductivity persists down to the thinnest layer investigated (3.5 nm), and interestingly, we find a pronounced enhancement in the critical temperature from 0.5 to 2.2 K as the layers are thinned down. In addition, we propose a tight-binding model, which allows us to attribute this phenomenon to an enhancement of the effective electron–phonon coupling constant. This work provides evidence that reducing the dimensionality can strengthen superconductivity as opposed to the weakening effect that has been reported in other 2D materials so far. PMID:26984768
Mathew, Boby; Léon, Jens; Sannemann, Wiebke; Sillanpää, Mikko J.
2018-01-01
Gene-by-gene interactions, also known as epistasis, regulate many complex traits in different species. With the availability of low-cost genotyping it is now possible to study epistasis on a genome-wide scale. However, identifying genome-wide epistasis is a high-dimensional multiple regression problem and needs the application of dimensionality reduction techniques. Flowering Time (FT) in crops is a complex trait that is known to be influenced by many interacting genes and pathways in various crops. In this study, we successfully apply Sure Independence Screening (SIS) for dimensionality reduction to identify two-way and three-way epistasis for the FT trait in a Multiparent Advanced Generation Inter-Cross (MAGIC) barley population using the Bayesian multilocus model. The MAGIC barley population was generated from intercrossing among eight parental lines and thus, offered greater genetic diversity to detect higher-order epistatic interactions. Our results suggest that SIS is an efficient dimensionality reduction approach to detect high-order interactions in a Bayesian multilocus model. We also observe that many of our findings (genomic regions with main or higher-order epistatic effects) overlap with known candidate genes that have been already reported in barley and closely related species for the FT trait. PMID:29254994
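Not the Bayesian multilocus model itself; a sketch of the Sure Independence Screening (SIS) step: markers are ranked by absolute marginal correlation with the phenotype, roughly n/log(n) of them are retained, and only interactions among the survivors are enumerated for the downstream model. The simulated genotypes and effect sizes are illustrative.

```python
import numpy as np
from itertools import combinations

def sis_screen(X, y, d=None):
    """Keep the d markers with the largest absolute marginal correlation."""
    n = X.shape[0]
    if d is None:
        d = max(2, int(n / np.log(n)))
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    score = np.abs(Xc.T @ yc) / n
    return np.argsort(score)[::-1][:d]

rng = np.random.default_rng(7)
n, p = 500, 5000                                     # far more markers than lines
X = rng.integers(0, 3, size=(n, p)).astype(float)    # 0/1/2 genotype codes
y = 1.5 * X[:, 10] + 1.2 * X[:, 42] + 0.8 * X[:, 10] * X[:, 42] \
    + rng.standard_normal(n)

kept = sis_screen(X, y)
interaction_candidates = list(combinations(kept, 2))  # pairs passed to the multilocus model
print(len(kept), 10 in kept, 42 in kept)              # the causal markers survive screening
```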
NASA Astrophysics Data System (ADS)
Shin, Kyung-Hun; Park, Hyung-Il; Kim, Kwan-Ho; Jang, Seok-Myeong; Choi, Jang-Young
2017-05-01
The shape of the magnet is essential to a slotless permanent magnet linear synchronous machine (PMLSM) because it directly determines machine performance. This paper presents a reduction in the thrust ripple of a PMLSM through the use of arc-shaped magnets based on electromagnetic field theory. The magnetic field solutions were obtained by considering the end effect using a magnetic vector potential and a two-dimensional Cartesian coordinate system. The analytical solution of each subdomain (PM, air-gap, coil, and end region) is derived, and the field solution is obtained by applying the boundary and interface conditions between the subdomains. In particular, an analytical method was derived for the instantaneous thrust and thrust ripple reduction of a PMLSM with arc-shaped magnets. To demonstrate the validity of the analytical results, they were compared with the back electromotive force obtained from a finite element analysis and from experiments on the manufactured prototype. The optimal point for thrust ripple minimization is suggested.
Combination for electrolytic reduction of alumina
Brown, Craig W.; Brooks, Richard J.; Frizzle, Patrick B.; Juric, Drago D.
2002-04-30
An electrolytic bath for use during the electrolytic reduction of alumina to aluminum. The bath comprises molten electrolyte having the following ingredients: AlF3 and at least one salt selected from the group consisting of NaF, KF, and LiF; and about 0.004 wt. % to about 0.2 wt. %, based on total weight of the molten electrolyte, of at least one transition metal or at least one compound of the metal or both. The compound is a fluoride, oxide, or carbonate. The metal is nickel, iron, copper, cobalt, or molybdenum. The bath is employed in a combination including a vessel for containing the bath and at least one non-consumable anode and at least one dimensionally stable cathode in the bath. Employing the instant bath during electrolytic reduction of alumina to aluminum improves the wetting of aluminum on a cathode by reducing or eliminating the formation of non-metallic deposits on the cathode.
A Dimensionality Reduction Technique for Enhancing Information Context.
1980-06-01
table, memory requirements for the difference arrays are based on the FORTRAN G programming language as implemented on an IBM 360/67. Single...the greatest amount of insight. All studies were performed on an IBM 360/67. Transformation 53 numerical results were produced as well as two...the origin to (19,19,19,19,19,19,19,19,19,19). Two classes were generated in each case. The samples were synthetically derived using the IBM 360/57 and
Beechem, Thomas Edwin; McDonald, Anthony E.; Ohta, Taisuke; ...
2015-10-26
Oxidation of exfoliated gallium selenide (GaSe) is investigated through Raman, photoluminescence, Auger, and X-ray photoelectron spectroscopies. Photoluminescence and Raman intensity reductions associated with spectral features of GaSe are shown to coincide with the emergence of signatures emanating from the by-products of the oxidation reaction, namely, Ga2Se3 and amorphous Se. Furthermore, photoinduced oxidation is initiated over a portion of a flake highlighting the potential for laser-based patterning of two-dimensional heterostructures via selective oxidation.
de la Vega de León, Antonio; Bajorath, Jürgen
2016-09-01
The concept of chemical space is of fundamental relevance for medicinal chemistry and chemical informatics. Multidimensional chemical space representations are coordinate-based. Chemical space networks (CSNs) have been introduced as a coordinate-free representation. A computational approach is presented for the transformation of multidimensional chemical space into CSNs. The design of transformation CSNs (TRANS-CSNs) is based upon a similarity function that directly reflects distance relationships in original multidimensional space. TRANS-CSNs provide an immediate visualization of coordinate-based chemical space and do not require the use of dimensionality reduction techniques. At low network density, TRANS-CSNs are readily interpretable and make it possible to evaluate structure-activity relationship information originating from multidimensional chemical space.
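A rough sketch of how a coordinate-based chemical space can be turned into a network at a chosen density is given below; the distance-to-similarity mapping used here is a generic monotone function, not the specific similarity function of the TRANS-CSN design, and the descriptor coordinates are synthetic.

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

def trans_csn(coords, target_density=0.025):
    """Sketch of a transformation CSN: convert pairwise Euclidean distances in
    the multidimensional descriptor space into similarities, then keep only the
    strongest edges so the resulting network has roughly the requested density."""
    d = squareform(pdist(coords))                   # (n, n) distance matrix
    sim = 1.0 / (1.0 + d)                           # simple monotone distance-to-similarity map
    n = len(coords)
    iu = np.triu_indices(n, k=1)
    n_edges = max(1, int(target_density * n * (n - 1) / 2))
    cutoff = np.sort(sim[iu])[::-1][n_edges - 1]    # similarity threshold giving that density
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i, j in zip(*iu):
        if sim[i, j] >= cutoff:
            G.add_edge(int(i), int(j), weight=float(sim[i, j]))
    return G

# hypothetical 20-dimensional descriptor coordinates for 100 compounds
G = trans_csn(np.random.rand(100, 20), target_density=0.025)
```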
A Combinatorial Approach to Detecting Gene-Gene and Gene-Environment Interactions in Family Studies
Lou, Xiang-Yang; Chen, Guo-Bo; Yan, Lei; Ma, Jennie Z.; Mangold, Jamie E.; Zhu, Jun; Elston, Robert C.; Li, Ming D.
2008-01-01
Widespread multifactor interactions present a significant challenge in determining risk factors of complex diseases. Several combinatorial approaches, such as the multifactor dimensionality reduction (MDR) method, have emerged as a promising tool for better detecting gene-gene (G × G) and gene-environment (G × E) interactions. We recently developed a general combinatorial approach, namely the generalized multifactor dimensionality reduction (GMDR) method, which can entertain both qualitative and quantitative phenotypes and allows for both discrete and continuous covariates to detect G × G and G × E interactions in a sample of unrelated individuals. In this article, we report the development of an algorithm that can be used to study G × G and G × E interactions for family-based designs, called pedigree-based GMDR (PGMDR). Compared to the available method, our proposed method has several major improvements, including allowing for covariate adjustments and being applicable to arbitrary phenotypes, arbitrary pedigree structures, and arbitrary patterns of missing marker genotypes. Our Monte Carlo simulations provide evidence that the PGMDR method is superior in performance to identify epistatic loci compared to the MDR-pedigree disequilibrium test (PDT). Finally, we applied our proposed approach to a genetic data set on tobacco dependence and found a significant interaction between two taste receptor genes (i.e., TAS2R16 and TAS2R38) in affecting nicotine dependence. PMID:18834969
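The pooling step that MDR-style methods share can be illustrated with a small sketch: each multilocus genotype cell is labelled high- or low-risk by comparing its case:control ratio to the overall ratio. This is only the basic MDR idea; the GMDR/PGMDR extensions to covariates, quantitative traits and pedigrees are not reproduced, and the genotype and status arrays are hypothetical.

```python
import numpy as np
from collections import defaultdict

def mdr_labels(genotypes, status):
    """Basic MDR pooling step (sketch): for a chosen marker pair/triple, label a
    multilocus genotype cell high-risk (1) if its case:control ratio exceeds the
    overall ratio, else low-risk (0).
    genotypes: (n_samples, k) integer array; status: (n_samples,), 1 = case, 0 = control."""
    overall = status.sum() / max((status == 0).sum(), 1)
    cases, controls = defaultdict(int), defaultdict(int)
    for g, s in zip(map(tuple, genotypes), status):
        if s == 1:
            cases[g] += 1
        else:
            controls[g] += 1
    cells = set(cases) | set(controls)
    return {c: int(cases[c] / max(controls[c], 1) > overall) for c in cells}

# hypothetical two-marker example
rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(300, 2))
status = rng.integers(0, 2, size=300)
risk_map = mdr_labels(geno, status)   # maps each genotype combination to a 0/1 risk class
```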
Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl
2015-05-01
To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered-back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled in the study and underwent STD and LD CTC with filtered-back projection, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image analyses, and polyp detection were assessed. Objective image noise analysis demonstrates significant noise reduction using the MBIR technique (P < .05) despite acquisition at lower doses. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses: LD (dose-length product, 257.7), STD (dose-length product, 483.6). LD MBIR CTC objectively shows improved image noise with the parameters used in our study. Subjectively, image quality is maintained. Polyp detection shows no significant difference but, because of the small numbers, needs further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR in the context of CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyman, Jeffrey De'Haven; Hagberg, Aric Arild; Mohd-Yusof, Jamaludin
2017-07-10
Here, we present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We also derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. We obtain accurate estimates of first passage times with an order of magnitude reduction of CPU time and mesh size using the proposed method.
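The subgraph construction described above can be prototyped directly with NetworkX, which exposes a generator over simple paths ordered by length; the random graph, boundary node labels and unit edge weights below are stand-ins for an actual DFN-derived graph.

```python
import itertools
import networkx as nx

def k_shortest_path_subgraph(G, source, target, k):
    """Sketch of the graph reduction: take the union of the k shortest (simple)
    paths between the inflow and outflow nodes and return the induced subgraph
    on which transport would then be simulated."""
    paths = itertools.islice(
        nx.shortest_simple_paths(G, source, target, weight="weight"), k)
    nodes = set()
    for p in paths:
        nodes.update(p)
    return G.subgraph(nodes).copy()

# hypothetical fracture-network graph; nodes 0 and 199 play the inflow/outflow roles
G = nx.connected_watts_strogatz_graph(200, 6, 0.3, seed=2)
for u, v in G.edges:
    G.edges[u, v]["weight"] = 1.0
sub = k_shortest_path_subgraph(G, source=0, target=199, k=20)
```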
Three-dimensional collimation of in-plane-propagating light using silicon micromachined mirror
NASA Astrophysics Data System (ADS)
Sabry, Yasser M.; Khalil, Diaa; Saadany, Bassam; Bourouina, Tarik
2014-03-01
We demonstrate light collimation of single-mode optical fibers using a deeply-etched three-dimensional curved micromirror on a silicon chip. The three-dimensional curvature of the mirror is controlled by a process combining deep reactive ion etching and isotropic etching of silicon. The produced surface is astigmatic, with an out-of-plane radius of curvature that is about one half the in-plane radius of curvature. For a 300-μm in-plane radius and an incident beam inclined in-plane at an angle of 45 degrees with respect to the principal axis, the reflected beam remains stigmatic, with about a 4.25-times reduction in the beam expansion angle in free space and about a 12-dB reduction in propagation losses when received by a limited-aperture detector.
3D-Hydrogel Based Polymeric Nanoreactors for Silver Nano-Antimicrobial Composites Generation
Soto-Quintero, Albanelly; Romo-Uribe, Ángel; Bermúdez-Morales, Víctor H.; Quijada-Garrido, Isabel
2017-01-01
This study underscores the development of Ag hydrogel nanocomposites, as smart substrates for antibacterial uses, via innovative in situ reactive and reduction pathways. To this end, two different synthetic strategies were used. Firstly, thiol-acrylate (PSA)-based hydrogels were attained via thiol-ene and radical polymerization of polyethylene glycol (PEG) and polycaprolactone (PCL). As a second approach, polyurethane (PU)-based hydrogels were achieved by condensation polymerization from diisocyanates and PCL and PEG diols. These syntheses rendered active three-dimensional (3D) hydrogel matrices which were used as nanoreactors for the in situ reduction of AgNO3 to silver nanoparticles. The redox chemistry of the stannous catalyst in the PU hydrogel yielded spherical AgNP formation even at 4 °C in the absence of an external reductant, while an appropriate thiol-functionalized polymeric network promoted spherical AgNPs well dispersed throughout the PSA hydrogel network after heating the swollen hydrogel at 103 °C in the presence of a citrate reductant. Optical and swelling behaviors of both series of hydrogel nanocomposites were investigated as key factors involved in their antimicrobial efficacy over time. Lastly, the in vitro antibacterial activity of Ag-loaded hydrogels exposed to Pseudomonas aeruginosa and Escherichia coli strains indicated a noticeable sustained inhibitory effect, especially for Ag–PU hydrogel nanocomposites, with bacterial growth inhibition capabilities up to 120 h of cultivation. PMID:28763050
Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.
2012-01-01
The aim of this work is to propose an alternative way for wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
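A scikit-learn sketch of the processing chain (ICA for dimensionality reduction, PLS for sensory descriptor prediction, an ANN for classification) is given below; the sensor responses, class labels and panel scores are synthetic placeholders, and the layer sizes and component counts are illustrative rather than those used in the study.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# hypothetical data: e-nose sensor responses, wine class labels and sensory panel scores
rng = np.random.default_rng(3)
X = rng.normal(size=(26 * 5, 16))            # replicate measurements x sensor features
y_class = rng.integers(0, 4, size=X.shape[0])
y_panel = rng.normal(size=(X.shape[0], 3))   # e.g. three sensory descriptors

# ICA for dimensionality reduction followed by an ANN classifier
clf = make_pipeline(StandardScaler(),
                    FastICA(n_components=6, random_state=0),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X, y_class)

# PLS regression to predict the sensory descriptors from the same responses
pls = PLSRegression(n_components=6).fit(X, y_panel)
pred_scores = pls.predict(X)
```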
A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems
NASA Astrophysics Data System (ADS)
Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix
2018-03-01
We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.
NASA Astrophysics Data System (ADS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold, i.e., the manifold of matrices with orthogonal columns. The additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties in the propagation of solitary waves in a one-dimensional granular system.
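For contrast with the gradient-free formulation described above, the classic gradient-based active subspace construction that the authors improve upon can be sketched as an eigendecomposition of the empirical gradient outer-product matrix; the toy function below is chosen so that the true active subspace is one-dimensional.

```python
import numpy as np

def active_subspace(grad_f, dim):
    """Classic active-subspace construction (sketch): eigendecompose the empirical
    matrix C = E[grad(f) grad(f)^T] built from gradient samples and return the
    leading eigenvectors, which span the directions of maximal response variation.
    grad_f: (n_samples, n_inputs) array of gradient evaluations."""
    C = grad_f.T @ grad_f / grad_f.shape[0]
    eigval, eigvec = np.linalg.eigh(C)              # ascending eigenvalues
    order = np.argsort(eigval)[::-1]
    return eigvec[:, order[:dim]], eigval[order]

# toy example: f(x) = sin(w.x) has a one-dimensional active subspace along w
rng = np.random.default_rng(4)
w = rng.normal(size=20)
X = rng.normal(size=(500, 20))
grads = np.cos(X @ w)[:, None] * w[None, :]         # gradient of f at each sample
W, spectrum = active_subspace(grads, dim=1)          # W spans (approximately) span{w}
```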
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimensionality reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis for palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the test and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.
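The basic (one-directional) 2DPCA step underlying the blockwise bi-directional variant can be sketched as follows; the palmprint images are random placeholders, and the blockwise, bi-directional and sparse-classification stages of the full method are not reproduced.

```python
import numpy as np

def twod_pca(images, n_components):
    """2DPCA (sketch): build the image scatter matrix directly from the image
    matrices and project each image onto its leading eigenvectors, giving a
    (rows x n_components) feature matrix per image.
    images: (n_images, rows, cols) array."""
    centered = images - images.mean(axis=0)
    # image covariance (scatter) matrix of size (cols, cols)
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    eigval, eigvec = np.linalg.eigh(G)
    proj = eigvec[:, np.argsort(eigval)[::-1][:n_components]]
    return np.einsum('nij,jk->nik', images, proj), proj

# 40 hypothetical 64x64 palmprint images reduced to 64x8 feature matrices
feats, proj = twod_pca(np.random.rand(40, 64, 64), n_components=8)
```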
Chen, Yifei; Sun, Yuxing; Han, Bing-Qing
2015-01-01
Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between the context information to take word co-occurrences and phrase chunks around the features into account. Then we introduce the similarity of context information into the importance measure of the features as a substitute for document and term frequency. Hence we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
REBURNING THERMAL AND CHEMICAL PROCESSES IN A TWO-DIMENSIONAL PILOT-SCALE SYSTEM
The paper describes an experimental investigation of the thermal and chemical processes influencing NOx reduction by natural gas reburning in a two-dimensional pilot-scale combustion system. Reburning effectiveness for initial NOx levels of 50-500 ppm and reburn stoichiometric ra...
Wave drag as the objective function in transonic fighter wing optimization
NASA Technical Reports Server (NTRS)
Phillips, P. S.
1984-01-01
The original computational method for determining wave drag in a three-dimensional transonic analysis was replaced by a wave drag formula based on the loss in momentum across an isentropic shock. This formula was used as the objective function in a numerical optimization procedure to reduce the wave drag of a fighter wing at transonic maneuver conditions. The optimization procedure minimized wave drag through modifications to the wing section contours defined by a wing profile shape function. A significant reduction in wave drag was achieved while maintaining a high lift coefficient. Comparisons of the pressure distributions for the initial and optimized wing geometries showed significant reductions in the leading-edge peaks and in the shock strength across the span.
Data on Support Vector Machines (SVM) model to forecast photovoltaic power.
Malvoni, M; De Giorgi, M G; Congedo, P M
2016-12-01
The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion together with principal component analysis (PCA) is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material.
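A simplified stand-in for the forecasting chain can be sketched with scikit-learn, using PCA for the input-size reduction and a kernel SVR in place of LS-SVM (which scikit-learn does not ship); the lagged-input matrix, target and hyperparameters are hypothetical, and separate models would be fitted for each of the 1, 3, 6, 12 and 24 h horizons.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# hypothetical history: rows = hours, columns = lagged PV power and weather variables
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 48))
y = rng.normal(size=2000)            # PV power h hours ahead (one model per horizon)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),        # input data-size reduction step
                      SVR(kernel='rbf', C=10.0))   # stand-in for LS-SVM
model.fit(X[:1500], y[:1500])
forecast = model.predict(X[1500:])                 # day-ahead style hold-out predictions
```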
Similarity solutions of some two-space-dimensional nonlinear wave evolution equations
NASA Technical Reports Server (NTRS)
Redekopp, L. G.
1980-01-01
Similarity reductions of the two-space-dimensional versions of the Korteweg-de Vries, modified Korteweg-de Vries, Benjamin-Davis-Ono, and nonlinear Schroedinger equations are presented, and some solutions of the reduced equations are discussed. Exact dispersive solutions of the two-dimensional Korteweg-de Vries equation are obtained, and the similarity solution of this equation is shown to be reducible to the second Painleve transcendent.
Hidden symmetries of Eisenhart-Duval lift metrics and the Dirac equation with flux
NASA Astrophysics Data System (ADS)
Cariglia, Marco
2012-10-01
The Eisenhart-Duval lift allows embedding nonrelativistic theories into a Lorentzian geometrical setting. In this paper we study the lift from the point of view of the Dirac equation and its hidden symmetries. We show that dimensional reduction of the Dirac equation for the Eisenhart-Duval metric in general gives rise to the nonrelativistic Lévy-Leblond equation in lower dimension. We study in detail in which specific cases the lower dimensional limit is given by the Dirac equation, with scalar and vector flux, and the relation between lift, reduction, and the hidden symmetries of the Dirac equation. While there is a precise correspondence in the case of the lower dimensional massive Dirac equation with no flux, we find that for generic fluxes it is not possible to lift or reduce all solutions and hidden symmetries. As a by-product of this analysis, we construct new Lorentzian metrics with special tensors by lifting Killing-Yano and closed conformal Killing-Yano tensors and describe the general conformal Killing-Yano tensor of the Eisenhart-Duval lift metrics in terms of lower dimensional forms. Last, we show how, by dimensionally reducing the higher dimensional operators of the massless Dirac equation that are associated with shared hidden symmetries, it is possible to recover hidden symmetry operators for the Dirac equation with flux.
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. With a 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods can simultaneously attain a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods in the low-bitrate transmission setting.
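A minimal NumPy sketch of a linear autoencoder trained by gradient descent on image patches is shown below; with a linear encoder/decoder and squared error its optimum spans the same subspace as PCA, which is the property a DLA exploits. Patch size, code dimension, learning rate and the clustering front-end are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_linear_autoencoder(X, code_dim, lr=1e-2, epochs=500, seed=0):
    """Linear autoencoder (sketch): encoder W_e (d -> k) and decoder W_d (k -> d)
    trained to minimise the mean squared reconstruction error of the patches X
    by plain batch gradient descent.  X: (n_patches, d), assumed zero-mean."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W_e = rng.normal(scale=0.1, size=(d, code_dim))
    W_d = rng.normal(scale=0.1, size=(code_dim, d))
    for _ in range(epochs):
        Z = X @ W_e                      # codes (n, k)
        R = Z @ W_d - X                  # reconstruction residual
        grad_Wd = Z.T @ R / len(X)
        grad_We = X.T @ (R @ W_d.T) / len(X)
        W_d -= lr * grad_Wd
        W_e -= lr * grad_We
    return W_e, W_d

patches = np.random.rand(1000, 64)               # hypothetical 8x8 image patches, flattened
patches -= patches.mean(axis=0)
W_e, W_d = train_linear_autoencoder(patches, code_dim=1)   # 1-D code per patch
codes = patches @ W_e
```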
NASA Astrophysics Data System (ADS)
Brockner, Blake; Veal, Charlie; Dowdy, Joshua; Anderson, Derek T.; Williams, Kathryn; Luke, Robert; Sheen, David
2018-04-01
The identification, followed by avoidance or removal, of explosive hazards in past and/or present conflict zones is a serious challenge for both civilian and military personnel. The task is difficult because variability exists with respect to the objects, their environment and emplacement context, to name a few factors. A goal is the development of automatic or human-in-the-loop sensor technologies that leverage signal processing, data fusion and machine learning. Herein, we explore the detection of side attack explosive hazards (SAEHs) in three-dimensional voxel space radar via different shallow and deep convolutional neural network (CNN) architectures. Dimensionality reduction is performed by using multiple projected images versus the raw three-dimensional voxel data, which leads to noteworthy savings in input size and associated network hyperparameters. Last, we explore the accuracy and interpretation of solutions learned via random versus intelligent network weight initialization. Experiments are provided on a U.S. Army data set collected over different times, weather conditions, target types and concealments. Preliminary results indicate that deep learning can perform as well as, if not better than, a skilled domain expert, even in light of limited training data with a class imbalance.
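The projection-based dimensionality reduction can be sketched by collapsing the voxel cube along each axis with a simple reduction operator and feeding the resulting 2-D images to the CNN in place of the raw volume; the specific projections used in the paper may differ, and the voxel cube below is synthetic.

```python
import numpy as np

def project_voxels(volume, reducers=(np.max, np.mean)):
    """Reduce a 3-D radar voxel volume to a small set of 2-D images by collapsing
    it along each axis with the given reduction operators (sketch).
    volume: (X, Y, Z) array -> list of 2-D projections used as CNN input channels."""
    channels = []
    for reduce_fn in reducers:
        for axis in range(3):
            channels.append(reduce_fn(volume, axis=axis))
    return channels

vol = np.random.rand(64, 64, 64)       # hypothetical voxel cube around a candidate alarm
imgs = project_voxels(vol)             # 6 projected images instead of 64^3 raw voxels
```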
Locating landmarks on high-dimensional free energy surfaces
Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark E.
2015-01-01
Coarse graining of complex systems possessing many degrees of freedom can often be a useful approach for analyzing and understanding key features of these systems in terms of just a few variables. The relevant energy landscape in a coarse-grained description is the free energy surface as a function of the coarse-grained variables, which, despite the dimensional reduction, can still be an object of high dimension. Consequently, navigating and exploring this high-dimensional free energy surface is a nontrivial task. In this paper, we use techniques from multiscale modeling, stochastic optimization, and machine learning to devise a strategy for locating minima and saddle points (termed “landmarks”) on a high-dimensional free energy surface “on the fly” and without requiring prior knowledge of or an explicit form for the surface. In addition, we propose a compact graph representation of the landmarks and connections between them, and we show that the graph nodes can be subsequently analyzed and clustered based on key attributes that elucidate important properties of the system. Finally, we show that knowledge of landmark locations allows for the efficient determination of their relative free energies via enhanced sampling techniques. PMID:25737545
Simulation of Fluid Flow and Collection Efficiency for an SEA Multi-element Probe
NASA Technical Reports Server (NTRS)
Rigby, David L.; Struk, Peter M.; Bidwell, Colin
2014-01-01
Numerical simulations of fluid flow and collection efficiency for a Science Engineering Associates (SEA) multi-element probe are presented. Simulation of the flow field was produced using the Glenn-HT Navier-Stokes solver. Three-dimensional unsteady results were produced and then time averaged for the collection efficiency results. Three grid densities were investigated to enable an assessment of grid dependence. Collection efficiencies were generated for three spherical particle sizes, 100, 20, and 5 microns in diameter, using the codes LEWICE3D and LEWICE2D. The free stream Mach number was 0.27, representing a velocity of approximately 86 m/s. It was observed that a reduction in velocity of about 15-20% occurred as the flow entered the shroud of the probe. Collection efficiency results indicate a reduction in collection efficiency as particle size is reduced. The reduction with particle size is expected; however, the results tended to be lower than previous results generated for isolated two-dimensional elements. The deviation from the two-dimensional results is more pronounced for the smaller particles and is likely due to the effect of the protective shroud.
Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities
NASA Astrophysics Data System (ADS)
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Performance of a brain machine interface (BMI) critically depends on the selection of input data because the information embedded in the neural activities is highly redundant. In addition, properly selected input data with a reduced dimension lead to improved decoding generalization ability and decreased computational effort, both of which are significant advantages for clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory-related spatio-temporal neural activities. The algorithm gradually reduces the input data dimension by dropping neural data spatio-temporally while preserving the decoding accuracy as far as possible. A support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
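The spirit of SDR can be sketched as greedy backward elimination driven by cross-validated SVM decoding accuracy: repeatedly drop the input dimension whose removal hurts accuracy the least, and stop once accuracy would fall below a tolerance. The spatio-temporal grouping of channels and time bins used in the paper is not reproduced, and the feature matrix below is a random placeholder.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def sequential_dimension_reduction(X, y, tol=0.02, cv=5):
    """Greedy backward elimination (sketch): at each step remove the feature whose
    removal keeps the cross-validated decoding accuracy highest, as long as the
    accuracy stays within `tol` of the accuracy with all currently kept features."""
    keep = list(range(X.shape[1]))
    base = cross_val_score(SVC(kernel='linear'), X[:, keep], y, cv=cv).mean()
    while len(keep) > 1:
        scores = []
        for f in keep:
            cand = [c for c in keep if c != f]
            s = cross_val_score(SVC(kernel='linear'), X[:, cand], y, cv=cv).mean()
            scores.append((s, f))
        best_score, dropped = max(scores)
        if best_score < base - tol:
            break
        keep.remove(dropped)
        base = max(base, best_score)
    return keep

# hypothetical spatio-temporal features: channels x time bins flattened per trial
rng = np.random.default_rng(6)
X = rng.normal(size=(120, 24))
y = rng.integers(0, 2, size=120)
selected = sequential_dimension_reduction(X, y)
```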
Geilfus, Christoph-Martin; Ober, Dietrich; Eichacker, Lutz A.; Mühling, Karl Hermann; Zörb, Christian
2015-01-01
The salt-sensitive crop Zea mays L. shows a rapid leaf growth reduction upon NaCl stress. There is increasing evidence that salinity impairs the ability of the cell walls to expand, ultimately inhibiting growth. Wall-loosening is a prerequisite for cell wall expansion, a process that is under the control of cell wall-located expansin proteins. In this study, the abundance of these proteins was analyzed in response to salt stress using gel-based two-dimensional proteomics and two-dimensional Western blotting. Results show that ZmEXPB6 (Z. mays β-expansin 6) protein is lacking in growth-inhibited leaves of salt-stressed maize. Of note, the exogenous application of heterologously expressed and metal-chelate-affinity chromatography-purified ZmEXPB6 on growth-reduced leaves that lack native ZmEXPB6 under NaCl stress partially restored leaf growth. In vitro assays on frozen-thawed leaf sections revealed that recombinant ZmEXPB6 acts on the capacity of the walls to extend. Our results identify expansins as a factor that partially restores leaf growth of maize in saline environments. PMID:25750129
NASA Astrophysics Data System (ADS)
Shen, Wei; Li, Dongsheng; Zhang, Shuaifang; Ou, Jinping
2017-07-01
This paper presents a hybrid method that combines the B-spline wavelet on the interval (BSWI) finite element method and spectral analysis based on the fast Fourier transform (FFT) to study wave propagation in one-dimensional (1D) structures. BSWI scaling functions are utilized to approximate the theoretical wave solution in the spatial domain and construct a high-accuracy dynamic stiffness matrix. Dynamic reduction at the element level is applied to eliminate the interior degrees of freedom of the BSWI elements and substantially reduce the size of the system matrix. The dynamic equations of the system are then transformed and solved in the frequency domain through FFT-based spectral analysis, which is especially suitable for parallel computation. A comparative analysis of four different finite element methods is conducted to demonstrate the validity and efficiency of the proposed method when utilized in high-frequency wave problems. Other numerical examples are utilized to simulate the influence of cracks and delamination on wave propagation in 1D rods and beams. Finally, the errors caused by the FFT and their corresponding solutions are presented.
Resolvent analysis of shear flows using One-Way Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Rigas, Georgios; Schmidt, Oliver; Towne, Aaron; Colonius, Tim
2017-11-01
For three-dimensional flows, questions of stability, receptivity, secondary flows, and coherent structures require the solution of large partial-derivative eigenvalue problems. Reduced-order approximations are thus required for engineering prediction since these problems are often computationally intractable or prohibitively expensive. For spatially slowly evolving flows, such as jets and boundary layers, the One-Way Navier-Stokes (OWNS) equations permit a fast spatial marching procedure that results in a huge reduction in computational cost. Here, an adjoint-based optimization framework is proposed and demonstrated for calculating optimal boundary conditions and optimal volumetric forcing. The corresponding optimal response modes are validated against modes obtained in terms of global resolvent analysis. For laminar base flows, the optimal modes reveal modal and non-modal transition mechanisms. For turbulent base flows, they predict the evolution of coherent structures in a statistical sense. Results from the application of the method to three-dimensional laminar wall-bounded flows and turbulent jets will be presented. This research was supported by the Office of Naval Research (N00014-16-1-2445) and Boeing Company (CT-BA-GTA-1).
Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.
2011-01-01
The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality reduction, remains a nearly insurmountable challenge. The Statistics Online Computational Resource (www.SOCR.ucla.edu) provides portable online aids for probability and statistics education, technology-based instruction and statistical computing. We have developed a new Java-based infrastructure, SOCR Motion Charts, for discovery-based exploratory analysis of multivariate data. This interactive data visualization tool enables the visualization of high-dimensional longitudinal data. SOCR Motion Charts allows mapping of ordinal, nominal and quantitative variables onto time, 2D axes, size, colors, glyphs and appearance characteristics, which facilitates the interactive display of multidimensional data. We validated this new visualization paradigm using several publicly available multivariate datasets including Ice-Thickness, Housing Prices, Consumer Price Index, and California Ozone Data. SOCR Motion Charts is designed using object-oriented programming, implemented as a Java Web-applet and is available to the entire community on the web at www.socr.ucla.edu/SOCR_MotionCharts. It can be used as an instructional tool for rendering and interrogating high-dimensional data in the classroom, as well as a research tool for exploratory data analysis. PMID:21479108
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
Using Single-trial EEG to Predict and Analyze Subsequent Memory
Noh, Eunho; Herzmann, Grit; Curran, Tim; de Sa, Virginia R.
2013-01-01
We show that it is possible to successfully predict subsequent memory performance based on single-trial EEG activity before and during item presentation in the study phase. Two-class classification was conducted to predict subsequently remembered vs. forgotten trials based on subjects' responses in the recognition phase. The overall accuracy across 18 subjects was 59.6% when combining pre- and during-stimulus information. The single-trial classification analysis provides a dimensionality reduction method to project the high-dimensional EEG data onto a discriminative space. These projections revealed novel findings in the pre- and during-stimulus periods related to levels of encoding. It was observed that pre-stimulus information (specifically oscillatory activity between 25–35 Hz) from 300 to 0 ms before stimulus presentation and during-stimulus alpha (7–12 Hz) information between 1000–1400 ms after stimulus onset distinguished between recollection and familiarity, while the during-stimulus alpha information and temporal information between 400–800 ms after stimulus onset mapped these two states to similar values. PMID:24064073
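A generic single-trial pipeline in the spirit of the analysis, band-limited power in the reported frequency bands and windows fed to a regularised linear classifier, can be sketched as follows; the epochs are synthetic, the classifier is a stand-in rather than the authors' method, and the sampling rate and window boundaries are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def bandpower(trials, fs, band, window):
    """Log band power per channel (sketch): band-pass filter each trial and average
    the squared signal inside the given sample window.
    trials: (n_trials, n_channels, n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, trials, axis=-1)
    lo, hi = window
    return np.log(np.mean(filtered[..., lo:hi] ** 2, axis=-1) + 1e-12)

fs = 256
rng = np.random.default_rng(7)
trials = rng.normal(size=(200, 32, 2 * fs))       # hypothetical 2-s epochs, 32 channels
labels = rng.integers(0, 2, size=200)             # subsequently remembered vs. forgotten

pre = bandpower(trials, fs, (25, 35), (0, fs // 2))    # early window, 25-35 Hz
alpha = bandpower(trials, fs, (7, 12), (fs, 2 * fs))   # later window, 7-12 Hz
X = np.hstack([pre, alpha])
acc = cross_val_score(LogisticRegression(max_iter=2000), X, labels, cv=5).mean()
```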
Pseudogap and conduction dimensionalities in high-Tc superconductors
NASA Astrophysics Data System (ADS)
Das Arulsamy, Andrew; Ong, P. C.; Ong, M. T.
2003-01-01
The nature of normal state charge-carriers' dynamics and the transition in conduction and gap dimensionalities between 2D and 3D for YBa2Cu3O7-δ and Bi2Sr2Ca1-xYxCu2O8 high-Tc superconductors were described by computing and fitting the resistivity curves, ρ(T,δ,x). These were carried out by utilizing the 2D and 3D Fermi liquid and ionization energy (EI) based resistivity models coupled with the charge-spin separation based t-J model (Phys. Rev. B 64 (2001) 104516). ρ(T,δ,x) curves of Y123 and Bi2212 samples indicate the beginning of the transition of conduction and gap from 2D to 3D with reduction in oxygen content (7-δ) and Ca2+ content (1-x); as such, the c-axis pseudogap could be a different phenomenon from the superconductor and spin gaps. These models also indicate that the recent MgB2 superconductor is at least not of Y123 or Bi2212 type.
Design of two-dimensional zero reference codes with cross-entropy method.
Chen, Jung-Chieh; Wen, Chao-Kai
2010-06-20
We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs) in order to generate a zero reference signal for a grating measurement system and achieve absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combination optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desirable zero reference signal. Computer simulation results indicate that there are 15.38% and 14.29% reductions in the second maxima value for the 16x16 grating system with n(1)=64 and the 100x100 grating system with n(1)=300, respectively, where n(1) is the number of transparent pixels, compared with those of the conventional genetic algorithm.
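The CE search itself can be sketched as iteratively sampling candidate binary codes from independent Bernoulli probabilities, scoring them by the second maximum of the 2-D autocorrelation, and re-estimating the probabilities from the elite samples. The objective below ignores diffraction effects and the fixed number of transparent pixels, so it is a simplified illustration of the optimization loop rather than the paper's exact formulation.

```python
import numpy as np
from scipy.signal import correlate2d

def second_maximum(code):
    """Objective (sketch): the largest sidelobe of the 2-D autocorrelation,
    i.e. the second-highest value once the central zero-shift peak is masked."""
    ac = correlate2d(code, code, mode='full')
    ac[code.shape[0] - 1, code.shape[1] - 1] = -1.0   # mask the central peak
    return float(ac.max())

def ce_design(shape=(16, 16), n_samples=100, elite_frac=0.1, iters=40, seed=0):
    """Cross-entropy search for a low-sidelobe binary 2-D code (sketch)."""
    rng = np.random.default_rng(seed)
    p = np.full(shape, 0.25)                          # Bernoulli parameter per pixel
    n_elite = max(1, int(elite_frac * n_samples))
    best, best_score = None, np.inf
    for _ in range(iters):
        samples = (rng.random((n_samples,) + shape) < p).astype(float)
        scores = np.array([second_maximum(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]
        p = 0.7 * p + 0.3 * elite.mean(axis=0)        # smoothed probability update
        if scores.min() < best_score:
            best_score = scores.min()
            best = samples[np.argmin(scores)]
    return best, best_score

code, sidelobe = ce_design()
```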
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method that combines Fisher Vectors with Random Projection is introduced. The method reduces the trajectory features by projecting the high-dimensional trajectory descriptors into a low-dimensional subspace obtained by Random Projection, based on defining and analyzing a Gaussian mixture model, and a GMM-FV hybrid model is introduced to encode the trajectory feature vectors and reduce their dimension. The computational complexity is lowered by the Random Projection, which shrinks the Fisher coding vector. Finally, a linear SVM is used as the classifier to predict labels. We tested the algorithm on the UCF101 dataset and the KTH dataset. Compared with some existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
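A much-simplified sketch of the encoding chain, Gaussian random projection of trajectory descriptors, a diagonal-covariance GMM, and a first-order Fisher-vector-style encoding per video, is given below; the descriptors are random placeholders, the Fisher vector omits the second-order terms and the power/L2 normalisation, and the dense-trajectory extraction itself is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC

def fisher_vector(descriptors, gmm):
    """Simplified first-order Fisher vector (sketch): per Gaussian component,
    accumulate the posterior-weighted, variance-normalised deviations of the
    descriptors from the component mean."""
    q = gmm.predict_proba(descriptors)                      # (n, K) posteriors
    diff = descriptors[:, None, :] - gmm.means_[None]       # (n, K, d)
    diff /= np.sqrt(gmm.covariances_)[None]                 # diagonal covariances
    fv = np.einsum('nk,nkd->kd', q, diff) / (len(descriptors) * np.sqrt(gmm.weights_)[:, None])
    return fv.ravel()

rng = np.random.default_rng(8)
videos = [rng.normal(size=(rng.integers(200, 400), 426)) for _ in range(20)]  # hypothetical IDT descriptors
labels = rng.integers(0, 3, size=20)

rp = GaussianRandomProjection(n_components=64, random_state=0).fit(np.vstack(videos))
proj = [rp.transform(v) for v in videos]
gmm = GaussianMixture(n_components=16, covariance_type='diag', random_state=0).fit(np.vstack(proj))
X = np.array([fisher_vector(v, gmm) for v in proj])
clf = LinearSVC(max_iter=5000).fit(X, labels)                # linear SVM on the encoded videos
```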
Effects of band selection on endmember extraction for forestry applications
NASA Astrophysics Data System (ADS)
Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis
2014-10-01
In spectral unmixing theory, data reduction techniques play an important role as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches for dimensionality reduction. Feature extraction techniques reduce the dimensionality of the hyperspectral data by applying transforms to the data. Feature selection techniques retain the physical meaning of the data by selecting a set of bands from the input hyperspectral dataset which mainly contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of the existing state-of-the-art dimensionality reduction methods set criteria on the spectral information derived from the whole wavelength range in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with the data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations in specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc., can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc., are considered as the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones. More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. Experiments comprise the application of well-known signal subspace estimation and endmember extraction methods to hyperspectral imagery of a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using the selected bands.
Huayamave, Victor; Rose, Christopher; Serra, Sheila; Jones, Brendan; Divo, Eduardo; Moslehy, Faissal; Kassab, Alain J; Price, Charles T
2015-07-16
A physics-based computational model of neonatal Developmental Dysplasia of the Hip (DDH) following treatment with the Pavlik Harness (PV) was developed to obtain muscle force contribution in order to elucidate biomechanical factors influencing the reduction of dislocated hips. Clinical observation suggests that reduction occurs in deep sleep involving passive muscle action. Consequently, a set of five (5) adductor muscles were identified as mediators of reduction using the PV. A Fung/Hill-type model was used to characterize muscle response. Four grades (1-4) of dislocation were considered, with one (1) being a low subluxation and four (4) a severe dislocation. A three-dimensional model of the pelvis-femur lower limb of a representative 10-week-old female was generated based on CT scans with the aid of anthropomorphic scaling of anatomical landmarks. The model was calibrated to achieve equilibrium at 90° flexion and 80° abduction. The hip was computationally dislocated according to the grade under investigation, the femur was restrained to move in an envelope consistent with PV restraints, and the dynamic response under passive muscle action and the effect of gravity was resolved. Model results with an anteversion angle of 50° show successful reduction for Grades 1-3, while Grade 4 failed to reduce with the PV. These results are consistent with a previous study based on a simplified anatomically-consistent synthetic model and with clinical reports of very low success of the PV for Grade 4. However, our model indicated that it is possible to achieve reduction of a Grade 4 dislocation by hyperflexion and the resultant external rotation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Extended symmetry analysis of generalized Burgers equations
NASA Astrophysics Data System (ADS)
Pocheketa, Oleksandr A.; Popovych, Roman O.
2017-10-01
Using enhanced classification techniques, we carry out the extended symmetry analysis of the class of generalized Burgers equations of the form $u_t + uu_x + f(t,x)u_{xx} = 0$. This enhances all the previous results on symmetries of these equations and includes the description of admissible transformations, Lie symmetries, Lie and nonclassical reductions, hidden symmetries, conservation laws, potential admissible transformations, and potential symmetries. The study is based on the fact that the class is normalized, and its equivalence group is finite-dimensional.
Multi-stack InAs/InGaAs Sub-monolayer Quantum Dots Infrared Photodetectors
2013-01-01
013110 (2013) Demonstration of high performance bias-selectable dual-band short-/mid-wavelength infrared photodetectors based on type-II InAs/GaSb ...been used for the growth of QD structures. These include the formation of self-assembled QD, for example, Stranski-Krastanov (SK) growth mode, atomic...confinement in SML-QD and the reduction in the amount of InAs used per layer of QD can help stack more layers in a 3-dimensional QD structure. Several
Entropic manifestations of topological order in three dimensions
NASA Astrophysics Data System (ADS)
Bullivant, Alex; Pachos, Jiannis K.
2016-03-01
We evaluate the entanglement entropy of exactly solvable Hamiltonians corresponding to general families of three-dimensional topological models. We show that the modification to the entropic area law due to three-dimensional topological properties is richer than the two-dimensional case. In addition to the reduction of the entropy caused by a nonzero vacuum expectation value of contractible loop operators, a topological invariant emerges that increases the entropy if the model consists of nontrivially braiding anyons. As a result the three-dimensional topological entanglement entropy provides only partial information about the two entropic topological invariants.
Ensemble based on static classifier selection for automated diagnosis of Mild Cognitive Impairment.
Nanni, Loris; Lumini, Alessandra; Zaffonato, Nicolò
2018-05-15
Alzheimer's disease (AD) is the most common cause of neurodegenerative dementia in the elderly population. Scientific research is very active in the challenge of designing automated approaches to achieve an early and certain diagnosis. Recently, an international competition among AD predictors has been organized: "A Machine learning neuroimaging challenge for automated diagnosis of Mild Cognitive Impairment" (MLNeCh). This competition is based on pre-processed sets of T1-weighted Magnetic Resonance Images (MRI) to be classified into four categories: stable AD, individuals with MCI who converted to AD, individuals with MCI who did not convert to AD, and healthy controls. In this work, we propose a method to perform early diagnosis of AD, which is evaluated on the MLNeCh dataset. Since the automatic classification of AD is based on feature vectors of high dimensionality, different techniques of feature selection/reduction are compared in order to avoid the curse-of-dimensionality problem; the classification method is then obtained as the combination of Support Vector Machines trained on different clusters of data extracted from the whole training set. The multi-classifier approach proposed in this work outperforms all the stand-alone methods tested in our experiments. The final ensemble is based on a set of classifiers, each trained on a different cluster of the training data. The proposed ensemble has the great advantage of performing well using a very reduced version of the data (the reduction factor is more than 90%). The MATLAB code for the ensemble of classifiers will be made publicly available to other researchers for future comparisons. Copyright © 2017 Elsevier B.V. All rights reserved.
Hou, Chuantao; Yang, Dapeng; Liang, Bo; Liu, Aihua
2014-06-17
The power output and stability of enzyme-based biofuel cells (BFCs) are greatly dependent on the properties of both the biocathode and bioanode, which may be adapted for portable power production. In this paper, a novel, highly uniform three-dimensional (3D) macroporous gold (MP-Au) film was prepared by heating gold "supraspheres", which were synthesized by a bottom-up protein-templating approach, followed by modification of the MP-Au film with laccase by covalent immobilization. The as-prepared laccase/MP-Au biocathode exhibited an onset potential of 0.62 V versus saturated calomel electrode (SCE, or 0.86 V vs NHE, normal hydrogen electrode) toward O2 reduction and a high catalytic current of 0.61 mA cm(-2). On the other hand, bacteria surface-displaying a mutated glucose dehydrogenase (GDH-bacteria) were used to improve the stability of glucose oxidation at the bioanode. The as-assembled membraneless glucose/O2 fuel cell showed a high power output of 55.8 ± 2.0 μW cm(-2) and an open circuit potential of 0.80 V, owing to the improved electrocatalysis toward O2 reduction at the laccase/MP-Au biocathode. Moreover, the BFC retained 84% of its maximal power density even after continuous operation for 55 h because of the high stability of the surface-displayed GDH mutant toward glucose oxidation. Our findings may be promising for the development of more efficient glucose BFCs for portable battery or self-powered device applications.
Awan, Muaaz Gul; Saeed, Fahad
2017-08-01
Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical to use as a pre-processing step due to their high time-complexity. In this paper, we present a GPU-based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open-source at GitHub at the following link: https://github.com/pcdslab/G-MSR.
NASA Astrophysics Data System (ADS)
Szopa, S.; Aumont, B.; Madronich, S.
2005-09-01
The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic carbon (VOC) compounds. The procedure is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation (see companion paper Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of resulting errors based on direct comparison between the reduced and full schemes.
The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) use of operators, based on the redundancy of the reaction sequences involved in the VOC oxidation, (ii) grouping of primary species having similar reactivities into surrogate species and (iii) grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, this being small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.
Ji, Shuiwang
2013-07-11
The structured organization of cells in the brain plays a key role in its functional efficiency. This delicate organization is the consequence of unique molecular identity of each cell gradually established by precise spatiotemporal gene expression control during development. Currently, studies on the molecular-structural association are beginning to reveal how the spatiotemporal gene expression patterns are related to cellular differentiation and structural development. In this article, we aim at a global, data-driven study of the relationship between gene expressions and neuroanatomy in the developing mouse brain. To enable visual explorations of the high-dimensional data, we map the in situ hybridization gene expression data to a two-dimensional space by preserving both the global and the local structures. Our results show that the developing brain anatomy is largely preserved in the reduced gene expression space. To provide a quantitative analysis, we cluster the reduced data into groups and measure the consistency with neuroanatomy at multiple levels. Our results show that the clusters in the low-dimensional space are more consistent with neuroanatomy than those in the original space. Gene expression patterns and developing brain anatomy are closely related. Dimensionality reduction and visual exploration facilitate the study of this relationship.
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang
2017-12-01
Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather and noise, cause considerable diversity in the visual characteristics of vehicle color, so vehicle color recognition in complex environments has been a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as the car windows, wheels and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. The experimental results on a public dataset demonstrate that the proposed method can achieve superior recognition performance over state-of-the-art methods.
NASA Astrophysics Data System (ADS)
de Wit, Bernard; Reys, Valentin
2017-12-01
Supergravity with eight supercharges in a four-dimensional Euclidean space is constructed at the full non-linear level by performing an off-shell time-like reduction of five-dimensional supergravity. The resulting four-dimensional theory is realized off-shell with the Weyl, vector and tensor supermultiplets and a corresponding multiplet calculus. Hypermultiplets are included as well, but they are themselves only realized with on-shell supersymmetry. We also briefly discuss the non-linear supermultiplet. The off-shell reduction leads to a full understanding of the Euclidean theory. A complete multiplet calculus is presented along the lines of the Minkowskian theory. Unlike in Minkowski space, chiral and anti-chiral multiplets are real and supersymmetric actions are generally unbounded from below. Precisely as in the Minkowski case, where one has different formulations of Poincaré supergravity upon introducing different compensating supermultiplets, one can also obtain different versions of Euclidean supergravity.
Reductive dissolution and reactive solute transport in a sewage-contaminated glacial outwash aquifer
Lee, R.W.; Bennett, P.C.
1998-01-01
Sewage effluent that contaminates shallow ground water typically contains reduced chemical species that consume dissolved oxygen, developing either a low-oxygen or an anaerobic geochemical environment. Based on the load of reduced chemical species discharged to shallow ground water and the amounts of reactants in the aquifer matrix, it should be possible to determine chemical processes in the aquifer and compare observed results to predicted ones. At the Otis Air Base research site (Cape Cod, Massachusetts), where sewage effluent has infiltrated the shallow aquifer since 1936, bacterially mediated processes such as nitrification, denitrification, manganese reduction, and iron reduction have been observed in the contaminant plume. In specific areas of the plume, dissolved manganese and iron have increased significantly where local geochemical conditions are favorable for reduction and transport of these constituents from the aquifer matrix. Dissolved manganese and iron concentrations ranged from 0.02 to 7.3 mg/L and 0.001 to 13.0 mg/L, respectively, for 21 samples collected from 1988 to 1989. Reduction of manganese and iron is linked to microbial oxidation of sewage carbon, producing bicarbonate and the dissolved metal ions as by-products. The calculated production and flux of CO2 through the unsaturated zone from manganese reduction in the aquifer was 0.035 g/m2/d (12% of the measured CO2 flux during winter). Manganese is limited in the aquifer, however. A one-dimensional, reaction-coupled transport model developed for the mildly reducing conditions in the sewage plume nearest the source beds showed that reduction, transport, and removal of manganese from the aquifer sediments should result in iron reduction where manganese has been depleted.
National Defense Center of Excellence for Industrial Metrology and 3D Imaging
2012-10-18
validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT...of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan...the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the
A technique for the reduction of banding in Landsat Thematic Mapper Images
Helder, Dennis L.; Quirk, Bruce K.; Hood, Joy J.
1992-01-01
The radiometric difference between forward and reverse scans in Landsat thematic mapper (TM) images, referred to as "banding," can create problems when enhancing the image for interpretation or when performing quantitative studies. Recent research has led to the development of a method that reduces the banding in Landsat TM data sets. It involves passing a one-dimensional spatial kernel over the data set. This kernel is developed from the statistics of the banding pattern and is based on the Wiener filter. It has been implemented on both a DOS-based microcomputer and several UNIX-based computer systems. The algorithm has successfully reduced the banding in several test data sets.
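The published kernel's exact statistics are not reproduced in the abstract; as a rough illustration of the underlying idea, the sketch below applies a one-dimensional Wiener-type gain along one image axis in the frequency domain. The function name, the crude signal-PSD estimate and the assumed inputs `band` and `noise_psd` are all illustrative, not taken from the paper.

```python
import numpy as np

def wiener_destripe_1d(band, noise_psd, axis=0):
    """Apply a 1-D Wiener-type filter along one image axis to suppress banding.

    band      : 2-D array (a single Landsat TM band), hypothetical input
    noise_psd : 1-D array with the estimated banding-noise power spectrum,
                same length as band.shape[axis]
    """
    n = band.shape[axis]
    spec = np.fft.fft(band, axis=axis)                        # spectrum along the scan axis
    signal_psd = np.mean(np.abs(spec) ** 2, axis=1 - axis)    # crude total-PSD estimate
    signal_psd = np.maximum(signal_psd - noise_psd, 0.0)      # remove the noise contribution
    gain = signal_psd / (signal_psd + noise_psd + 1e-12)      # Wiener gain H = S / (S + N)
    shape = [1, 1]
    shape[axis] = n
    filtered = np.fft.ifft(spec * gain.reshape(shape), axis=axis)
    return np.real(filtered)
```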
A bio-inspired device for drag reduction on a three-dimensional model vehicle.
Kim, Dongri; Lee, Hoon; Yi, Wook; Choi, Haecheon
2016-03-10
In this paper, we introduce a bio-mimetic device for the reduction of the drag force on a three-dimensional model vehicle, the Ahmed body (Ahmed et al 1984 SAE Technical Paper 840300). The device, called an automatic moving deflector (AMD), is inspired by the movement of secondary feathers on a bird's wing suction surface: secondary feathers pop up when massive separation occurs on the suction surface at high angles of attack, which increases the lift force at landing. The AMD is applied to the rear slanted surface of the Ahmed body to control the flow separation there. The angle of the slanted surface considered is 25°, at which the drag coefficient of the Ahmed body is highest. The wind tunnel experiment is conducted at Re H = 1.0 × 10(5)-3.8 × 10(5), based on the height of the Ahmed body (H) and the free-stream velocity (U ∞). Several AMDs of different sizes and materials are tested by measuring the drag force on the Ahmed body, and show drag reductions of up to 19%. The velocity and surface-pressure measurements show that the AMD starts to pop up when the pressure in the thin gap between the slanted surface and the AMD is much larger than that on the upper surface of the AMD. We also derive an empirical formula that predicts the critical free-stream velocity at which the AMD starts to operate. Finally, it is shown that the drag reduction by the AMD is mainly attributed to a pressure recovery on the slanted surface achieved by delaying the flow separation and suppressing the strength of the longitudinal vortices emanating from the lateral edges of the slanted surface.
Principal polynomial analysis.
Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus
2014-11-01
This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves instead of straight lines. In contrast to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding of the identified features in the input domain, where the data have physical meaning. It also allows the performance of dimensionality reduction to be evaluated in meaningful (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
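As a hedged sketch of the core construction (a curved first component fitted through univariate regressions), the snippet below extracts a leading PCA direction and regresses the orthogonal residual on the projection with a low-degree polynomial. The function name and degree are illustrative; this is not the authors' implementation.

```python
import numpy as np

def first_principal_polynomial(X, degree=2):
    """Fit one 'principal polynomial': a curve replacing the first principal
    component. X is an (n_samples, n_features) array (assumed input)."""
    Xc = X - X.mean(axis=0)
    # Leading direction from ordinary PCA (first right singular vector).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v1 = Vt[0]
    t = Xc @ v1                               # 1-D projection scores
    R = Xc - np.outer(t, v1)                  # residual orthogonal to v1
    # Univariate polynomial regression of each residual coordinate on t.
    coeffs = np.polyfit(t, R, deg=degree)     # shape (degree + 1, n_features)
    R_curve = sum(c * t[:, None] ** (degree - i) for i, c in enumerate(coeffs))
    return v1, coeffs, R - R_curve            # deflated residual for the next step
```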
Polymer-Layered Silicate Nanocomposites for Cryotank Applications
NASA Technical Reports Server (NTRS)
Miller, Sandi G.; Meador, Michael A.
2007-01-01
Previous composite cryotank designs have relied on the use of conventional composite materials to reduce microcracking and permeability. However, revolutionary advances in nanotechnology derived materials may enable the production of ultra-lightweight cryotanks with significantly enhanced durability and damage tolerance, as well as reduced propellant permeability. Layered silicate nanocomposites are especially attractive in cryogenic storage tanks based on results that have been reported for epoxy nanocomposite systems. These materials often exhibit an order of magnitude reduction in gas permeability when compared to the base resin. In addition, polymer-silicate nanocomposites have been shown to yield improved dimensional stability, strength, and toughness. The enhancement in material performance of these systems occurs without property trade-offs which are often observed in conventionally filled polymer composites. Research efforts at NASA Glenn Research Center have led to the development of epoxy-clay nanocomposites with 70% lower hydrogen permeability than the base epoxy resin. Filament wound carbon fiber reinforced tanks made with this nanocomposite had a five-fold lower helium leak rate than the corresponding tanks made without clay. The pronounced reduction observed with the tank may be due to flow induced alignment of the clay layers during processing. Additionally, the nanocomposites showed CTE reductions of up to 30%, as well as a 100% increase in toughness.
Reduced nonlinear prognostic model construction from high-dimensional data
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander
2017-04-01
Construction of a data-driven model of an evolution operator using universal approximating functions can only be statistically justified when the dimension of its phase space is small enough, especially in the case of short time series. At the same time, in many applications real-measured data is high-dimensional, e.g. it is space-distributed and multivariate in climate science. Therefore it is necessary to use efficient dimensionality reduction methods which are also able to capture key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to evolution operator construction which incorporates two key reduction steps. First, the data is decomposed into a set of certain empirical modes, such as standard empirical orthogonal functions or recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of corresponding principal components (PCs) is obtained. Then, a model of the evolution operator for the PCs is constructed, which maps a number of past states to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separately linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one allows us to more easily interpret the degree of nonlinearity, as well as to deal better with smooth PCs which can naturally occur in decompositions like NDM, as they provide a time scale separation. Results of application of the proposed method to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
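A minimal sketch of the two-stage reduction described above is given below, assuming a space-distributed time series `X`. A ridge regression on lag-embedded principal components stands in for the paper's Bayesian linear + neural-network + stochastic evolution operator, and the second (time-extended) reduction step is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def fit_reduced_prognostic_model(X, n_modes=5, n_lags=3, alpha=1.0):
    """Two-stage reduction sketch: (i) project the data onto a few empirical
    modes, (ii) fit an operator mapping lagged PCs to the current state.
    X is an (n_times, n_gridpoints) space-distributed series (assumed input)."""
    pca = PCA(n_components=n_modes)
    pcs = pca.fit_transform(X)                       # (n_times, n_modes)
    # Lag-embed the principal components (the 'time-extended space').
    inputs, targets = [], []
    for t in range(n_lags, len(pcs)):
        inputs.append(pcs[t - n_lags:t].ravel())     # the n_lags past states
        targets.append(pcs[t])                       # the current state
    # A linear ridge operator replaces the Bayesian linear + neural-network
    # + stochastic operator of the paper.
    operator = Ridge(alpha=alpha).fit(np.array(inputs), np.array(targets))
    return pca, operator
```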
Yu, Xiaowen; Sheng, Kaixuan; Shi, Gaoquan
2014-09-21
Electrochemical detection of dopamine plays an important role in medical diagnosis. In this paper, we report a three-dimensional (3D) interpenetrating graphene electrode fabricated by electrochemical reduction of graphene oxide for selective detection of dopamine. This electrochemically reduced graphene oxide (ErGO) electrode was used directly without further functionalization or blending with other functional materials. This electrode can efficiently lower the oxidation potential of ascorbic acid; thus, it is able to selectively detect dopamine in the presence of ascorbic acid and uric acid. The ErGO-based biosensor exhibited a linear response towards dopamine in the concentration range of 0.1-10 μM with a low detection limit of 0.1 μM. Furthermore, this electrode has good reproducibility and environmental stability, and can be used to analyse real samples.
NASA Astrophysics Data System (ADS)
Weatherford, Charles; Gebremedhin, Daniel
2016-03-01
A new and efficient way of evolving a solution to an ordinary differential equation is presented. A finite element method is used, where we expand in a convenient local basis set of functions that enforce both function and first-derivative continuity across the boundaries of each element. We also implement an adaptive step size choice for each element that is based on a Taylor series expansion. The method is applied to solve for the eigenpairs of the one-dimensional soft-Coulomb potential, and the hard-Coulomb limit is studied. The method is then used to calculate a numerical solution of the Kohn-Sham differential equation within the local density approximation, and is applied to the helium atom. Supported by the National Nuclear Security Agency, the Nuclear Regulatory Commission, and the Defense Threat Reduction Agency.
Prediction (early recognition) of emerging flu strain clusters
NASA Astrophysics Data System (ADS)
Li, X.; Phillips, J. C.
2017-08-01
Early detection of incipient dominant influenza strains is one of the key steps in the design and manufacture of an effective annual influenza vaccine. Here we report the most current results for pandemic H3N2 flu vaccine design. A 2006 model of dimensional reduction (compaction) of viral mutational complexity derives two-dimensional Cartesian mutational maps (2DMM) that exhibit an emergent dominant strain as a small and distinct cluster of as few as 10 strains. We show that recent extensions of this model can detect incipient strains one year or more in advance of their dominance in the human population. Our structural interpretation of our unexpectedly rich 2DMM involves sialic acid, and is based on nearly 6000 strains in a series of recent 3-year time windows. Vaccine effectiveness is predicted best by analyzing dominant mutational epitopes.
Integrand Reduction Reloaded: Algebraic Geometry and Finite Fields
NASA Astrophysics Data System (ADS)
Sameshima, Ray D.; Ferroglia, Andrea; Ossola, Giovanni
2017-01-01
The evaluation of scattering amplitudes in quantum field theory allows us to compare the phenomenological prediction of particle theory with the measurement at collider experiments. The study of scattering amplitudes, in terms of their symmetries and analytic properties, provides a theoretical framework to develop techniques and efficient algorithms for the evaluation of physical cross sections and differential distributions. Tree-level calculations have been known for a long time. Loop amplitudes, which are needed to reduce the theoretical uncertainty, are more challenging since they involve a large number of Feynman diagrams, expressed as integrals of rational functions. At one-loop, the problem has been solved thanks to the combined effect of integrand reduction, such as the OPP method, and unitarity. However, plenty of work is still needed at higher orders, starting with the two-loop case. Recently, integrand reduction has been revisited using algebraic geometry. In this presentation, we review the salient features of integrand reduction for dimensionally regulated Feynman integrals, and describe an interesting technique for their reduction based on multivariate polynomial division. We also show a novel approach to improve its efficiency by introducing finite fields. Supported in part by the National Science Foundation under Grant PHY-1417354.
An object-oriented data reduction system in Fortran
NASA Technical Reports Server (NTRS)
Bailey, J.
1992-01-01
A data reduction system for the AAO two-degree field project is being developed using an object-oriented approach. Rather than use an object-oriented language (such as C++), the system is written in Fortran and makes extensive use of existing subroutine libraries provided by the UK Starlink project. Objects are created using the extensible N-dimensional Data Format (NDF), which itself is based on the Hierarchical Data System (HDS). The software consists of a class library, with each class corresponding to a Fortran subroutine with a standard calling sequence. The methods of the classes provide operations on NDF objects at a similar level of functionality to the applications of conventional data reduction systems. However, because they are provided as callable subroutines, they can be used as building blocks for more specialist applications. The class library is not dependent on a particular software environment, though it can be used effectively in ADAM applications. It can also be used from standalone Fortran programs. It is intended to develop a graphical user interface for use with the class library to form the 2dF data reduction system.
NASA Astrophysics Data System (ADS)
Sun, Xi-wan; Guo, Zhen-yun; Huang, Wei; Li, Shi-bin; Yan, Li
2017-02-01
The drag reduction and thermal protection systems applied to hypersonic re-entry vehicles have attracted increasing attention, and several novel concepts have been proposed by researchers. In the current study, the influences of performance parameters on the drag and heat reduction efficiency of a combined novel cavity and opposing jet concept have been investigated numerically. The Reynolds-averaged Navier-Stokes (RANS) equations coupled with the SST k-ω turbulence model have been employed to calculate the surrounding flowfields, and the first-order spatially accurate upwind scheme appears to be more suitable for the three-dimensional flowfields after a grid-independence analysis. Different cases of performance parameters, namely jet operating conditions, freestream angle of attack and physical dimensions, are simulated based on the verification of the numerical method, and the effects on shock stand-off distance, drag force coefficient, surface pressure and heat flux distributions have been analyzed. This constitutes a baseline study for future multi-objective optimization of the combined cavity and opposing jet concept for drag reduction and thermal protection in hypersonic flows.
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.
Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua
2016-05-01
Big dimensional data is a growing trend that is emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality poses impeding effects on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
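A hedged sketch of the main idea, SVD-derived hidden nodes computed from random data subsets followed by a regularized least-squares solve for the output weights, is given below. All names, the subset size and the sigmoid activation are assumptions rather than the published FSVD-H-ELM specification.

```python
import numpy as np

def fsvd_h_elm_fit(X, Y, n_hidden=100, n_subsets=10, subset_size=1000, reg=1e-3, seed=0):
    """ELM sketch with SVD hidden nodes: hidden weights are right singular
    vectors of random data subsets; output weights solve a ridge problem.
    X is (n, d), Y is (n, c) one-hot targets (assumed inputs)."""
    rng = np.random.default_rng(seed)
    per_subset = max(1, n_hidden // n_subsets)
    W = []
    for _ in range(n_subsets):
        idx = rng.choice(len(X), size=min(len(X), subset_size), replace=False)
        sub = X[idx] - X[idx].mean(axis=0)
        _, _, Vt = np.linalg.svd(sub, full_matrices=False)
        W.append(Vt[:per_subset])                 # SVD hidden nodes from this subset
    W = np.vstack(W)                              # (n_hidden_total, d)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T)))          # sigmoid hidden-layer output
    beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)
    return W, beta

def fsvd_h_elm_predict(X, W, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T)))
    return H @ beta
```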
OBJECTIVE REDUCTION OF THE SPACE-TIME DOMAIN DIMENSIONALITY FOR EVALUATING MODEL PERFORMANCE
In the United States, photochemical air quality models are the principal tools used by governmental agencies to develop emission reduction strategies aimed at achieving National Ambient Air Quality Standards (NAAQS). Before they can be applied with confidence in a regulatory sett...
Phase reduction approach to synchronisation of nonlinear oscillators
NASA Astrophysics Data System (ADS)
Nakao, Hiroya
2016-04-01
Systems of dynamical elements exhibiting spontaneous rhythms are found in various fields of science and engineering, including physics, chemistry, biology, physiology, and mechanical and electrical engineering. Such dynamical elements are often modelled as nonlinear limit-cycle oscillators. In this article, we briefly review phase reduction theory, which is a simple and powerful method for analysing the synchronisation properties of limit-cycle oscillators exhibiting rhythmic dynamics. Through phase reduction theory, we can systematically simplify the nonlinear multi-dimensional differential equations describing a limit-cycle oscillator to a one-dimensional phase equation, which is much easier to analyse. Classical applications of this theory, i.e. the phase locking of an oscillator to a periodic external forcing and the mutual synchronisation of interacting oscillators, are explained. Further, more recent applications of this theory to the synchronisation of non-interacting oscillators induced by common noise and the dynamics of coupled oscillators on complex networks are discussed. We also comment on some recent advances in phase reduction theory for noise-driven oscillators and rhythmic spatiotemporal patterns.
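Schematically, the reduction described here replaces the full limit-cycle dynamics by a single phase variable; a standard way to write the resulting one-dimensional phase equation for a weak perturbation p(t) is:

```latex
% Reduced one-dimensional phase equation for a limit-cycle oscillator with
% natural frequency \omega, phase sensitivity function Z(\theta) and a weak
% perturbation \epsilon\, p(t):
\dot{\theta}(t) = \omega + \epsilon\, Z(\theta) \cdot p(t)
```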
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
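The abstract does not spell out the selection criterion, so the sketch below is only a generic illustration: damped LSQR solves of the Tikhonov problem over a grid of regularization parameters, with the L-curve corner used as one common heuristic for picking the parameter. It is not the authors' LSQR-based procedure.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def tikhonov_lsqr_lcurve(A, b, lambdas):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 with damped LSQR for each lam
    and pick the L-curve corner (maximum curvature) as a heuristic optimum.
    A, b are the forward model and data; lambdas is a 1-D candidate grid."""
    res_norms, sol_norms, sols = [], [], []
    for lam in lambdas:
        x = lsqr(A, b, damp=lam)[0]              # damped LSQR == Tikhonov solution
        sols.append(x)
        res_norms.append(np.linalg.norm(A @ x - b))
        sol_norms.append(np.linalg.norm(x))
    # L-curve in log-log coordinates; corner approximated by maximum curvature.
    rho, eta = np.log(res_norms), np.log(sol_norms)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    curvature = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    best = int(np.argmax(curvature))
    return lambdas[best], sols[best]
```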
Chaos and Robustness in a Single Family of Genetic Oscillatory Networks
Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.
2014-01-01
Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common for systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions. We find that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs' correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly-regular periods likely have solely negative feedback. PMID:24667178
NASA Astrophysics Data System (ADS)
Yang, Liu; Huang, Jun; Yi, Mingxu; Zhang, Chaopu; Xiao, Qian
2017-11-01
A numerical study of the aerodynamic noise generated by a high-efficiency propeller is carried out. Based on RANS, a three-dimensional numerical simulation is performed to obtain the aerodynamic performance of the propeller. The result of the aerodynamic analysis is given as input to the acoustic calculation. The sound is calculated using the Farassat 1A formulation, which is derived from the Ffowcs Williams-Hawkings equation, and compared with wind tunnel data. The propeller is then modified for noise reduction by changing its geometrical parameters, such as diameter, chord width and pitch angle. The trends in the aerodynamic analysis data and the acoustic calculation results are compared and discussed for the different modification tasks. Meaningful conclusions are drawn on the noise reduction of the propeller.
A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.
Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu
2015-12-01
Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.
Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias
2018-05-16
There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
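A hedged sketch of the general workflow (not the authors' statistically defined cutoffs) is shown below: pre-softmax CNN features are embedded jointly with t-SNE, the class regions of the 2-D map are discretized with a nearest-neighbour rule, and randomly sampled test patches vote for a class. The feature arrays, labels and parameter values are assumed inputs.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier

def tsne_patch_vote(train_feats, train_labels, test_patch_feats, perplexity=30):
    """Embed pre-softmax CNN features with t-SNE, then classify one test image
    from the distribution of its randomly sampled patches in the 2-D map.

    train_feats      : (n_train, d) penultimate-layer features (assumed input)
    train_labels     : (n_train,) class labels
    test_patch_feats : (n_patches, d) features of patches from one test image
    """
    all_feats = np.vstack([train_feats, test_patch_feats])
    emb = TSNE(n_components=2, perplexity=perplexity, init="pca").fit_transform(all_feats)
    train_emb, test_emb = emb[: len(train_feats)], emb[len(train_feats):]
    # Discretize class regions of the 2-D map with a nearest-neighbour rule,
    # then let the sampled test patches vote.
    knn = KNeighborsClassifier(n_neighbors=15).fit(train_emb, train_labels)
    votes = knn.predict(test_emb)
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)], dict(zip(classes.tolist(), counts.tolist()))
```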
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yazhou; Yen, Clive H.; Hu, Yun Hang
2016-01-01
Three-dimensional (3D) graphene has been shown to be an advanced support for designing porous electrode materials due to its high specific surface area, large pore volume, and excellent electronic properties. However, the electrochemical properties of reported porous electrode materials still need further improvement. The current challenge is how to deposit desirable nanoparticles (NPs) with controllable structure, loading and composition in 3D graphene while maintaining high dispersion. Herein, we demonstrate a modified supercritical fluid (SCF) technique to address this issue by controlling the SCF system. Using this method, a series of Pt-based/3D graphene materials with ultrafine, highly dispersed multimetallic NPs of controllable composition were successfully synthesized. Specifically, the resultant Pt40Fe60/3D graphene showed a significant enhancement in electrocatalytic performance for the oxygen reduction reaction (ORR), including a factor of 14.2 enhancement in mass activity (1.70 A mgPt(-1)), a factor of 11.9 enhancement in specific activity (1.55 mA cm(-2)), and higher durability compared with the Pt/C catalyst. After careful comparison, the Pt40Fe60/3D graphene catalyst shows higher ORR activity than most of the reported similar 3D graphene-based catalysts. The successful synthesis of such attractive materials by this method also paves the way to developing 3D graphene for widespread applications.
Li, Shuang; Wu, Dongqing; Liang, Haiwei; Wang, Jinzuan; Zhuang, Xiaodong; Mai, Yiyong; Su, Yuezeng; Feng, Xinliang
2014-11-01
We demonstrate a general and efficient self-templating strategy towards transition metal-nitrogen containing mesoporous carbon/graphene nanosheets with a unique two-dimensional (2D) morphology and tunable mesoscale porosity. Owing to the well-defined 2D morphology, nanometer-scale thickness, high specific surface area, and the simultaneous doping of the metal-nitrogen compounds, the as-prepared catalysts exhibit excellent electrocatalytic activity and stability towards the oxygen reduction reaction (ORR) in both alkaline and acidic media. More importantly, such a self-templating approach towards two-dimensional porous carbon hybrids with diverse metal-nitrogen doping opens up new avenues to mesoporous heteroatom-doped carbon materials as electrochemical catalysts for oxygen reduction and hydrogen evolution, with promising applications in fuel cell and battery technologies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion and the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singular problem of the within-class scatter matrix for conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
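One common way to write an objective of this type, maximizing an L1 between-class dispersion over an L1 within-class dispersion, is shown below; the paper's exact weighting and its local-optimum update rule may differ.

```latex
% L1-norm discriminant objective (class means m_i, overall mean m,
% class sizes n_i, class sets C_i); one common formulation, possibly
% differing in weighting from the paper:
w^{*} = \arg\max_{\|w\|=1}
\frac{\sum_{i} n_i \left| w^{\top}(m_i - m) \right|}
     {\sum_{i} \sum_{x \in C_i} \left| w^{\top}(x - m_i) \right|}
```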
ERIC Educational Resources Information Center
Chen, Chwen Jen; Fauzy Wan Ismail, Wan Mohd
2008-01-01
The real-time interactive nature of three-dimensional virtual environments (VEs) makes this technology very appropriate for exploratory learning purposes. However, many studies have shown that the exploration process may cause cognitive overload that affects the learning of domain knowledge. This article reports a quasi-experimental study that…
Unimodular gravity and the lepton anomalous magnetic moment at one-loop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martín, Carmelo P., E-mail: carmelop@fis.ucm.es
We work out the one-loop contribution to the lepton anomalous magnetic moment coming from Unimodular Gravity. We use Dimensional Regularization and Dimensional Reduction to carry out the computations. In either case, we find that Unimodular Gravity gives rise to the same one-loop correction as that of General Relativity.
Local reduction of certain wave operators to one-dimensional form
NASA Technical Reports Server (NTRS)
Roe, Philip
1994-01-01
It is noted that certain common linear wave operators have the property that linear variation of the initial data gives rise to one-dimensional evolution in a plane defined by time and some direction in space. The analysis is given for operators arising in acoustics, electromagnetics, elastodynamics, and an abstract system.
Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.
2011-04-01
Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
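For orientation, a basic (unbiased-data) diffusion map is sketched below with NumPy; the reweighting needed to account for the umbrella-sampling restraining potentials, which is the paper's contribution, is deliberately omitted, and the kernel bandwidth `epsilon` is an assumed input.

```python
import numpy as np

def diffusion_map(X, epsilon, n_evecs=2, alpha=0.5):
    """Basic diffusion map: Gaussian kernel, density normalization (alpha),
    row-normalized Markov matrix, leading non-trivial eigenvectors.
    X is an (n_frames, n_coords) array of configurations (assumed input)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / epsilon)
    # alpha-normalization reduces the influence of non-uniform sampling density.
    q = K.sum(axis=1)
    K = K / np.outer(q**alpha, q**alpha)
    # Row-normalize to a Markov transition matrix and diagonalize it.
    P = K / K.sum(axis=1, keepdims=True)
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    idx = order[1 : n_evecs + 1]     # skip the trivial constant eigenvector
    return evals.real[idx], evecs.real[:, idx]
```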
Pal, P K; Kamble, Suresh S; Chaurasia, Ranjitkumar Rampratap; Chaurasia, Vishwajit Rampratap; Tiwari, Samarth; Bansal, Deepak
2014-06-01
The present study was done to evaluate the dimensional stability and surface quality of Type IV gypsum casts retrieved from disinfected elastomeric impression materials. In this in vitro study, impression material contaminated with known bacterial species was disinfected, and swab samples were then cultured to assess the reduction in the level of bacterial colonies. Changes in the surface detail reproduction of the impressions were assessed following disinfection. All three disinfectants used in the study produced a 100% reduction in colony-forming units of the test organisms. All three disinfectants produced complete disinfection and did not cause any deterioration in surface detail reproduction. How to cite the article: Pal PK, Kamble SS, Chaurasia RR, Chaurasia VR, Tiwari S, Bansal D. Evaluation of dimensional stability and surface quality of type IV gypsum casts retrieved from disinfected elastomeric impression materials. J Int Oral Health 2014;6(3):77-81.
Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K
2015-01-01
This paper aims to elucidate the complex etiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors to predict one's obesity status. However, these methods did not reveal how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.
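As context, a plain (non-parallel) MDR pass over factor pairs is sketched below: each genotype combination is labelled high or low risk from its case:control ratio and the pair is scored by training accuracy. Cross-validation and the parallelization that define pMDR are omitted, and all names are illustrative.

```python
import numpy as np
from itertools import combinations

def mdr_best_pair(factors, case):
    """Plain MDR sketch: for every pair of discretized factors, label each
    genotype combination high/low risk by its case:control ratio and score
    the pair by training classification accuracy.

    factors : (n_samples, n_factors) integer-coded factors (assumed input)
    case    : (n_samples,) binary outcome array, 1 = case (e.g. obese)
    """
    overall_ratio = case.sum() / max(1, (1 - case).sum())
    best = (None, -1.0)
    for i, j in combinations(range(factors.shape[1]), 2):
        pred = np.zeros(len(case))
        cells = set(map(tuple, factors[:, [i, j]]))
        for cell in cells:
            mask = (factors[:, i] == cell[0]) & (factors[:, j] == cell[1])
            cases, controls = case[mask].sum(), (1 - case[mask]).sum()
            # High-risk cell if its case:control ratio exceeds the overall ratio.
            pred[mask] = 1.0 if cases > overall_ratio * controls else 0.0
        acc = np.mean(pred == case)
        if acc > best[1]:
            best = ((i, j), acc)
    return best
```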
Design of a 3-dimensional visual illusion speed reduction marking scheme.
Liang, Guohua; Qian, Guomin; Wang, Ye; Yi, Zige; Ru, Xiaolei; Ye, Wei
2017-03-01
To determine which graphic and color combination for a 3-dimensional visual illusion speed reduction marking scheme presents the best visual stimulus, five parameters were designed. According to the Balanced Incomplete Blocks-Law of Comparative Judgment, three schemes, which produce strong stereoscopic impressions, were screened from the 25 initial design schemes of different combinations of graphics and colors. Three-dimensional experimental simulation scenes of the three screened schemes were created to evaluate four different effects according to a semantic analysis. The following conclusions were drawn: schemes with a red color are more effective than those without; the combination of red, yellow and blue produces the best visual stimulus; a larger area from the top surface and the front surface should be colored red; and a triangular prism should be painted as the graphic of the marking according to the stereoscopic impression and the coordination of graphics with the road.
Smith, Imogen; Silveirinha, Vasco; Stein, Jason L; de la Torre-Ubieta, Luis; Farrimond, Jonathan A; Williamson, Elizabeth M; Whalley, Benjamin J
2017-04-01
Differentiated human neural stem cells were cultured in an inert three-dimensional (3D) scaffold and, unlike two-dimensional (2D) but otherwise comparable monolayer cultures, formed spontaneously active, functional neuronal networks that responded reproducibly and predictably to conventional pharmacological treatments to reveal functional, glutamatergic synapses. Immunocytochemical and electron microscopy analysis revealed a neuronal and glial population, where markers of neuronal maturity were observed in the former. Oligonucleotide microarray analysis revealed substantial differences in gene expression conferred by culturing in a 3D vs a 2D environment. Notable and numerous differences were seen in genes coding for neuronal function, the extracellular matrix and cytoskeleton. In addition to producing functional networks, differentiated human neural stem cells grown in inert scaffolds offer several significant advantages over conventional 2D monolayers. These advantages include cost savings and improved physiological relevance, which make them better suited for use in the pharmacological and toxicological assays required for development of stem cell-based treatments and the reduction of animal use in medical research. Copyright © 2015 John Wiley & Sons, Ltd.
Dimensionality Reduction Through Classifier Ensembles
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)
1999-01-01
In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
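A hedged sketch of input decimation is given below: for each class, the k features most correlated with that class's indicator are retained and one base classifier is trained on them, with predictions combined by averaging class probabilities. The choice of logistic regression and of k are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def input_decimation_fit(X, y, k):
    """Input decimation sketch: for each class, keep the k features most
    correlated with that class's indicator and train one base classifier."""
    models = []
    classes = np.unique(y)
    for c in classes:
        target = (y == c).astype(float)
        # |correlation| of every feature with the class-c indicator
        corr = np.abs([np.corrcoef(X[:, j], target)[0, 1] for j in range(X.shape[1])])
        feats = np.argsort(-np.nan_to_num(corr))[:k]
        clf = LogisticRegression(max_iter=1000).fit(X[:, feats], y)
        models.append((feats, clf))
    return classes, models

def input_decimation_predict(X, classes, models):
    # Average the class-probability outputs of the ensemble members.
    probs = np.mean([clf.predict_proba(X[:, feats]) for feats, clf in models], axis=0)
    return classes[np.argmax(probs, axis=1)]
```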
A manifold learning approach to target detection in high-resolution hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led towards a focus on target detection, and to the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
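For reference, a plain LLE reduction of a hyperspectral cube with scikit-learn is sketched below; the adaptive graph construction and the artificially induced target manifold described in this work are not reproduced, and the parameter values are illustrative.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def embed_hyperspectral_pixels(cube, n_neighbors=20, n_components=3):
    """Reduce a hyperspectral cube with standard locally linear embedding.

    cube : (rows, cols, d) array of d-band spectra (assumed input).
    The adaptive graph and induced target manifold described above are
    not reproduced here; this is plain LLE on the pixel spectra.
    """
    rows, cols, d = cube.shape
    pixels = cube.reshape(-1, d)                    # one spectrum per row
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
    embedded = lle.fit_transform(pixels)            # (rows*cols, n_components)
    return embedded.reshape(rows, cols, n_components)
```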
Reusing remediated CCA-treated wood
Carol A. Clausen
2003-01-01
Options for recycling and reusing chromated-copper-arsenate- (CCA) treated material include dimensional lumber and round wood size reduction, composites, and remediation. Size reduction by remilling, shaving, or resawing CCA-treated wood reduces the volume of landfilled waste material and provides many options for reusing used treated wood. Manufacturing composite...
Reducing democratic type II supergravity on SU(3) × SU(3) structures
NASA Astrophysics Data System (ADS)
Cassani, Davide
2008-06-01
Type II supergravity on backgrounds admitting SU(3) × SU(3) structure and general fluxes is considered. Using the generalized geometry formalism, we study dimensional reductions leading to N = 2 gauged supergravity in four dimensions, possibly with tensor multiplets. In particular, a geometric formula for the full N = 2 scalar potential is given. Then we implement a truncation ansatz, and derive the complete N = 2 bosonic action. While the NSNS contribution is obtained via a direct dimensional reduction, the contribution of the RR sector is computed starting from the democratic formulation and demanding consistency with the reduced equations of motion.
Symmetry Reductions and Group-Invariant Radial Solutions to the n-Dimensional Wave Equation
NASA Astrophysics Data System (ADS)
Feng, Wei; Zhao, Songlin
2018-01-01
In this paper, we derive explicit group-invariant radial solutions to a class of wave equation via symmetry group method. The optimal systems of one-dimensional subalgebras for the corresponding radial wave equation are presented in terms of the known point symmetries. The reductions of the radial wave equation into second-order ordinary differential equations (ODEs) with respect to each symmetry in the optimal systems are shown. Then we solve the corresponding reduced ODEs explicitly in order to write out the group-invariant radial solutions for the wave equation. Finally, several analytical behaviours and smoothness of the resulting solutions are discussed.
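For context, the radial ansatz u = u(t, r) with r = |x| reduces the n-dimensional wave equation to the second-order form below (a standard identity, stated here only as background rather than taken from the paper):

```latex
% Radial form of the n-dimensional wave equation u_{tt} = \Delta u
% for solutions u = u(t, r) with r = |x|:
u_{tt} = u_{rr} + \frac{n-1}{r}\, u_r
```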
Mechanism of polymer drag reduction using a low-dimensional model.
Roy, Anshuman; Morozov, Alexander; van Saarloos, Wim; Larson, Ronald G
2006-12-08
Using a retarded-motion expansion to describe the polymer stress, we derive a low-dimensional model to understand the effects of polymer elasticity on the self-sustaining process that maintains the coherent wavy streamwise vortical structures underlying wall-bounded turbulence. Our analysis shows that at small Weissenberg numbers, Wi, elasticity enhances the coherent structures. At higher Wi, however, polymer stresses suppress the streamwise vortices (rolls) by calming down the instability of the streaks that regenerates the rolls. We show that this behavior can be attributed to the nonmonotonic dependence of the biaxial extensional viscosity on Wi, and identify it as the key rheological property controlling drag reduction.
Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas
2017-12-01
In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM) is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.
Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua
2016-11-01
This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize the entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.
Shape component analysis: structure-preserving dimension reduction on biological shape spaces.
Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge
2016-03-01
Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab The implementation was made in MATLAB and supported on MS Windows, Linux and Mac OS. geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Formation of dominant mode by evolution in biological systems
NASA Astrophysics Data System (ADS)
Furusawa, Chikara; Kaneko, Kunihiko
2018-04-01
A reduction of high-dimensional phenotypic states to a few degrees of freedom is essential to understand biological systems. Here, we show that evolutionary robustness causes such a reduction, which restricts possible phenotypic changes in response to a variety of environmental conditions. First, global protein expression changes in Escherichia coli after various environmental perturbations were shown to be proportional across components for different types of environmental conditions. To examine whether such dimension reduction is a result of evolution, we analyzed a cell model with a huge number of components that reproduces itself via a catalytic reaction network, and confirmed that common proportionality in the concentrations of all components is shaped through evolutionary processes. We found that the changes in concentration across all components in response to environmental and evolutionary changes are constrained to changes along a one-dimensional major axis, within a huge-dimensional state space. On the basis of these observations, we propose a theory in which such constraints on phenotypic changes are achieved both by evolutionary robustness and by plasticity, and formulate this proposition in terms of dynamical systems. Accordingly, broad experimental and numerical results on phenotypic changes caused by evolution and adaptation are coherently explained.
Yang, Jing; Ye, Shu-jun; Wu, Ji-chun
2011-05-01
This paper studied the influence of bioclogging on the permeability of saturated porous media. Laboratory hydraulic tests were conducted in a two-dimensional C190 sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) to investigate the growth of mixed microorganisms (KB-1) and the influence of biofilm on the permeability of saturated porous media under nutrient-rich conditions. Biomass distributions in the water and on the sand in the cell were measured by protein analysis. The biofilm distribution on the sand was observed by confocal laser scanning microscopy. Permeability was measured by hydraulic tests. The biomass levels measured in water and on the sand increased with time and were highest at the bottom of the cell, where the biofilm on the sand was thicker. The results of the hydraulic tests demonstrated that the permeability after biofilm growth was, on average, an estimated 12% of the initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model showed reductions in permeability of 2-5 orders of magnitude. Clement's model predicted 3%-98% of the initial value. Seki's model could not be applied in this study. In conclusion, biofilm growth clearly decreased the permeability of two-dimensional saturated porous media; however, the reduction was much smaller than that estimated under one-dimensional conditions. Additionally, for two-dimensional saturated porous media under nutrient-rich conditions, Seki's model could not be applied, Taylor's model predicted larger reductions, and the results of Clement's model were closest to the hydraulic test results.
Reduced graphene oxide aerogel with high-rate supercapacitive performance in aqueous electrolytes
NASA Astrophysics Data System (ADS)
Si, Weijiang; Wu, Xiaozhong; Zhou, Jin; Guo, Feifei; Zhuo, Shuping; Cui, Hongyou; Xing, Wei
2013-05-01
Reduced graphene oxide aerogel (RGOA) is successfully synthesized through a simultaneous self-assembly and reduction process using hypophosphorous acid and I₂ as the reductant. Nitrogen sorption analysis shows that the Brunauer-Emmett-Teller surface area of RGOA reaches as high as 830 m² g⁻¹, the largest value yet reported for graphene-based aerogels obtained through the simultaneous self-assembly and reduction strategy. The as-prepared RGOA is characterized by a variety of means such as scanning electron microscopy, transmission electron microscopy, X-ray diffraction, Raman spectroscopy, and X-ray photoelectron spectroscopy. Electrochemical tests show that RGOA exhibits high-rate supercapacitive performance in aqueous electrolytes. The specific capacitance of RGOA is calculated to be 211.8 and 278.6 F g⁻¹ in KOH and H₂SO₄ electrolytes, respectively. The excellent supercapacitive performance of RGOA is ascribed to its three-dimensional structure and the existence of oxygen-containing groups.
Integrated Model Reduction and Control of Aircraft with Flexible Wings
NASA Technical Reports Server (NTRS)
Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.
2013-01-01
This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite-dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed using weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, designed based on the reduced-order model, is developed using an output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated on the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.
Combined Aero and Underhood Thermal Analysis for Heavy Duty Trucks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vegendla, Prasad; Sofu, Tanju; Saha, Rohit
2017-01-31
Aerodynamic analysis of a medium-duty delivery truck was performed to achieve vehicle design optimization. Three-dimensional CFD simulations were carried out for several improved designs, with a detailed external component analysis of wheel covers, side skirts, roof fairings, and rounded trailer corners. The overall averaged aerodynamic drag reduction through the design modifications was up to 22.3% from aerodynamic considerations alone, which is equivalent to 11.16% fuel savings. The main identified fuel efficiencies were based on second-generation devices, including wheel covers, side skirts, roof fairings, and rounded trailer corners. The important findings of this work were: (i) the optimum curvature radius of the rounded trailer edges was found to be 125 mm, with an arc length of 196.3 mm; (ii) aerodynamic drag reduction increases as the clearance between the side skirts, the wheels and the ground decreases; and (iii) aerodynamic drag reduction increases with extension of the front bumper toward the ground.
Bath for electrolytic reduction of alumina and method therefor
Brown, Craig W.; Brooks, Richard J.; Frizzle, Patrick B.; Juric, Drago D.
2001-07-10
An electrolytic bath for use during the electrolytic reduction of alumina to aluminum. The bath comprises a molten electrolyte having the following ingredients: (a) AlF₃ and at least one salt selected from the group consisting of NaF, KF, and LiF; and (b) about 0.004 wt. % to about 0.2 wt. %, based on total weight of the molten electrolyte, of at least one transition metal or at least one compound of the metal or both. The compound may be, for example, a fluoride, oxide, or carbonate. The metal can be nickel, iron, copper, cobalt, or molybdenum. The bath can be employed in a combination that includes a vessel for containing the bath and at least one non-consumable anode and at least one dimensionally stable cathode in the bath. Employing the bath of the present invention during electrolytic reduction of alumina to aluminum can improve the wetting of aluminum on a cathode by reducing or eliminating the formation of non-metallic deposits on the cathode.
2013-01-01
Background The structured organization of cells in the brain plays a key role in its functional efficiency. This delicate organization is the consequence of unique molecular identity of each cell gradually established by precise spatiotemporal gene expression control during development. Currently, studies on the molecular-structural association are beginning to reveal how the spatiotemporal gene expression patterns are related to cellular differentiation and structural development. Results In this article, we aim at a global, data-driven study of the relationship between gene expressions and neuroanatomy in the developing mouse brain. To enable visual explorations of the high-dimensional data, we map the in situ hybridization gene expression data to a two-dimensional space by preserving both the global and the local structures. Our results show that the developing brain anatomy is largely preserved in the reduced gene expression space. To provide a quantitative analysis, we cluster the reduced data into groups and measure the consistency with neuroanatomy at multiple levels. Our results show that the clusters in the low-dimensional space are more consistent with neuroanatomy than those in the original space. Conclusions Gene expression patterns and developing brain anatomy are closely related. Dimensionality reduction and visual exploration facilitate the study of this relationship. PMID:23845024
Manufacture of astroloy turbine disk shapes by hot isostatic pressing, volume 1
NASA Technical Reports Server (NTRS)
Eng, R. D.; Evans, D. J.
1978-01-01
The Materials in Advanced Turbine Engines project was conducted to demonstrate container technology and establish manufacturing procedures for fabricating ultrasonic disk shapes directly by Hot Isostatic Pressing (HIP) of low-carbon Astroloy. The HIP processing procedures, including powder manufacture and handling, container design and fabrication, and HIP consolidation techniques, were established by manufacturing five HIP disks. Based upon dimensional analysis of the first three disks, container technology was refined by modifying container tooling, which resulted in closer conformity of the HIP surfaces to the sonic shape. The microstructure, chemistry and mechanical properties of two HIP low-carbon Astroloy disks were characterized. One disk was subjected to a ground-based experimental engine test, and the results for HIP low-carbon Astroloy were analyzed and compared to conventionally forged Waspaloy. The mechanical properties of direct HIP low-carbon Astroloy exceeded all property goals, and the objectives of reduced material input weight and reduced cost were achieved.
Thermal energy harvesting and solar energy conversion utilizing carbon-based nanomaterials
NASA Astrophysics Data System (ADS)
McCarthy, Patrick T.
This dissertation provides details of carbon-based nanomaterial fabrication for applications in energy harvesting and generation. As energy demands increase, and concerns about mankind's environmental impact grow, alternative methods of generating energy will be widely researched. Carbon-based nanomaterials may be effective in such applications as their fabrication is often inexpensive and they have highly desirable electrical, mechanical, and thermal properties. Synthesis and characterization of carbon nanotube thermal interfaces on gadolinium foils is described herein. Total thermal interface resistances of carbon nanotube coated gadolinium were measured using a one-dimensional reference calorimeter technique, and the effect of hydrogen embrittlement on the magnetic properties of gadolinium foils is discussed. The samples generated in this study consistently showed reduced total thermal interface resistances of 55-70% compared to bare gadolinium. Characterization of gadolinium foils in a cooling device called a magneto thermoelectric generator was also performed. A gadolinium shuttle drives the device as it transitions between ferromagnetic and paramagnetic states. Reduced interface resistances from the carbon nanotube arrays led to increased shuttle frequency and effective heat transfer coefficients. Detailed theoretical derivations for electron emission during thermal and photo-excitation are provided for both three-dimensional and two-dimensional materials. The derived theories were fitted to experimental data from variable-temperature photoemission studies of potassium-intercalated graphitic nanopetals. A work function reduction from approximately 4.5 eV to 2-3 eV resulted from potassium intercalation and adsorption. While changes in the electron energy distribution shape and intensity were significant within 310-680 K, potassium-intercalated graphitic petals demonstrate very high thermal stability after heating to nearly 1000 K. Boron nitride modification of the nanopetals was performed in an effort to minimize deintercalation of potassium from the nanopetal lattice; while multiple work functions were present within the electron energy distribution, large reductions in emission intensity took place above 580 K. Finally, a device for measuring the current density during photoemission was also developed, and photoemission induced by a solar simulator at room temperature produced currents on the order of 1 nA/cm², resulting in a quantum efficiency of approximately 8.0×10⁻⁸ electrons emitted per photon of illumination.
Strong anti-gravity: Life in the shock wave
NASA Astrophysics Data System (ADS)
Fabbrichesi, Marco; Roland, Kaj
1992-12-01
Strong anti-gravity is the vanishing of the net force between two massive particles at rest, to all orders in Newton's constant. We study this phenomenon and show that it occurs in any effective theory of gravity which is obtained from a higher-dimensional model by compactification on a manifold with flat directions. We find the exact solution of the Einstein equations in the presence of a point-like source of strong anti-gravity by dimensional reduction of a shock-wave solution in the higher-dimensional model.
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat Thematic Mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is shown experimentally as well that, for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates, as expected.
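As a rough, assumed illustration of the KL (principal component) expansion named above, the sketch below reduces band vectors to three components and reports the fraction of cumulative eigenvalues retained; the TM band values are simulated stand-ins.

```python
import numpy as np

def kl_transform(pixels, n_components=3):
    """Discrete Karhunen-Loeve (principal component) transform of band vectors.
    `pixels` is an (n_pixels, n_bands) array, e.g. the six reflective TM bands."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    basis = eigvecs[:, order[:n_components]]
    explained = eigvals[order[:n_components]].sum() / eigvals.sum()
    return (pixels - mean) @ basis, explained

# toy stand-in for six reflective TM bands
bands = np.random.default_rng(0).standard_normal((10000, 6))
reduced, frac = kl_transform(bands, n_components=3)
print(reduced.shape, round(frac, 3))
```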
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high-fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by reduction errors that are upper-bounded over the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bound by the error estimated from the fitting residual. Dimensionality reduction techniques employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower-dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends the theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high-fidelity model or with other models of lower fidelity, can be assessed; this provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus is on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux.
The developed methods will be applied to representative assembly level calculations, where the size of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.).
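A minimal sketch of the snapshot-based dimensionality reduction idea, under the simplifying assumption that the active subspace is taken from an SVD of centered snapshots and its adequacy is judged by the captured snapshot variation; the probabilistic error bounds developed in the thesis are not reproduced here, and the snapshot data are synthetic.

```python
import numpy as np

def active_subspace(snapshots, tol=0.99):
    """Build a reduced basis from randomized snapshots and report how much of
    the snapshot variation it captures (a proxy for the reduction error)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    rank = int(np.searchsorted(energy, tol)) + 1      # smallest rank meeting tol
    return U[:, :rank], energy[rank - 1]

# snapshots: each column is one realization of a high-dimensional state
# (e.g. a discretized cross-section or flux field)
snaps = np.random.default_rng(0).standard_normal((5000, 60))
basis, captured = active_subspace(snaps, tol=0.99)
print(basis.shape[1], round(captured, 4))
```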
Augustin, Moritz; Ladenbauer, Josef; Baumann, Fabian; Obermayer, Klaus
2017-06-01
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations, using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator; the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best-performing model is based on the spectral decomposition. The low-dimensional models also reproduce well the stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the model variants. We have therefore made available implementations that allow numerical integration of the low-dimensional spike rate models, as well as of the Fokker-Planck partial differential equation, in efficient ways for arbitrary model parametrizations as open source software. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
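For orientation, the quantity the reduced models target, the population spike rate, can be generated from a toy adaptive leaky integrate-and-fire population as sketched below. This is a generic, assumed stand-in (not the adaptive exponential model, the Fokker-Planck description, or the derived low-dimensional ODEs of the paper), with purely illustrative parameter values.

```python
import numpy as np

def simulate_population(n=1000, T=1.0, dt=1e-4, mu=12.0, sigma=6.0, seed=0):
    """Euler-Maruyama simulation of an uncoupled population of adaptive leaky
    integrate-and-fire neurons driven by fluctuating input; returns the
    population spike rate time series."""
    rng = np.random.default_rng(seed)
    tau_m, tau_w, b = 0.02, 0.2, 2.0           # membrane/adaptation constants (illustrative)
    v_th, v_reset = 1.0, 0.0
    v = rng.uniform(v_reset, v_th, n)
    w = np.zeros(n)
    steps = int(T / dt)
    rate = np.zeros(steps)
    for t in range(steps):
        noise = rng.standard_normal(n)
        v += dt * (-(v / tau_m) + mu - w) + sigma * np.sqrt(dt) * noise
        w += dt * (-w / tau_w)
        spiked = v >= v_th
        rate[t] = spiked.sum() / (n * dt)      # instantaneous population rate (Hz)
        v[spiked] = v_reset
        w[spiked] += b                          # spike-triggered adaptation
    return rate

r = simulate_population()
print(round(r.mean(), 2), "spikes/s (population average)")
```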
Reduced-order model based feedback control of the modified Hasegawa-Wakatani model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goumiri, I. R.; Rowley, C. W.; Ma, Z.
2013-04-15
In this work, the development of model-based feedback control that stabilizes an unstable equilibrium is obtained for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, a balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low dimensional model of the linearized MHW equation. Then, a model-based feedback controller is designed for the reduced order model using linear quadratic regulators. Finally, a linear quadratic Gaussian controller which is more resistant to disturbances is deduced. The controller is applied on the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
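A hedged sketch of the controller-design step: given a reduced-order linear model such as one produced by balanced truncation, an LQR gain can be computed from the continuous-time algebraic Riccati equation. The 3-state system below is an illustrative stand-in, not the linearized MHW model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K for the reduced-order linear model
    x_dot = A x + B u, minimizing the usual quadratic cost."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# illustrative 3-state reduced model with one unstable mode (stand-in for a
# balanced-truncation ROM of a linearized plasma model)
A = np.array([[0.2, 1.0, 0.0],
              [0.0, -0.5, 1.0],
              [0.0, 0.0, -1.0]])
B = np.array([[0.0], [0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(3), R=np.array([[1.0]]))
A_cl = A - B @ K
print(np.linalg.eigvals(A_cl).real.max() < 0)  # closed loop is stable
```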
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.
Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi
2017-09-22
DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction methods are based on machine learning, and their accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from the combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors obtained from a combination of SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over the single features.
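A minimal, assumed sketch of the workflow: pair-frequency features standing in for K-Skip-N-Grams, a support vector machine classifier, and 10-fold cross-validation. The sequences and labels are synthetic, and the exact feature definitions of the paper are not reproduced.

```python
from itertools import product
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"

def skip_gram_features(seq, skip=1):
    """Frequency of amino-acid pairs separated by `skip` residues
    (a minimal stand-in for K-Skip-N-Gram style features)."""
    pairs = ["".join(p) for p in product(AA, repeat=2)]
    index = {p: i for i, p in enumerate(pairs)}
    counts = np.zeros(len(pairs))
    for i in range(len(seq) - skip - 1):
        counts[index[seq[i] + seq[i + skip + 1]]] += 1
    return counts / max(1, counts.sum())

# toy data: random sequences with arbitrary labels, for illustration only
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(AA), 120)) for _ in range(60)]
y = np.array([0] * 30 + [1] * 30)
X = np.array([skip_gram_features(s) for s in seqs])
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
print(round(scores.mean(), 3))
```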
Design and analysis of compound flexible skin based on deformable honeycomb
NASA Astrophysics Data System (ADS)
Zou, Tingting; Zhou, Li
2017-04-01
NASA Astrophysics Data System (ADS)
Mustapha, S.; Braytee, A.; Ye, L.
2017-04-01
In this study, we focused on the development and verification of a robust framework for surface crack detection in steel pipes using measured vibration responses, in the presence of multiple progressive damage occurring at different locations within the structure. Feature selection, dimensionality reduction, and a multi-class support vector machine were established for this purpose. Nine damage cases, at different locations, orientations and lengths, were introduced into the pipe structure. The pipe was impacted 300 times using an impact hammer; after each damage case, the vibration data were collected using 3 PZT wafers installed on the outer surface of the pipe. First, damage-sensitive features were extracted using the frequency response function approach, followed by recursive feature elimination for dimensionality reduction. Then, a multi-class support vector machine learning algorithm was employed to train the data and generate a statistical model. Once the model was established, decision values and distances from the hyper-plane were generated for newly collected data using the trained model. This process was repeated on the data collected from each sensor. Overall, using a single sensor for training and testing led to very high accuracy, reaching 98% in the assessment of the 9 damage cases used in this study.
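The feature-selection and classification pipeline described above can be sketched with scikit-learn as below; the FRF-derived features are replaced by random stand-ins, and the parameter choices (number of retained features, elimination step) are assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# toy stand-in for FRF-derived damage-sensitive features:
# 300 impacts x 500 features, 9 hypothetical damage classes
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 500))
y = np.repeat(np.arange(9), 300 // 9 + 1)[:300]

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=30, step=50)),
    ("svm", SVC(kernel="linear", decision_function_shape="ovo")),
])
print(round(cross_val_score(pipeline, X, y, cv=5).mean(), 3))
```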
Radtke, Valentin; Himmel, Daniel; Pütz, Katharina; Goll, Sascha K; Krossing, Ingo
2014-04-07
We introduce the protoelectric potential map (PPM) as a novel, two-dimensional plot of the absolute reduction potential (pe_abs scale) combined with the absolute protochemical potential (Brønsted acidity: pH_abs scale). The validity of this thermodynamically derived PPM is solvent-independent due to the scale zero points, which were chosen as the ideal electron gas and the ideal proton gas at standard conditions. To tie a chemical environment to these reference states, the standard Gibbs energies for the transfer of the gaseous electrons/protons to the medium are needed as anchor points. Thereby, the thermodynamics of any redox, acid-base or combined system in any medium can be related to any other, resulting in a predictability of reactions even over different media or phase boundaries. Instruction is given on how to construct the PPM from the anchor points derived and tabulated with this work. Since efforts to establish "absolute" reduction potential scales and also "absolute" pH scales already exist, a short review in this field is given and brought into relation to the PPM. Some comments on the electrochemical validation and realization conclude this concept article. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Geilfus, Christoph-Martin; Ober, Dietrich; Eichacker, Lutz A; Mühling, Karl Hermann; Zörb, Christian
2015-05-01
The salt-sensitive crop Zea mays L. shows a rapid leaf growth reduction upon NaCl stress. There is increasing evidence that salinity impairs the ability of the cell walls to expand, ultimately inhibiting growth. Wall-loosening is a prerequisite for cell wall expansion, a process that is under the control of cell wall-located expansin proteins. In this study the abundance of those proteins was analyzed against salt stress using gel-based two-dimensional proteomics and two-dimensional Western blotting. Results show that ZmEXPB6 (Z. mays β-expansin 6) protein is lacking in growth-inhibited leaves of salt-stressed maize. Of note, the exogenous application of heterologously expressed and metal-chelate-affinity chromatography-purified ZmEXPB6 on growth-reduced leaves that lack native ZmEXPB6 under NaCl stress partially restored leaf growth. In vitro assays on frozen-thawed leaf sections revealed that recombinant ZmEXPB6 acts on the capacity of the walls to extend. Our results identify expansins as a factor that partially restores leaf growth of maize in saline environments. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Aggregate Measures of Watershed Health from Reconstructed ...
Risk-based indices such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. Locations with different WQ constituents and different standards for impairment were successfully combined to provide aggregate measures of R-R-V values. Comparisons with individual constituent R-R-V values showed that v
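For reference, standard R-R-V definitions applied to a single (here synthetic) aggregate time series might look like the following sketch; the threshold and failure convention are illustrative assumptions.

```python
import numpy as np

def rrv_indices(series, threshold, above_is_failure=True):
    """Reliability, resilience and vulnerability of a water-quality time series
    relative to a compliance threshold (standard R-R-V definitions)."""
    fail = series > threshold if above_is_failure else series < threshold
    reliability = 1.0 - fail.mean()
    # resilience: probability of recovering at the next step, given failure now
    recoveries = np.sum(fail[:-1] & ~fail[1:])
    resilience = recoveries / max(1, fail[:-1].sum())
    # vulnerability: mean exceedance magnitude during failure periods
    exceed = (series - threshold) if above_is_failure else (threshold - series)
    vulnerability = exceed[fail].mean() if fail.any() else 0.0
    return reliability, resilience, vulnerability

# toy aggregate time series (stand-in for the reconstructed, condensed WQ signal)
ts = np.random.default_rng(0).gamma(shape=2.0, scale=10.0, size=365)
print(rrv_indices(ts, threshold=30.0))
```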
Villoria, Eduardo M; Lenzi, Antônio R; Soares, Rodrigo V; Souki, Bernardo Q; Sigurdsson, Asgeir; Marques, Alexandre P; Fidel, Sandra R
2017-01-01
To describe the use of open-source software for the post-processing of CBCT imaging for the assessment of periapical lesion development after endodontic treatment. CBCT scans were retrieved from the endodontic records of two patients. Three-dimensional virtual models, voxel counting, volumetric measurement (mm³) and mean intensity of the periapical lesion were obtained with the ITK-SNAP v. 3.0 software. Three-dimensional models of the lesions were aligned and overlapped in the MeshLab software, which performed an automatic registration of the anatomical structures based on the best fit. Qualitative and quantitative analyses of the changes in lesion size after treatment were performed with the 3DMeshMetric software. ITK-SNAP v. 3.0 showed smaller values for the voxel count and the volume of the lesion segmented in yellow, indicating a reduction in the volume of the lesion after treatment. A higher mean intensity of the image segmented in yellow was also observed, which suggested new bone formation. Colour mapping and the "point value" tool allowed visualization of the reduction of periapical lesions in several regions. Researchers and clinicians thus have the opportunity to use open-source software in the monitoring of endodontic periapical lesions.
Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.
2013-01-01
The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954
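The core operation behind such a projection-browsing tool, viewing latent trajectories through one of a continuum of orthonormal 2D projections, can be sketched in a few lines (shown here in Python rather than the Matlab of the GUI); the latent trajectories below are synthetic.

```python
import numpy as np

def random_2d_projection(latents, seed=0):
    """Project reduced-dimensional population activity (n_times x n_latents)
    onto a random orthonormal 2D plane, one of the many views such a
    projection-browsing GUI would step through."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((latents.shape[1], 2)))
    return latents @ q                 # (n_times, 2) coordinates for plotting

# toy latent trajectories: 200 time points in an 8-dimensional latent space
latents = np.cumsum(np.random.default_rng(1).standard_normal((200, 8)), axis=0)
xy = random_2d_projection(latents, seed=3)
print(xy.shape)
```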
Marginal hepatectomy in the rat: from anatomy to surgery.
Madrahimov, Nodir; Dirsch, Olaf; Broelsch, Christoph; Dahmen, Uta
2006-07-01
Based on the three-dimensional visualization of vascular supply and drainage, a vessel-oriented resection technique was optimized. The new surgical technique was used to determine the maximal reduction in liver mass enabling a 50% 1-week survival rate. Determination of the minimal liver mass is necessary in clinical as well as in experimental liver surgery. In rats, survival seems to depend on the surgical technique applied. Extended hepatectomy with removal of 90% of the liver mass was long regarded as a lethal model. Introduction of a vessel-oriented approach enabled long-term survival in this model. The lobar and vascular anatomy of rat livers was visualized by plastination of the whole organ and by corrosion casts of the portal vein, hepatic artery and liver veins. The three-dimensional models were used to extract the underlying anatomic structure. In 90% partial hepatectomy, the liver parenchyma was clamped close to the base of the respective liver lobes (left lateral, median and right liver lobes). Piercing sutures were placed through the liver parenchyma so that the stem of the portal vein, the accompanying hepatic artery and the hepatic vein were included. A 1-week survival rate of 100% was achieved after 90% hepatectomy. Extending the procedure to 95% resection by additional removal of the upper caudate lobe led to a 1-week survival rate of 66%; 97% partial hepatectomy, accomplished by additional resection of the lower caudate lobe and leaving only the paracaval parts of the liver behind, resulted in 100% lethality within 4 days. Using an anatomically based, vessel-oriented, parenchyma-preserving surgical technique, 95% liver resections led to long-term survival. This represents the maximal reduction of liver mass compatible with survival.
Evaluation of WES One-Dimensional Dynamic Soil Testing Procedures.
1983-06-01
Recoverable fragments of this abstract concern dividing the stress reduction Δσ by σ from equation (5) to obtain an estimate of the fractional error; the radial expansion of the soil caused by the expansion of the steel side walls; the nonuniform stress and strain states in the sample; and the observation that, when the load is applied rapidly, the stress state in the soil at the steel base may be very different from that at the top surface of the soil.
Development of non-linear finite element computer code
NASA Technical Reports Server (NTRS)
Becker, E. B.; Miller, T.
1985-01-01
Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
Polyaniline-based memristive microdevice with high switching rate and endurance
NASA Astrophysics Data System (ADS)
Lapkin, D. A.; Emelyanov, A. V.; Demin, V. A.; Erokhin, V. V.; Feigin, L. A.; Kashkarov, P. K.; Kovalchuk, M. V.
2018-01-01
Polyaniline (PANI) based memristive devices have emerged as promising candidates for hardware implementation of artificial synapses (the key components of neuromorphic systems) due to their high flexibility, low cost, solution processability, three-dimensional stacking capability, and biocompatibility. Here, we report a way to significantly improve the switching rate and endurance of PANI-based memristive devices. Reducing the dimensions of the PANI active channel increases the resistive switching rate by hundreds of times in comparison with the conventional device. The miniaturized memristive device was shown to be stable over at least 10⁴ cyclic switching events between high- and low-conductive states, with a retention time of at least 10³ s. The obtained results make PANI-based memristive devices potentially widely applicable in neuromorphic systems.
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
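A rough sketch of the dimension-reduction step: keep a low-frequency block of the 2D DCT of each gait feature image and then reduce further with PCA. Ordinary PCA is used here in place of 2DPCA, and the image data, block size and component count are illustrative assumptions.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import PCA

def reduce_gait_images(images, n_dct=8, n_components=20):
    """Reduce a stack of 2D gait feature images by keeping a low-frequency
    block of the 2D DCT, then applying PCA (a stand-in for the DCT + 2DPCA
    step described above)."""
    coeffs = dct(dct(images, axis=1, norm="ortho"), axis=2, norm="ortho")
    block = coeffs[:, :n_dct, :n_dct].reshape(len(images), -1)
    return PCA(n_components=n_components).fit_transform(block)

# toy stand-in for gait curvature images (grayscale here), 50 samples of 64x64
imgs = np.random.default_rng(0).standard_normal((50, 64, 64))
print(reduce_gait_images(imgs).shape)   # (50, 20)
```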
Subbarao, Udumula; Sarkar, Sumanta; Jana, Rajkumar; Bera, Sourav S; Peter, Sebastian C
2016-06-06
We conceptually selected the compounds REPb3 (RE = Eu, Yb), which are unstable in air, and converted them into materials that are stable under ambient conditions through the chemical processes of "nanoparticle formation" and "dimensional reduction". The nanoparticles and the bulk counterparts were synthesized by solvothermal and high-frequency induction furnace heating methods, respectively. The reduction of the particle size led to a valence transition of the rare earth atom, which was monitored through magnetic susceptibility and X-ray absorption near edge spectroscopy (XANES) measurements. The stability was checked by X-ray diffraction and thermogravimetric analysis over a period of seven months in oxygen and argon atmospheres and confirmed by XANES. The nanoparticles showed outstanding stability toward aerial oxidation over the seven-month period compared to the bulk counterpart, as the latter is more prone to oxidation within a few days.
The staircase method: integrals for periodic reductions of integrable lattice equations
NASA Astrophysics Data System (ADS)
van der Kamp, Peter H.; Quispel, G. R. W.
2010-11-01
We show, in full generality, that the staircase method (Papageorgiou et al 1990 Phys. Lett. A 147 106-14, Quispel et al 1991 Physica A 173 243-66) provides integrals for mappings, and correspondences, obtained as traveling wave reductions of (systems of) integrable partial difference equations. We apply the staircase method to a variety of equations, including the Korteweg-de Vries equation, the five-point Bruschi-Calogero-Droghei equation, the quotient-difference (QD) algorithm and the Boussinesq system. We show that, in all these cases, if the staircase method provides r integrals for an n-dimensional mapping, with 2r < n, then one can introduce q ≤ 2r variables, which reduce the dimension of the mapping from n to q. These dimension-reducing variables are obtained as joint invariants of k-symmetries of the mappings. Our results support the idea that the staircase method often provides sufficiently many integrals for the periodic reductions of integrable lattice equations to be completely integrable. We also study reductions on quad-graphs other than the regular Z² lattice, and we prove linear growth of the multi-valuedness of iterates of high-dimensional correspondences obtained as reductions of the QD algorithm.
Localized contourlet features in vehicle make and model recognition
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, B. S.
2009-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are based solely on Automatic Number Plate Recognition (ANPR). Several vehicle MMR systems have been proposed in the literature. In parallel, the usefulness of multi-resolution feature analysis techniques leading to efficient object classification algorithms has received close attention from the research community. To this effect, the Contourlet transform, which provides an efficient directional multi-resolution image representation, has recently been introduced, and an attempt has already been made in the literature to use Curvelet/Contourlet transforms in vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain that is capable of increasing classification rates by up to 4% compared with the previously proposed Contourlet-based vehicle MMR approach, in which the features are non-localized and thus result in sub-optimal classification. Further, we show that the proposed algorithm can achieve the increased classification accuracy of 96% at significantly lower computational complexity due to the use of Two-Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction, which preserves features with high between-class variance and low within-class variance.
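A minimal sketch of right-projection 2DLDA on image matrices, the dimensionality-reduction step named above: build between-class and within-class column scatter matrices and keep the leading generalized eigenvectors. The input images, sizes and labels below are synthetic stand-ins, not Contourlet features.

```python
import numpy as np
from scipy.linalg import eigh

def two_d_lda(images, labels, n_components=2):
    """Right-projection 2DLDA: find column directions with high between-class
    and low within-class scatter of the image matrices."""
    classes = np.unique(labels)
    overall_mean = images.mean(axis=0)
    d = images.shape[2]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = images[labels == c]
        mc = Xc.mean(axis=0)
        diff = mc - overall_mean
        Sb += len(Xc) * diff.T @ diff
        for A in Xc:
            Sw += (A - mc).T @ (A - mc)
    # generalized eigenproblem Sb w = lambda Sw w; keep the leading eigenvectors
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

# toy stand-in for feature images of vehicle fronts: 6 classes, 10 images each
rng = np.random.default_rng(0)
imgs = rng.standard_normal((60, 16, 16))
labels = np.repeat(np.arange(6), 10)
W = two_d_lda(imgs, labels, n_components=2)
print((imgs @ W).shape)   # (60, 16, 2): reduced features per image
```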
Multi-dimensional multi-species modeling of transient electrodeposition in LIGA microfabrication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Gregory Herbert; Chen, Ken Shuang
2004-06-01
This report documents the efforts and accomplishments of the LIGA electrodeposition modeling project which was headed by the ASCI Materials and Physics Modeling Program. A multi-dimensional framework based on GOMA was developed for modeling time-dependent diffusion and migration of multiple charged species in a dilute electrolyte solution with reduction electro-chemical reactions on moving deposition surfaces. By combining the species mass conservation equations with the electroneutrality constraint, a Poisson equation that explicitly describes the electrolyte potential was derived. The set of coupled, nonlinear equations governing species transport, electric potential, velocity, hydrodynamic pressure, and mesh motion were solved in GOMA, using the finite-element method and a fully-coupled implicit solution scheme via Newton's method. By treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and by repeatedly performing re-meshing with CUBIT and re-mapping with MAPVAR, the moving deposition surfaces were tracked explicitly from start of deposition until the trenches were filled with metal, thus enabling the computation of local current densities that potentially influence the microstructure and frictional/mechanical properties of the deposit. The multi-dimensional, multi-species, transient computational framework was demonstrated in case studies of two-dimensional nickel electrodeposition in single and multiple trenches, without and with bath stirring or forced flow. Effects of buoyancy-induced convection on deposition were also investigated. To further illustrate its utility, the framework was employed to simulate deposition in microscreen-based LIGA molds. Lastly, future needs for modeling LIGA electrodeposition are discussed.
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling, which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography, where the identification of the mechanical properties of biological materials can inform non-invasive medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
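The bias-correction step mentioned above can be illustrated with self-normalized importance sampling: samples from an approximate posterior q are reweighted by the (unnormalized) posterior density, and the effective sample size indicates how far the approximation deviates. The one-dimensional densities below are toy assumptions.

```python
import numpy as np

def importance_correct(samples, log_q, log_post, f):
    """Self-normalized importance sampling: reweight samples drawn from an
    approximate posterior q to estimate a posterior expectation E[f] and to
    quantify the quality of the approximation (effective sample size)."""
    log_w = log_post(samples) - log_q(samples)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)
    return np.sum(w * f(samples)), ess

# toy 1D example: q is a slightly misplaced Gaussian approximation of the posterior
rng = np.random.default_rng(0)
xs = rng.normal(loc=0.3, scale=1.2, size=20000)
log_q = lambda x: -0.5 * ((x - 0.3) / 1.2) ** 2 - np.log(1.2)
log_post = lambda x: -0.5 * x ** 2                 # unnormalized standard normal
mean_est, ess = importance_correct(xs, log_q, log_post, lambda x: x)
print(round(mean_est, 3), int(ess))                # corrected posterior mean ~ 0
```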
An Interview with Matthew P. Greving, PhD. Interview by Vicki Glaser.
Greving, Matthew P
2011-10-01
Matthew P. Greving is Chief Scientific Officer at Nextval Inc., a company founded in early 2010 that has developed a discovery platform called MassInsight™. He received his PhD in Biochemistry from Arizona State University, and prior to that he spent nearly 7 years working as a software engineer. This experience in solving complex computational problems fueled his interest in developing technologies and algorithms related to the acquisition and analysis of high-dimensional biochemical data. To address the existing problems associated with label-based microarray readouts, he began work on a technique for label-free mass spectrometry (MS) microarray readout compatible with both matrix-assisted laser desorption/ionization (MALDI) and matrix-free nanostructure initiator mass spectrometry (NIMS). This is the core of Nextval's MassInsight technology, which utilizes picoliter noncontact deposition of high-density arrays on mass-readout substrates along with computational algorithms for high-dimensional data processing and reduction.
Martin, Bryan D.; Wolfson, Julian; Adomavicius, Gediminas; Fan, Yingling
2017-01-01
We propose and compare combinations of several methods for classifying transportation activity data from smartphone GPS and accelerometer sensors. We have two main objectives. First, we aim to classify our data as accurately as possible. Second, we aim to reduce the dimensionality of the data as much as possible in order to reduce the computational burden of the classification. We combine dimension reduction and classification algorithms and compare them with a metric that balances accuracy and dimensionality. In doing so, we develop a classification algorithm that accurately classifies five different modes of transportation (i.e., walking, biking, car, bus and rail) while being computationally simple enough to run on a typical smartphone. Further, we use data whose collection required no behavioral changes from the smartphone users. Our best classification model uses the random forest algorithm to achieve 96.8% accuracy. PMID:28885550
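A hedged sketch of one such combination, PCA for dimension reduction followed by a random forest classifier, evaluated by cross-validated accuracy; the sensor features and labels are synthetic, and the component and tree counts are assumptions rather than the study's tuned values.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# toy stand-in for windowed GPS/accelerometer features with five travel modes
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 60))
y = rng.integers(0, 5, 500)        # walking, biking, car, bus, rail

clf = Pipeline([
    ("reduce", PCA(n_components=10)),               # cut dimensionality first
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
])
print(round(cross_val_score(clf, X, y, cv=5).mean(), 3))  # accuracy on toy data
```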
Moiré-reduction method for slanted-lenticular-based quasi-three-dimensional displays
NASA Astrophysics Data System (ADS)
Zhuang, Zhenfeng; Surman, Phil; Zhang, Lei; Rawat, Rahul; Wang, Shizheng; Zheng, Yuanjin; Sun, Xiao Wei
2016-12-01
In this paper we present a method for determining the preferred slanted angle of a lenticular film that minimizes moiré patterns in quasi-three-dimensional (Q3D) displays. We evaluate the preferred slanted angles of the lenticular film for a stripe-type sub-pixel structure liquid crystal display (LCD) panel. Additionally, a sub-pixel mapping algorithm for the specific angle is proposed to assign the images to either the right or left eye channel. A Q3D display prototype is built. Compared with a conventional slanted lenticular film (SLF), the newly implemented Q3D display can not only eliminate moiré patterns but also provide 3D images in both portrait and landscape orientations. It is demonstrated that the developed SLF provides satisfactory 3D images through a compact structure, minimal moiré patterns and stabilized 3D contrast.
Joe, Yong S; Lee, Sun H; Hedin, Eric R; Kim, Young D
2013-06-01
We utilize a two-dimensional four-channel DNA model, with a tight-binding (TB) Hamiltonian, and investigate the temperature and magnetic field dependence of the transport behavior of a short DNA molecule. Random variation of the hopping integrals due to thermal structural disorder, which partially destroys the phase coherence of electrons and reduces quantum interference, leads to a reduction of the localization length and suppresses the overall transmission. We also incorporate the variation of magnetic flux density into the hopping integrals as a phase factor and observe Aharonov-Bohm (AB) oscillations in the transmission. It is shown that for non-zero magnetic flux, the transmission zero leaves the real-energy axis and moves up into the complex-energy plane. We also point out that the hydrogen bonds between the base pairs, together with flux variations, play a role in determining the periodicity of the AB oscillations in the transmission.
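For context, a generic NEGF-style transmission calculation for a tight-binding chain with a Peierls phase on the hoppings is sketched below. This is a single-channel toy with wide-band leads, not the four-channel DNA ladder; note that on a simple chain (no loops) the phase can be gauged away, so genuine AB oscillations require the ladder geometry described in the abstract.

```python
import numpy as np

def transmission(energies, onsite, hopping, gamma=1.0, phase=0.0):
    """NEGF transmission through a tight-binding chain with wide-band leads.
    A flux-like phase is attached to the hoppings (Peierls substitution)."""
    n = len(onsite)
    H = np.diag(onsite).astype(complex)
    t = hopping * np.exp(1j * phase)
    for i in range(n - 1):
        H[i, i + 1] = t
        H[i + 1, i] = np.conj(t)
    sigma_L = np.zeros((n, n), complex); sigma_L[0, 0] = -0.5j * gamma
    sigma_R = np.zeros((n, n), complex); sigma_R[-1, -1] = -0.5j * gamma
    gam_L, gam_R = -2 * sigma_L.imag, -2 * sigma_R.imag   # broadening matrices
    T = []
    for E in energies:
        G = np.linalg.inv(E * np.eye(n) - H - sigma_L - sigma_R)
        T.append(np.trace(gam_L @ G @ gam_R @ G.conj().T).real)
    return np.array(T)

E = np.linspace(-2.5, 2.5, 401)
print(round(transmission(E, onsite=np.zeros(8), hopping=1.0, phase=0.3).max(), 3))
```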
Geometric entropy and edge modes of the electromagnetic field
NASA Astrophysics Data System (ADS)
Donnelly, William; Wall, Aron C.
2016-11-01
We calculate the vacuum entanglement entropy of Maxwell theory in a class of curved spacetimes by Kaluza-Klein reduction of the theory onto a two-dimensional base manifold. Using two-dimensional duality, we express the geometric entropy of the electromagnetic field as the entropy of a tower of scalar fields, constant electric and magnetic fluxes, and a contact term, whose leading-order divergence was discovered by Kabat. The complete contact term takes the form of one negative scalar degree of freedom confined to the entangling surface. We show that the geometric entropy agrees with a statistical definition of entanglement entropy that includes edge modes: classical solutions determined by their boundary values on the entangling surface. This resolves a long-standing puzzle about the statistical interpretation of the contact term in the entanglement entropy. We discuss the implications of this negative term for black hole thermodynamics and the renormalization of Newton's constant.
Spontaneous bending of pre-stretched bilayers.
DeSimone, Antonio
2018-01-01
We discuss spontaneously bent configurations of pre-stretched bilayer sheets that can be obtained by tuning the pre-stretches in the two layers. The two-dimensional nonlinear plate model we use for this purpose is an adaptation of the one recently obtained for thin sheets of nematic elastomers, by means of a rigorous dimensional reduction argument based on the theory of Gamma-convergence (Agostiniani and DeSimone in Meccanica. doi:10.1007/s11012-017-0630-4, 2017, Math Mech Solids. doi:10.1177/1081286517699991, arXiv:1509.07003, 2017). We argue that pre-stretched bilayer sheets provide us with an interesting model system to study shape programming and morphing of surfaces in other, more complex systems, where spontaneous deformations are induced by swelling due to the absorption of a liquid, phase transformations, thermal or electro-magnetic stimuli. These include bio-mimetic structures inspired by biological systems from both the plant and the animal kingdoms.
On the emergence of the ΛCDM model from self-interacting Brans-Dicke theory in d= 5
NASA Astrophysics Data System (ADS)
Reyes, Luz Marina; Perez Bergliaffa, Santiago Esteban
2018-01-01
We investigate whether a self-interacting Brans-Dicke theory in d=5 without matter and with a time-dependent metric can describe, after dimensional reduction to d=4, the FLRW model with accelerated expansion and non-relativistic matter. By rewriting the effective 4-dimensional theory as an autonomous 3-dimensional dynamical system and studying its critical points, we show that the ΛCDM cosmology cannot emerge from such a model. This result suggests that a richer structure in d=5 may be needed to obtain the accelerated expansion as well as the matter content of the 4-dimensional universe.
Cai, Jia; Tang, Yi
2018-02-01
Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. A kernel generalization, namely kernel CCA, has been proposed to describe nonlinear relationships between two sets of variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) the theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the number of iterations required is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results demonstrate the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
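For readers unfamiliar with the algorithmic ingredient named here, the following is a generic randomized Kaczmarz iteration for a consistent linear system Ax = b, with rows sampled in proportion to their squared norms; it is background only and is not the authors' kernel CCA formulation.

    # Generic randomized Kaczmarz solver for Ax = b (background sketch only,
    # not the kernel CCA variant described in the abstract).
    import numpy as np

    def randomized_kaczmarz(A, b, n_iter=2000, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_norms = np.sum(A ** 2, axis=1)
        probs = row_norms / row_norms.sum()       # sample rows ~ ||a_i||^2
        x = np.zeros(n)
        for _ in range(n_iter):
            i = rng.choice(m, p=probs)
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]  # project onto hyperplane i
        return x

    A = np.random.default_rng(1).normal(size=(200, 50))
    b = A @ np.ones(50)
    print(np.linalg.norm(randomized_kaczmarz(A, b) - np.ones(50)))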
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
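The building block of IDLS as summarized above, representing a central macro-pixel by its neighboring macro-pixels through ridge regression, can be sketched as follows; the patch size, neighborhood layout, and ridge parameter are illustrative assumptions.

    # Sketch of the IDLS building block: ridge-regression coefficients that
    # express a central patch as a linear combination of its neighbor patches.
    import numpy as np

    def local_structure_coefficients(center_patch, neighbor_patches, lam=0.1):
        """center_patch: (p,) vector; neighbor_patches: (k, p) matrix."""
        N = neighbor_patches                         # k x p
        G = N @ N.T + lam * np.eye(N.shape[0])       # regularized Gram matrix
        return np.linalg.solve(G, N @ center_patch)  # one coefficient per neighbor

    rng = np.random.default_rng(0)
    center = rng.normal(size=9)                      # 3x3 macro-pixel, flattened
    neighbors = rng.normal(size=(8, 9))              # 8 neighboring macro-pixels
    print(local_structure_coefficients(center, neighbors))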
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. The model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but characterises the behaviour of the corresponding correlation matrix instead. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a procedure for estimating the dimension of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a 'low' correlation between a pair of genes may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
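The simulation described in this abstract, a one-dimensional signal projected into many dimensions with added noise followed by inspection of the correlation-matrix eigenspectrum, can be reproduced in outline as follows; the sample size, dimension, loadings, and noise level are arbitrary illustrative choices.

    # Sketch: embed a 1-D signal in p dimensions, add noise, and inspect the
    # eigenspectrum of the sample correlation matrix (spiked-model behaviour).
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 50                         # samples, variables ("genes")
    signal = rng.normal(size=n)            # one-dimensional latent signal
    loadings = rng.uniform(0.5, 1.0, size=p)
    X = np.outer(signal, loadings) + 0.5 * rng.normal(size=(n, p))

    corr = np.corrcoef(X, rowvar=False)    # p x p correlation matrix
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    print("leading eigenvalues:", eigvals[:5])   # one large spike plus a bulk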
Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.
2015-01-01
In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483
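A minimal sketch of the two-directional 2DPCA step referred to in this abstract, projecting each two-dimensional sample X to U^T X V with U and V built from the row- and column-direction covariances, is given below; the sliding-window construction, the RBF network, and the actual stock variables are omitted, and the sizes are assumptions.

    # Sketch of (2D)2PCA: project 2-D samples X_i to U^T X_i V, where U and V hold
    # leading eigenvectors of the row- and column-direction covariance matrices.
    import numpy as np

    def two_directional_2dpca(samples, k_rows=5, k_cols=5):
        centered = samples - samples.mean(axis=0)
        G_col = sum(C.T @ C for C in centered) / len(samples)   # acts on the right
        G_row = sum(C @ C.T for C in centered) / len(samples)   # acts on the left
        V = np.linalg.eigh(G_col)[1][:, -k_cols:]
        U = np.linalg.eigh(G_row)[1][:, -k_rows:]
        return np.array([U.T @ X @ V for X in samples]), U, V

    rng = np.random.default_rng(0)
    windows = rng.normal(size=(100, 20, 36))    # 100 windows of 20 days x 36 variables
    reduced, U, V = two_directional_2dpca(windows)
    print(reduced.shape)                        # (100, 5, 5)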
Yoshii, Yuichi; Kusakabe, Takuya; Akita, Kenichi; Tung, Wen Lin; Ishii, Tomoo
2017-12-01
A three-dimensional (3D) digital preoperative planning system for the osteosynthesis of distal radius fractures was developed for clinical practice. To assess the usefulness of the 3D planning for osteosynthesis, we evaluated the reproducibility of the reduction shapes and selected implants in the patients with distal radius fractures. Twenty wrists of 20 distal radius fracture patients who underwent osteosynthesis using volar locking plates were evaluated. The 3D preoperative planning was performed prior to each surgery. Four surgeons conducted the surgeries. The surgeons performed the reduction and the placement of the plate while comparing images between the preoperative plan and fluoroscopy. Preoperative planning and postoperative reductions were compared by measuring volar tilt and radial inclination of the 3D images. Intra-class correlation coefficients (ICCs) of the volar tilt and radial inclination were evaluated. For the implant choices, the ICCs for the screw lengths between the preoperative plan and the actual choices were evaluated. The ICCs were 0.644 (p < 0.01) and 0.625 (p < 0.01) for the volar tilt and radial inclination in the 3D measurements, respectively. The planned size of plate was used in all of the patients. The ICC for the screw length between preoperative planning and actual choice was 0.860 (p < 0.01). Good reproducibility for the reduction shape and excellent reproducibility for the implant choices were achieved using 3D preoperative planning for distal radius fracture. Three-dimensional digital planning was useful to visualize the reduction process and choose a proper implant for distal radius fractures. J Orthop Res 35:2646-2651, 2017. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
Analysis of Information Content in High-Spectral Resolution Sounders using Subset Selection Analysis
NASA Technical Reports Server (NTRS)
Velez-Reyes, Miguel; Joiner, Joanna
1998-01-01
In this paper, we summarize the results of the sensitivity analysis and data reduction carried out to determine the information content of AIRS and IASI channels. The analysis and data reduction were based on the use of subset selection techniques developed in the linear algebra and statistics communities to study linear dependencies in high dimensional data sets. We applied the subset selection method to study dependency among channels by studying the dependency among their weighting functions. We also applied the technique to study the information provided by the different levels into which the atmosphere is discretized for retrievals and analysis. Results from the method correlate well with intuition in many respects and point to possible modifications for band selection in sensor design and for the number and location of levels in the analysis process.
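One standard linear-algebra subset selection device of the kind this summary refers to is QR factorization with column pivoting, which ranks columns (here standing in for channel weighting functions) by the independent information they add; the synthetic matrix and the choice of QR pivoting are illustrative assumptions and may differ from the exact technique used in the study.

    # Sketch: select a near-independent subset of columns (channel weighting
    # functions) with QR factorization using column pivoting.
    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(0)
    W = rng.normal(size=(60, 30))          # 60 levels x 30 channel weighting functions
    W[:, 10] = W[:, 3] + 1e-6 * rng.normal(size=60)   # one nearly redundant channel

    Q, R, piv = qr(W, pivoting=True)       # columns ordered by decreasing contribution
    k = 15                                 # keep the 15 most informative channels
    print(sorted(piv[:k]))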
Biased normalized cuts for target detection in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Xuewen; Dorado-Munoz, Leidy P.; Messinger, David W.; Cahill, Nathan D.
2016-05-01
The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
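As background, the baseline that BNC modifies is the standard normalized-cuts spectral relaxation, which solves a generalized eigenproblem on the graph Laplacian of a pixel-affinity matrix; the sketch below shows only that unbiased baseline on random features and omits the biasing toward user-supplied target labels.

    # Sketch of the standard (unbiased) normalized-cuts embedding on a random
    # affinity matrix; the biasing toward target pixels used by BNC is omitted.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    F = rng.normal(size=(200, 5))                          # pixel features (e.g., spectra)
    d2 = np.sum((F[:, None, :] - F[None, :, :]) ** 2, -1)  # pairwise squared distances
    W = np.exp(-d2 / d2.mean())                            # affinity matrix
    D = np.diag(W.sum(axis=1))

    # Generalized eigenproblem (D - W) y = lambda * D y; the eigenvector for the
    # second-smallest eigenvalue gives the relaxed normalized-cut partition.
    vals, vecs = eigh(D - W, D)
    print(vecs[:5, 1])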
Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane
2013-01-01
We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
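The scoring change QMDR introduces, replacing balanced accuracy with a T statistic computed on a quantitative trait, can be illustrated for a single two-SNP genotype combination as follows; the grouping rule and the synthetic data are illustrative, and the full MDR constructive-induction and cross-validation machinery is not shown.

    # Sketch of QMDR-style scoring: for one two-SNP model, pool genotype cells
    # into "high" and "low" groups by mean trait value and score with a T-test.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    snp1 = rng.integers(0, 3, size=300)      # genotypes coded 0/1/2
    snp2 = rng.integers(0, 3, size=300)
    trait = rng.normal(size=300)             # quantitative trait

    overall_mean = trait.mean()
    high, low = [], []
    for g1 in range(3):
        for g2 in range(3):
            cell = trait[(snp1 == g1) & (snp2 == g2)]
            if cell.size:
                (high if cell.mean() >= overall_mean else low).append(cell)

    t_stat, _ = ttest_ind(np.concatenate(high), np.concatenate(low), equal_var=False)
    print("QMDR-style score (|T|):", abs(t_stat))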
Computational analysis of gene-gene interactions using multifactor dimensionality reduction.
Moore, Jason H
2004-11-01
Understanding the relationship between DNA sequence variations and biologic traits is expected to improve the diagnosis, prevention and treatment of common human diseases. Success in characterizing genetic architecture will depend on our ability to address nonlinearities in the genotype-to-phenotype mapping relationship as a result of gene-gene interactions, or epistasis. This review addresses the challenges associated with the detection and characterization of epistasis. A novel strategy known as multifactor dimensionality reduction that was specifically designed for the identification of multilocus genetic effects is presented. Several case studies that demonstrate the detection of gene-gene interactions in common diseases such as atrial fibrillation, Type II diabetes and essential hypertension are also discussed.
Dutt-Mazumder, Aviroop; Button, Chris; Robins, Anthony; Bartlett, Roger
2011-12-01
Recent studies have explored the organization of player movements in team sports using a range of statistical tools. However, the factors that best explain the performance of association football teams remain elusive. Arguably, this is due to the high-dimensional behavioural outputs that illustrate the complex, evolving configurations typical of team games. According to dynamical systems analysts, movement patterns in team sports exhibit nonlinear self-organizing features. Nonlinear processing tools (i.e. Artificial Neural Networks; ANNs) are becoming increasingly popular for investigating the coordination of participants in sports competitions. ANNs are well suited to describing high-dimensional data sets with nonlinear attributes; however, limited information exists concerning the processes required to apply ANNs. This review investigates the relative value of various ANN learning approaches used in sports performance analysis of team sports, focusing on potential applications for association football. Sixty-two research sources were summarized and reviewed from electronic literature search engines such as SPORTDiscus, Google Scholar, IEEE Xplore, Scirus, ScienceDirect and Elsevier. Typical ANN learning algorithms can be adapted to perform pattern recognition and pattern classification. In particular, dimensionality reduction by a Kohonen feature map (KFM) can compress chaotic high-dimensional datasets into low-dimensional relevant information. Such information would be useful for developing effective training drills that should enhance self-organizing coordination among players. We conclude that ANN-based qualitative analysis is a promising approach to understanding the dynamical attributes of association football players.
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow the relevant regions of configurational space to be sampled adequately by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is the design of strategies to adaptively distribute the trajectories over the relevant regions of the system's configurational space, without using any a priori information on the system's global properties. We discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and on optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
Choi, David; Poudel, Nirakar; Park, Saungeun; Akinwande, Deji; Cronin, Stephen B; Watanabe, Kenji; Taniguchi, Takashi; Yao, Zhen; Shi, Li
2018-04-04
Scanning thermal microscopy measurements reveal a significant thermal benefit of including a high thermal conductivity hexagonal boron nitride (h-BN) heat-spreading layer between graphene and either a SiO2/Si substrate or a 100 μm thick Corning flexible Willow glass (WG) substrate. At the same power density, an 80 nm thick h-BN layer on the silicon substrate can yield a factor of 2.2 reduction of the hot spot temperature, whereas a 35 nm thick h-BN layer on the WG substrate is sufficient to obtain a factor of 4.1 reduction. The larger effect of the h-BN heat spreader on WG than on SiO2/Si is attributed to a smaller effective heat transfer coefficient per unit area for three-dimensional heat conduction into the thick, low-thermal conductivity WG substrate than for one-dimensional heat conduction through the thin oxide layer on silicon. Consequently, the h-BN lateral heat-spreading length is much larger on WG than on SiO2/Si, resulting in a larger degree of temperature reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasoyinu, Yemi; Griffin, John A.
2014-03-31
With the increased emphasis on vehicle weight reduction, production of near-net shape components by lost foam casting will make significant inroads into the next generation of engineering component designs. The lost foam casting process is a cost effective method for producing complex castings using an expandable polystyrene pattern and un-bonded sand. The use of un-bonded molding media in the lost foam process will impose less constraint on the solidifying casting, making hot tearing less prevalent. This is especially true in Al-Mg and Al-Cu alloy systems that are prone to hot tearing when poured in rigid molds, partially due to their long freezing range. Some of the unique advantages of using the lost foam casting process are closer dimensional tolerance, higher casting yield, and the elimination of sand cores and binders. Most of the aluminum alloys poured using the lost foam process are based on the Al-Si system. Very limited research work has been performed with Al-Mg and Al-Cu type alloys. With the increased emphasis on vehicle weight reduction, and given the high strength-to-weight ratio of magnesium, significant weight savings can be achieved by casting thin-wall (≤ 3 mm) engineering components from both aluminum- and magnesium-base alloys.
A solution to the Navier-Stokes equations based upon the Newton Kantorovich method
NASA Technical Reports Server (NTRS)
Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.
1977-01-01
An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order time accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. The results indicate that the technique is in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and indicate a potential for significant reduction in computation time over current iterative techniques.
Glutathione-Triggered Formation of a Fmoc-Protected Short Peptide-Based Supramolecular Hydrogel
Shi, Yang; Wang, Jingyu; Wang, Huaimin; Hu, Yanhui; Chen, Xuemei; Yang, Zhimou
2014-01-01
A biocompatible method of glutathione (GSH) catalyzed disulfide bond reduction was used to form Fmoc-short peptide-based supramolecular hydrogels. The hydrogels could form in both buffer solution and cell culture medium containing 10% of Fetal Bovine Serum (FBS) within minutes. The hydrogel was characterized by rheology, transmission electron microscopy, and fluorescence emission spectra. Their potential in three dimensional (3D) cell culture was evaluated and the results indicated that the gel with a low concentration of the peptide (0.1 wt%) was suitable for 3D cell culture of 3T3 cells. This study provides an alternative candidate of supramolecular hydrogel for 3D cell culture and cell delivery. PMID:25222132
Carbon-based electrocatalysts for advanced energy conversion and storage
Zhang, Jintao; Xia, Zhenhai; Dai, Liming
2015-01-01
Oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) play crucial roles in electrochemical energy conversion and storage, including fuel cells and metal-air batteries. Having rich multidimensional nanoarchitectures [for example, zero-dimensional (0D) fullerenes, 1D carbon nanotubes, 2D graphene, and 3D graphite] with tunable electronic and surface characteristics, various carbon nanomaterials have been demonstrated to act as efficient metal-free electrocatalysts for ORR and OER in fuel cells and batteries. We present a critical review on the recent advances in carbon-based metal-free catalysts for fuel cells and metal-air batteries, and discuss the perspectives and challenges in this rapidly developing field of practical significance. PMID:26601241
Relativistic collisions as Yang-Baxter maps
NASA Astrophysics Data System (ADS)
Kouloukas, Theodoros E.
2017-10-01
We prove that one-dimensional elastic relativistic collisions satisfy the set-theoretical Yang-Baxter equation. The corresponding collision maps are symplectic and admit a Lax representation. Furthermore, they can be considered as reductions of a higher dimensional integrable Yang-Baxter map on an invariant manifold. In this framework, we study the integrability of transfer maps that represent particular periodic sequences of collisions.
Black Hole Entropy from Bondi-Metzner-Sachs Symmetry at the Horizon.
Carlip, S
2018-03-09
Near the horizon, the obvious symmetries of a black hole spacetime (the horizon-preserving diffeomorphisms) are enhanced to a larger symmetry group with a three-dimensional Bondi-Metzner-Sachs algebra. Using dimensional reduction and covariant phase space techniques, I investigate this augmented symmetry and show that it is strong enough to determine the black hole entropy in any dimension.
Chaos in plasma simulation and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, C.; Newman, D.E.; Sprott, J.C.
1993-09-01
We investigate the possibility that chaos and simple determinism govern the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
Online dimensionality reduction using competitive learning and Radial Basis Function network.
Tomenko, Vladimir
2011-06-01
The general purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing nonlinearly embedded manifolds and large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR is comprised of two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of specific loss function and synthesis of the training algorithm for Radial Basis Function network results in global preservation of data structures and online processing of new patterns. The RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), neural network for Sammon's projection (SAMANN) and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics. Copyright © 2011 Elsevier Ltd. All rights reserved.
Inflation from extra dimensions
NASA Astrophysics Data System (ADS)
Levin, Janna J.
1995-02-01
A gravity-driven inflation is shown to arise from a simple higher-dimensional universe. In vacuum, the shear of n > 1 contracting dimensions is able to inflate the remaining three spatial dimensions. Said another way, the expansion of the 3-volume is accelerated by the contraction of the n-volume. Upon dimensional reduction, the theory is equivalent to a four-dimensional cosmology with a dynamical Planck mass. A connection can therefore be made to recent examples of inflation powered by a dilaton kinetic energy. Unfortunately, the graceful exit problem encountered in dilaton cosmologies will haunt this cosmology as well.
Min, Yuho; Seo, Ho Jun; Choi, Jong-Jin; Hahn, Byung-Dong; Moon, Geon Dae
2018-08-24
As part of the oxygen family, chalcogen (Se, Te) nanostructures have been considered important elements for various practical fields and have been further exploited to constitute metal chalcogenides for each targeted application. Here, we report a controlled synthesis of well-defined one-dimensional chalcogen nanostructures such as nanowires, nanorods, and nanotubes by controlling the reduction reaction rate to fine-tune the dimension and composition of the products. Tunable optical properties (localized surface plasmon resonances) of these chalcogen nanostructures are observed depending on their morphological, dimensional, and compositional variation.
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the proposed two and constitutes the solution of how they can work together. PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is separately calculated for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN, for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that the LSA can be used as an alternative PNN reduction approach.
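A minimal probabilistic neural network of the kind being pruned in this work, with a product kernel whose factors are one-dimensional Cauchy functions with per-dimension smoothing parameters, is sketched below; the bandwidths are fixed constants rather than plug-in estimates, and the sensitivity-analysis reduction itself is not shown.

    # Sketch of a PNN with a product Cauchy kernel: each pattern neuron contributes
    # prod_j 1 / (1 + ((x_j - t_j) / h_j)^2); the class with the largest mean
    # activation wins. Bandwidths h are fixed here, not plug-in estimates.
    import numpy as np

    def pnn_predict(x, patterns, labels, h):
        scores = {}
        for c in np.unique(labels):
            P = patterns[labels == c]
            k = np.prod(1.0 / (1.0 + ((x - P) / h) ** 2), axis=1)
            scores[c] = k.mean()
        return max(scores, key=scores.get)

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
    y = np.array([0] * 50 + [1] * 50)
    print(pnn_predict(np.array([1.8, 2.1, 1.9, 2.2]), X, y, np.full(4, 0.5)))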
The role of floodplain restoration in mitigating flood risk, Lower Missouri River, USA
Jacobson, Robert B.; Lindner, Garth; Bitner, Chance; Hudson, Paul F.; Middelkoop, Hans
2015-01-01
Recent extreme floods on the Lower Missouri River have reinvigorated public policy debate about the potential role of floodplain restoration in decreasing costs of floods and possibly increasing other ecosystem service benefits. The first step to addressing the benefits of floodplain restoration is to understand the interactions of flow, floodplain morphology, and land cover that together determine the biophysical capacity of the floodplain. In this article we address interactions between ecological restoration of floodplains and flood-risk reduction at 3 scales. At the scale of the Lower Missouri River corridor (1300 km) floodplain elevation datasets and flow models provide first-order calculations of the potential for Missouri River floodplains to store floods of varying magnitude and duration. At this same scale assessment of floodplain sand deposition from the 2011 Missouri River flood indicates the magnitude of flood damage that could potentially be limited by floodplain restoration. At the segment scale (85 km), 1-dimensional hydraulic modeling predicts substantial stage reductions with increasing area of floodplain restoration; mean stage reductions range from 0.12 to 0.66 m. This analysis also indicates that channel widening may contribute substantially to stage reductions as part of a comprehensive strategy to restore floodplain and channel habitats. Unsteady 1-dimensional flow modeling of restoration scenarios at this scale indicates that attenuation of peak discharges of an observed hydrograph from May 2007, of similar magnitude to a 10 % annual exceedance probability flood, would be minimal, ranging from 0.04 % (with 16 % floodplain restoration) to 0.13 % (with 100 % restoration). At the reach scale (15–20 km) 2-dimensional hydraulic models of alternative levee setbacks and floodplain roughness indicate complex processes and patterns of flooding including substantial variation in stage reductions across floodplains depending on topographic complexity and hydraulic roughness. Detailed flow patterns captured in the 2-dimensional model indicate that most floodplain storage occurs on the rising limb of the flood as water flows into floodplain bottoms from downstream; at a later time during the rising limb this pattern is reversed and the entire bottom conveys discharge down the valley. These results indicate that flood-risk reduction by attenuation is likely to be small on a large river like the Missouri and design strategies to optimize attenuation and ecological restoration should focus on frequent floods (20–50 % annual exceedance probability). Local stage reductions are a more certain benefit of floodplain restoration but local effects are highly dependent on magnitude of flood discharge and how floodplain vegetation communities contribute to hydraulic roughness. The most certain flood risk reduction benefit of floodplain restoration is avoidance of flood damages to crops and infrastructure.
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.; Davis, T. A.
2016-12-01
Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
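One way to realize the graph-based decomposition sketched in this abstract is to view the sparse sensitivity matrix as a bipartite row-column graph and split it into independent blocks with a connected-components search; the matrix below is synthetic, and the decomposition actually used in the study may differ.

    # Sketch: split a sparse least-squares system into independent sub-problems by
    # finding connected components of the bipartite row-column graph of A.
    from scipy.sparse import random as sparse_random, bmat
    from scipy.sparse.csgraph import connected_components

    A = sparse_random(300, 200, density=0.01, random_state=0, format="csr")
    n_rows, n_cols = A.shape

    adj = bmat([[None, A], [A.T, None]], format="csr")   # bipartite adjacency
    n_comp, labels = connected_components(adj, directed=False)
    col_labels = labels[n_rows:]       # component id of every model parameter
    print("independent blocks:", n_comp)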
Light-cone reduction vs. TsT transformations: a fluid dynamics perspective
NASA Astrophysics Data System (ADS)
Dutta, Suvankar; Krishna, Hare
2018-05-01
We compute constitutive relations for a charged (2+1) dimensional Schrödinger fluid up to first order in the derivative expansion, using holographic techniques. Starting with a locally boosted, asymptotically AdS, 4 + 1 dimensional charged black brane geometry, we uplift it to ten dimensions and perform TsT transformations to obtain an effective five dimensional local black brane solution with asymptotically Schrödinger isometries. By suitably implementing the holographic techniques, we compute the constitutive relations for the effective fluid living on the boundary of this space-time and extract first order transport coefficients from these relations. A Schrödinger fluid can also be obtained by reducing a charged relativistic conformal fluid over the light-cone. It turns out that both approaches result in the same system at the end. The fluid obtained by light-cone reduction satisfies a restricted class of thermodynamics. Here, we see that the charged fluid obtained holographically also belongs to the same restricted class.
Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.
2000-01-01
We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure, when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
Holocinematographic velocimeter for measuring time-dependent, three-dimensional flows
NASA Technical Reports Server (NTRS)
Beeler, George B.; Weinstein, Leonard M.
1987-01-01
Two simultaneous, orthogonal-axis holographic movies are made of tracer particles in a low-speed water tunnel to determine the time-dependent, three-dimensional velocity field. This instrument is called a Holocinematographic Velocimeter (HCV). The holographic movies are reduced to the velocity field with an automatic data reduction system. This permits the reduction of large numbers of holograms (time steps) in a reasonable amount of time. The current version of the HCV, built for proof-of-concept tests, uses low-frame-rate holographic cameras and a prototype of a new type of water tunnel. This water tunnel is a unique low-disturbance facility which has minimal wall effects on the flow. This paper presents the first flow field examined by the HCV, the two-dimensional von Karman vortex street downstream of an unswept circular cylinder. Key factors in the HCV are flow speed, spatial and temporal resolution required, measurement volume, film transport speed, and laser pulse length. The interactions between these factors are discussed.
Reductions in finite-dimensional integrable systems and special points of classical r-matrices
NASA Astrophysics Data System (ADS)
Skrypnyk, T.
2016-12-01
For a given 𝔤 ⊗ 𝔤-valued non-skew-symmetric non-dynamical classical r-matrix r(u, v) with spectral parameters, we construct the general form of 𝔤-valued Lax matrices of finite-dimensional integrable systems satisfying the linear r-matrix algebra. We show that the reduction in the corresponding finite-dimensional integrable systems is connected with "the special points" of the classical r-matrices at which they become degenerate. We also propose a systematic way of constructing additional integrals of the Lax-integrable systems associated with the symmetries of the corresponding r-matrices. We consider examples of the Lax matrices and integrable systems that are obtained in the framework of the general scheme. Among them are such physically important systems as generalized Gaudin systems in an external magnetic field, the ultimate integrable generalization of Toda-type chains (including "modified" or "deformed" Toda chains), generalized integrable Jaynes-Cummings-Dicke models, integrable boson models generalizing Bose-Hubbard dimer models, etc.
Russo, Mario S; Drago, Fabrizio; Silvetti, Massimo S; Righi, Daniela; Di Mambro, Corrado; Placidi, Silvia; Prosperi, Monica; Ciani, Michele; Naso Onofrio, Maria T; Cannatà, Vittorio
2016-06-01
Aim: Transcatheter cryoablation is a well-established technique for the treatment of atrioventricular nodal re-entry tachycardia and atrioventricular re-entry tachycardia in children. Fluoroscopy or three-dimensional mapping systems can be used to perform the ablation procedure. The aim of this study was to compare the success rate of cryoablation procedures for the treatment of right septal accessory pathways and atrioventricular nodal re-entry circuits in children using conventional or three-dimensional mapping and to evaluate whether three-dimensional mapping was associated with a reduced patient radiation dose compared with traditional mapping. In 2013, 81 children underwent transcatheter cryoablation at our institution, using conventional mapping in 41 children - 32 atrioventricular nodal re-entry tachycardia and nine atrioventricular re-entry tachycardia - and three-dimensional mapping in 40 children - 24 atrioventricular nodal re-entry tachycardia and 16 atrioventricular re-entry tachycardia. Using conventional mapping, the overall success rate was 78.1 and 66.7% in patients with atrioventricular nodal re-entry tachycardia or atrioventricular re-entry tachycardia, respectively. Using three-dimensional mapping, the overall success rate was 91.6 and 75%, respectively (p=ns). The use of three-dimensional mapping was associated with a reduction in cumulative air kerma and cumulative air kerma-area product of 76.4 and 67.3%, respectively (p<0.05). The use of three-dimensional mapping compared with the conventional fluoroscopy-guided method for cryoablation of right septal accessory pathways and atrioventricular nodal re-entry circuits in children was associated with a significant reduction in patient radiation dose, without a significant increase in success rate.
Discriminant locality preserving projections based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu; Li, Defang
2014-11-01
Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.
Dimensional reduction of a general advection–diffusion equation in 2D channels
NASA Astrophysics Data System (ADS)
Kalinay, Pavol; Slanina, František
2018-06-01
Diffusion of point-like particles in a two-dimensional channel of varying width is studied. The particles are driven by an arbitrary space dependent force. We construct a general recurrence procedure mapping the corresponding two-dimensional advection-diffusion equation onto the longitudinal coordinate x. Unlike the previous specific cases, the presented procedure enables us to find the one-dimensional description of the confined diffusion even for non-conservative (vortex) forces, e.g. those caused by a flowing solvent dragging the particles. We show that the result is again the generalized Fick–Jacobs equation. Although no scalar potential exists in the case of vortex forces, the effective one-dimensional scalar potential, as well as the corresponding quasi-equilibrium and the effective diffusion coefficient, can always be found.
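For reference, the classical Fick–Jacobs form to which such mappings reduce in the purely conservative, force-free case reads as follows, with A(x) the local channel width and p(x, t) the marginal density along the channel axis; the generalized coefficients derived in the paper are not reproduced here.

    \partial_t\, p(x,t) \;=\; \partial_x\!\left[ D(x)\, A(x)\, \partial_x \frac{p(x,t)}{A(x)} \right],
    \qquad U_{\mathrm{eff}}(x) \;=\; -\,k_B T \,\ln A(x),

where U_{eff}(x) plays the role of the effective one-dimensional (entropic) potential in the absence of an external force.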
Phulpoto, Anwar Hussain; Qazi, Muneer Ahmed; Haq, Ihsan Ul; Phul, Abdul Rahman; Ahmed, Safia; Kanhar, Nisar Ahmed
2018-06-01
The present study validates the oil-based paint bioremediation potential of Bacillus subtilis NAP1 through ecotoxicological assessment using a three-dimensional multi-species bio-testing model. The model included bioassays to determine the phytotoxic, cytotoxic, and antimicrobial effects of oil-based paint. Additionally, the antioxidant activity of pre- and post-bioremediation samples was also measured to confirm detoxification. The pre-bioremediation samples of oil-based paint displayed significant toxicity against all the life forms. Post-bioremediation, however, the cytotoxic effect against Artemia salina revealed substantial detoxification of oil-based paint, with LD50 of 121 μl ml-1 (without glucose) and > 400 μl ml-1 (with glucose). Similarly, the reduction in toxicity against Raphanus raphanistrum seed germination (%FG = 98 to 100%) was also evidence of successful detoxification under experimental conditions. Moreover, the toxicity against the test bacterial and fungal strains was completely removed after bioremediation. In addition, the post-bioremediation samples showed reduced antioxidant activities (% scavenging = 23.5 ± 0.35 and 28.9 ± 2.7) without and with glucose, respectively. Convincingly, the present multi-species bio-testing model, in addition to antioxidant studies, could be suggested as a validation tool for bioremediation experiments, especially for middle- and low-income countries.
NASA Astrophysics Data System (ADS)
Seadawy, Aly R.
2017-01-01
The propagation of long waves in a three-dimensional nonlinear irrotational flow of an inviscid and incompressible fluid is analyzed in the dispersive shallow-water approximation. The problem formulation for long waves in the dispersive shallow-water approximation leads to the fifth-order Kadomtsev-Petviashvili (KP) dynamical equation by applying the reductive perturbation theory. By using an extended auxiliary equation method, the solitary travelling-wave solutions of the two-dimensional nonlinear fifth-order KP dynamical equation are derived. Analytical as well as numerical solutions of the two-dimensional nonlinear KP equation are obtained and analyzed with respect to the effects of external pressure flow.
Yan, Zhenya; Konotop, V V
2009-09-01
It is shown that using the similarity transformations, a set of three-dimensional p-q nonlinear Schrödinger (NLS) equations with inhomogeneous coefficients can be reduced to one-dimensional stationary NLS equation with constant or varying coefficients, thus allowing for obtaining exact localized and periodic wave solutions. In the suggested reduction the original coordinates in the (1+3) space are mapped into a set of one-parametric coordinate surfaces, whose parameter plays the role of the coordinate of the one-dimensional equation. We describe the algorithm of finding solutions and concentrate on power (linear and nonlinear) potentials presenting a number of case examples. Generalizations of the method are also discussed.
Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition.
Wu, Panpan; Xia, Kewen; Yu, Hengyong
2016-11-01
Dimensionality reduction techniques are developed to suppress the negative effects of the high dimensional feature space of lung CT images on classification performance in computer aided detection (CAD) systems for pulmonary nodule detection. An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of the correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points are identified, and thus to enhance the discriminating power of the embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC(2)SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Particularly, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. The SC(2)SLLE, as well as the SLLE and LLE algorithms, are applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates the great potential to improve the performance of a CAD system for nodule detection using the proposed SC(2)SLLE. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
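The central modification in SC(2)SLLE, choosing LLE neighbors with a distance adjusted by Spearman's rank correlation instead of plain Euclidean distance, can be sketched as follows; the specific way the correlation enters the supervised distance in the paper is not reproduced, so the simple down-weighting used here is a hypothetical stand-in.

    # Sketch: neighbor selection for an LLE-style embedding with a distance that is
    # down-weighted when two feature vectors are strongly rank-correlated
    # (hypothetical weighting; the paper's supervised distance differs in detail).
    import numpy as np
    from scipy.stats import spearmanr

    def correlation_adjusted_neighbors(X, k=5):
        n = X.shape[0]
        neighbors = []
        for i in range(n):
            d = np.linalg.norm(X - X[i], axis=1)
            rho = np.array([spearmanr(X[i], X[j])[0] for j in range(n)])
            adj = d * (1.0 - np.nan_to_num(rho))   # shrink distance for correlated pairs
            adj[i] = np.inf                        # exclude the point itself
            neighbors.append(np.argsort(adj)[:k])
        return np.array(neighbors)

    rng = np.random.default_rng(0)
    features = rng.normal(size=(40, 34))           # 40 nodule candidates, 34 features
    print(correlation_adjusted_neighbors(features)[0])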
Gender Recognition from Unconstrained and Articulated Human Body
Wu, Qin; Guo, Guodong
2014-01-01
Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition. PMID:24977203
Based on user interest level of modeling scenarios and browse content
NASA Astrophysics Data System (ADS)
Zhao, Yang
2017-08-01
User interest modeling is the core of personalized services and must take into account the impact of situational (contextual) information on user preferences. This paper proposes a user interest modeling method based on scenario information: the set of stored scenarios approximating the user's current scenario is obtained by computing situational similarity, and a situation pre-filtering method is applied to the three-dimensional "user - interest items - scenarios" model for dimension reduction. The topics the user is interested in are identified from the browsed content, keywords for each topic of interest are extracted by analyzing the page content, and a vector space model is used to represent the user's level of interest. The experimental results show that the prediction error of the scenario-based user interest model is within 9% of the user's actual interest, which demonstrates its effectiveness.
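The scenario-similarity step described above, retrieving the stored scenarios closest to the user's current context before pre-filtering the interest model, can be sketched with a simple cosine similarity over numeric context vectors; the context attributes and the threshold are illustrative assumptions.

    # Sketch: find stored scenarios similar to the current one by cosine similarity
    # over numeric context vectors (attributes and threshold are illustrative).
    import numpy as np

    def similar_scenarios(current, stored, threshold=0.8):
        cur = current / np.linalg.norm(current)
        sims = stored @ cur / np.linalg.norm(stored, axis=1)
        return np.flatnonzero(sims >= threshold), sims

    rng = np.random.default_rng(0)
    stored = rng.random((20, 4))      # e.g., time-of-day, location, device, weekday
    current = rng.random(4)
    idx, sims = similar_scenarios(current, stored)
    print("approximate scenario set:", idx)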
2.5D Multi-View Gait Recognition Based on Point Cloud Registration
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-01-01
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
The improved electrochemical performance of cross-linked 3D graphene nanoribbon monolith electrodes
NASA Astrophysics Data System (ADS)
Vineesh, Thazhe Veettil; Alwarappan, Subbiah; Narayanan, Tharangattu N.
2015-04-01
Technical advancement in the field of ultra-small sensors and devices demands the development of novel micro- or nano-based architectures. Here we report the design and assembly of cross-linked three dimensional graphene nanoribbons (3D GNRs) using solution based covalent binding of individual 2D GNRs and demonstrate its electrochemical application as a 3D electrode. The enhanced performance of 3D GNRs over individual 2D GNRs is established using standard redox probes - [Ru(NH3)6]3+/2+, [Fe(CN)6]3-/4- - and important bio-analytes - dopamine and ascorbic acid. 3D GNRs are found to have high double layer capacitance (2482 μF cm-2) and faster electron transfer kinetics; their exceptional electrocatalytic activity towards the oxygen reduction reaction is indicative of their potential over a wide range of electrochemical applications. Moreover, this study opens a new platform for the design of novel point-of-care devices and electrodes for energy devices. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr07315k
Charged black holes in compactified spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karlovini, Max; Unge, Rikard von
2005-11-15
We construct and investigate a compactified version of the four-dimensional Reissner-Nordstroem-Taub-NUT solution, generalizing the compactified Schwarzschild black hole that has been previously studied by several workers. Our approach to compactification is based on dimensional reduction with respect to the stationary Killing vector, resulting in three-dimensional gravity coupled to a nonlinear sigma model. Knowing that the original noncompactified solution corresponds to a target space geodesic, the problem can be linearized much in the same way as in the case of no electric or Taub-NUT charge. An interesting feature of the solution family is that, for nonzero electric charge but vanishing Taub-NUT charge, the solution has a curvature singularity on a torus that surrounds the event horizon, but this singularity is removed when the Taub-NUT charge is switched on. We also treat the Schwarzschild case in a more complete way than has been done previously. In particular, the asymptotic solution (the Levi-Civita solution with the height coordinate made periodic) has to our knowledge only been calculated up to a determination of the mass parameter. The periodic Levi-Civita solution contains three essential parameters, however, and the remaining two are explicitly calculated here.
Wilhite, D. Ray; White, Matt A.; Wroe, Stephen
2017-01-01
Digital dissection is a relatively new technique that has enabled scientists to gain a better understanding of vertebrate anatomy. It can be used to rapidly disseminate detailed, three-dimensional information in an easily accessible manner that reduces the need for destructive, traditional dissections. Here we present the results of a digital dissection on the appendicular musculature of the Australian estuarine crocodile (Crocodylus porosus). A better understanding of this until now poorly known system in C. porosus is important, not only because it will expand research into crocodilian locomotion, but because of its potential to inform muscle reconstructions in dinosaur taxa. Muscles of the forelimb and hindlimb are described and three-dimensional interactive models are included based on CT and MRI scans as well as fresh-tissue dissections. Differences in the arrangement of musculature between C. porosus and other groups within the Crocodylia were found. In the forelimb, differences are restricted to a single tendon of origin for triceps longus medialis. For the hindlimb, a reduction in the number of heads of ambiens was noted as well as changes to the location of origin and insertion for iliofibularis and gastrocnemius externus. PMID:28384201
A three-dimensional Navier-Stokes stage analysis of the flow through a compact radial turbine
NASA Technical Reports Server (NTRS)
Heidmann, James D.
1991-01-01
A steady, three-dimensional Navier-Stokes average passage computer code is used to analyze the flow through a compact radial turbine stage. The code is based upon the average passage set of equations for turbomachinery, whereby the flow fields for all passages in a given blade row are assumed to be identical while retaining their three-dimensionality. A stage solution is achieved by alternating between stator and rotor calculations, while coupling the two solutions by means of a set of axisymmetric body forces which model the absent blade row. Results from the stage calculation are compared with experimental data and with results from an isolated rotor solution having axisymmetric inlet flow quantities upstream of the vacated stator space. Although the mass-averaged loss through the rotor is comparable for both solutions, the details of the loss distribution differ due to stator effects. The stage calculation predicts smaller spanwise variations in efficiency, in closer agreement with the data. The results of the study indicate that stage analyses hold promise for improved prediction of loss mechanisms in multi-blade row turbomachinery, which could lead to improved designs through the reduction of these losses.
Exact Holography of Massive M2-brane Theories and Entanglement Entropy
NASA Astrophysics Data System (ADS)
Jang, Dongmin; Kim, Yoonbai; Kwon, O.-Kab; Tolla, D. D.
2018-01-01
We test the gauge/gravity duality between the N = 6 mass-deformed ABJM theory with U_k(N) × U_-k(N) gauge symmetry and the 11-dimensional supergravity on LLM geometries with SO(4)/ℤ_k × SO(4)/ℤ_k isometry. Our analysis is based on the evaluation of vacuum expectation values of chiral primary operators from the supersymmetric vacua of the mass-deformed ABJM theory and from the implementation of Kaluza-Klein (KK) holography to the LLM geometries. We focus on the chiral primary operator (CPO) with conformal dimension Δ = 1. The non-vanishing vacuum expectation value (vev) implies the breaking of conformal symmetry. In that case, we show that the variation of the holographic entanglement entropy (HEE) from its value in the CFT is related to the non-vanishing one-point function due to the relevant deformation as well as the source field. Applying the Ryu-Takayanagi HEE conjecture to the 4-dimensional gravity solutions, which are obtained from the KK reduction of the 11-dimensional LLM solutions, we calculate the variation of the HEE. We show how the vev and the value of the source field determine the HEE.
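For context, the Ryu-Takayanagi prescription invoked above identifies the entanglement entropy of a boundary region A with the area of the minimal bulk surface γ_A anchored on and homologous to A. In its standard form (the general conjecture, not a result specific to this paper),

    S_A = Area(γ_A) / (4 G_N),

where G_N is the Newton constant of the bulk gravity theory.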
Webb, C A; Weber, M; Mundy, E A; Killgore, W D S
2014-10-01
Studies investigating structural brain abnormalities in depression have typically employed a categorical rather than dimensional approach to depression [i.e., comparing subjects with Diagnostic and Statistical Manual of Mental Disorders (DSM)-defined major depressive disorder (MDD) v. healthy controls]. The National Institute of Mental Health, through their Research Domain Criteria initiative, has encouraged a dimensional approach to the study of psychopathology as opposed to an over-reliance on categorical (e.g., DSM-based) diagnostic approaches. Moreover, subthreshold levels of depressive symptoms (i.e., severity levels below DSM criteria) have been found to be associated with a range of negative outcomes, yet have been relatively neglected in neuroimaging research. To examine the extent to which depressive symptoms--even at subclinical levels--are linearly related to gray matter volume reductions in theoretically important brain regions, we employed whole-brain voxel-based morphometry in a sample of 54 participants. The severity of mild depressive symptoms, even in a subclinical population, was associated with reduced gray matter volume in the orbitofrontal cortex, anterior cingulate, thalamus, superior temporal gyrus/temporal pole and superior frontal gyrus. A conjunction analysis revealed concordance across two separate measures of depression. Reduced gray matter volume in theoretically important brain regions can be observed even in a sample that does not meet DSM criteria for MDD, but who nevertheless report relatively elevated levels of depressive symptoms. Overall, these findings highlight the need for additional research using dimensional conceptual and analytic approaches, as well as further investigation of subclinical populations.
A three-dimensional non-isothermal model for a membraneless direct methanol redox fuel cell
NASA Astrophysics Data System (ADS)
Wei, Lin; Yuan, Xianxia; Jiang, Fangming
2018-05-01
In the membraneless direct methanol redox fuel cell (DMRFC), three-dimensional electrodes contribute to the reduction of methanol crossover and the open separator design lowers the system cost and extends its service life. In order to better understand the mechanisms of this configuration and further optimize its performance, the development of a three-dimensional numerical model is reported in this work. The governing equations of the multi-physics field are solved based on computational fluid dynamics methodology, and the influence of the CO2 gas is taken into consideration through the effective diffusivities. The numerical results are in good agreement with experimental data, and the deviation observed for cases of large current density may be related to the single-phase assumption made. The three-dimensional electrode is found to be effective in controlling methanol crossover in its multi-layer structure, while it also increases the flow resistance for the discharging products. It is found that the current density distribution is affected by both the electronic conductivity and the concentration of reactants, and the temperature rise can be primarily attributed to the current density distribution. The sensitivity and reliability of the model are analyzed through the investigation of the effects of cell parameters, including porosity values of gas diffusion layers and catalyst layers, methanol concentration and CO2 volume fraction, on the polarization characteristics.
Two-dimensional nanostructured Y2O3 particles for viscosity modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xingliang; Xiao, Huaping; Liang, Hong, E-mail: hliang@tamu.edu
Nanoparticle additives have been shown to improve the mechanical and transport properties of various liquids; however, little has been done to explain, from a theoretical standpoint, the rheological changes that such additives provide. Here, we report a non-Einstein-like reduction of viscosity of mineral oil with the utilization of yttrium oxide nanosheet additives. Experimental results, coupled with generalized smoothed-particle hydrodynamics simulations, provide insight into the mechanism behind this reduction of fluid shear stress. The ordered inclination of these two-dimensional nanoparticle additives markedly improves the lubricating properties of the mineral oil, ultimately reducing the friction and providing a way toward designing and understanding the next generation of lubricants.
NASA Astrophysics Data System (ADS)
Akarsu, Özgür; Dereli, Tekin; Katırcı, Nihan; Sheftel, Mikhail B.
2015-05-01
In a recent study Akarsu and Dereli (Gen. Relativ. Gravit. 45:1211, 2013) discussed the dynamical reduction of a higher-dimensional cosmological model which is augmented by a kinematical constraint characterized by a single real parameter, correlating and controlling the expansion of both the external (physical) and internal spaces. In that paper explicit solutions were found only for the case of a three-dimensional internal space. Here we derive a general solution of the system using Lie group symmetry properties, in parametric form for an arbitrary number of internal dimensions. We also investigate the dynamical reduction of the model as a function of cosmic time for various numbers of internal dimensions and generate parametric plots to discuss cosmologically relevant results.
ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction
Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo
2015-01-01
By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302
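As a point of comparison, the Python sketch below (illustrative names; not the analytical search-space reduction proposed in the paper) extracts maxima of an antipodally symmetric spherical function by exhaustive search over a dense hemisphere grid, which is the brute-force alternative whose cost the one-dimensional reduction is designed to avoid.

import numpy as np

def odf_example(dirs):
    # Antipodally symmetric toy ODF: sum of even powers of alignments with two fiber directions.
    fibers = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    return ((dirs @ fibers.T) ** 4).sum(axis=1)

def brute_force_maxima(odf, n_theta=180, n_phi=360, top_k=2):
    # Sample the upper hemisphere only (antipodal symmetry makes the lower half redundant).
    theta = np.linspace(0.0, np.pi / 2, n_theta)                 # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)   # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1).reshape(-1, 3)
    vals = odf(dirs)
    order = np.argsort(vals)[::-1]
    maxima = []
    for idx in order:                                            # greedy non-maximum suppression
        d = dirs[idx]
        if all(abs(d @ m) < 0.95 for m in maxima):               # keep directions well separated
            maxima.append(d)
        if len(maxima) == top_k:
            break
    return np.array(maxima)

print(brute_force_maxima(odf_example))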
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2016-12-01
Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, PC surrogate construction suffers strongly from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate with very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
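The sketch below is illustrative only and is not the WIBCS algorithm: it fits a sparse PC surrogate by ordinary L1-regularized regression (scikit-learn's Lasso) on an orthonormal Legendre basis, then reads first-order Sobol sensitivity indices off the surviving coefficients, which is the kind of variance-based decomposition such surrogates enable. The toy model and all parameter choices are placeholders.

import itertools
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.linear_model import Lasso

def orthonormal_legendre(x, degree):
    # Legendre polynomial of the given degree, normalized to unit variance
    # under the uniform measure on [-1, 1].
    c = np.zeros(degree + 1)
    c[degree] = 1.0
    return np.sqrt(2 * degree + 1) * legval(x, c)

def pc_basis(X, total_degree):
    # Tensorized multivariate basis with total degree <= total_degree.
    dim = X.shape[1]
    multi_indices = [m for m in itertools.product(range(total_degree + 1), repeat=dim)
                     if sum(m) <= total_degree]
    Phi = np.column_stack([
        np.prod([orthonormal_legendre(X[:, j], mj) for j, mj in enumerate(m)], axis=0)
        for m in multi_indices])
    return Phi, multi_indices

def model(X):
    # Toy "expensive" model: input 1 dominates, input 3 is irrelevant.
    return np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = model(X)

Phi, multi_indices = pc_basis(X, total_degree=4)
coef = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Phi, y).coef_

# First-order Sobol indices: share of variance carried by terms involving only input i.
total_var = sum(c ** 2 for c, m in zip(coef, multi_indices) if any(m))
for i in range(X.shape[1]):
    var_i = sum(c ** 2 for c, m in zip(coef, multi_indices)
                if m[i] > 0 and all(mj == 0 for j, mj in enumerate(m) if j != i))
    print(f"S_{i + 1} ~ {var_i / total_var:.2f}")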
The current status and future prospects of computer-assisted hip surgery.
Inaba, Yutaka; Kobayashi, Naomi; Ike, Hiroyuki; Kubota, So; Saito, Tomoyuki
2016-03-01
The advances in computer assistance technology have allowed detailed three-dimensional preoperative planning and simulation of preoperative plans. The use of a navigation system as an intraoperative assistance tool allows more accurate execution of the preoperative plan, compared to manual operation without assistance of the navigation system. In total hip arthroplasty using CT-based navigation, three-dimensional preoperative planning with computer software allows the surgeon to determine the optimal angle of implant placement at which implant impingement is unlikely to occur in the range of hip joint motion necessary for daily activities of living, and to determine the amount of three-dimensional correction for leg length and offset. With the use of computer navigation for intraoperative assistance, the preoperative plan can be precisely executed. In hip osteotomy using CT-based navigation, the navigation allows three-dimensional preoperative planning, intraoperative confirmation of osteotomy sites, safe performance of osteotomy even under poor visual conditions, and a reduction in exposure doses from intraoperative fluoroscopy. Positions of the tips of chisels can be displayed on the computer monitor during surgery in real time, and staff other than the operator can also be aware of the progress of surgery. Thus, computer navigation also has an educational value. On the other hand, its limitations include the need for placement of trackers, increased radiation exposure from preoperative CT scans, and prolonged operative time. Moreover, because the position of a bone fragment cannot be traced after osteotomy, methods to find its precise position after its movement need to be developed. Despite the need to develop methods for the postoperative evaluation of accuracy for osteotomy, further application and development of these systems are expected in the future. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves to individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
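A bare-bones illustration of the preprocessing idea (Python; hypothetical array names and sizes, not the authors' hierarchical high-/low-level algorithm): project high-dimensional pixel spectra onto a few mutually independent sources with ICA, then cluster in the reduced source space.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import AgglomerativeClustering

# Hypothetical hyperspectral cube: 60 x 60 pixels, 100 spectral bands.
rng = np.random.default_rng(1)
cube = rng.random((60, 60, 100))
pixels = cube.reshape(-1, cube.shape[-1])            # (n_pixels, n_bands)

# Dimensionality reduction: keep a handful of independent sources.
sources = FastICA(n_components=6, random_state=1).fit_transform(pixels)

# Low-level agglomerative clustering in the reduced source space.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(sources)
label_map = labels.reshape(cube.shape[:2])            # per-pixel cluster assignments
print(np.bincount(labels))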
Nonlinear structures: Cnoidal, soliton, and periodical waves in quantum semiconductor plasma
NASA Astrophysics Data System (ADS)
Tolba, R. E.; El-Bedwehy, N. A.; Moslem, W. M.; El-Labany, S. K.; Yahia, M. E.
2016-01-01
Properties and emerging conditions of various nonlinear acoustic waves in a three dimensional quantum semiconductor plasma are explored. A plasma fluid model characterized by degenerate pressures, exchange correlation, and quantum recoil forces is established and solved. Our analysis approach is based on the reductive perturbation theory for deriving the Kadomtsev-Petviashvili equation from the fluid model and solving it by using Painlevé analysis to come up with different nonlinear solutions that describe different pulse profiles such as cnoidal, soliton, and periodical pulses. The model is then employed to recognize the possible perturbations in GaN semiconductor.
Robust 2DPCA with non-greedy l1 -norm maximization for image analysis.
Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli
2015-05-01
2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied because the l1-norm maximization problem is difficult to solve directly, but this strategy is prone to getting stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
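For orientation, here is a minimal Python sketch of conventional 2DPCA, the L2-norm baseline that 2DPCA-L1 and the non-greedy variant above make robust; the array shapes and names are illustrative and this is not the authors' algorithm.

import numpy as np

def twodpca(images, n_components):
    # images: (n_samples, h, w). Conventional 2DPCA works on an image covariance
    # built from rows, without flattening each image to a vector.
    mean = images.mean(axis=0)
    centered = images - mean
    # Average image scatter matrix: (1/n) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w).
    G = np.einsum('nhw,nhv->wv', centered, centered) / images.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G)
    W = eigvecs[:, ::-1][:, :n_components]        # top eigenvectors as projection axes
    features = images @ W                          # (n_samples, h, n_components)
    return features, W, mean

# Toy usage on random "images".
rng = np.random.default_rng(0)
imgs = rng.random((50, 32, 32))
feats, W, mean = twodpca(imgs, n_components=5)
print(feats.shape)   # (50, 32, 5)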
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this bad scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
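As a rough illustration of the simplex-element idea, and not the SSC algorithm itself, the Python sketch below (all names and the toy model are placeholders) triangulates a small set of collocation points with a Delaunay mesh, builds a piecewise-linear surrogate over the simplices, and propagates an input distribution through the cheap surrogate.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def expensive_code(x):
    # Placeholder for the computer code under study (two uncertain inputs).
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)

# Collocation points in the 2-D input space; corners keep samples inside the convex hull.
corners = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
nodes = np.vstack([corners, rng.uniform(-1.0, 1.0, size=(36, 2))])
values = expensive_code(nodes)

# Piecewise-linear surrogate over the Delaunay simplices.
surrogate = LinearNDInterpolator(Delaunay(nodes), values)

# Uncertainty propagation: push many cheap samples through the surrogate.
samples = rng.uniform(-0.8, 0.8, size=(100000, 2))
outputs = surrogate(samples)
print(outputs.mean(), outputs.std())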
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kishino, Katsumi, E-mail: kishino@sophia.ac.jp; Sophia Nanotechnology Research Center, Sophia University, 7-1 Kioi-cho, Chiyoda-ku, Tokyo 102-8554; Ishizawa, Shunsuke
Bottom-up grown structurally graded InGaN-based nanocolumn photonic crystals, in which nanocolumns were arranged in a triangular lattice and the nanocolumn diameter changed one-dimensionally from 93 to 213 nm with a fixed lattice constant of 250 nm, were fabricated. The spatial distribution of the diameter resulted in random-laser-like operation under optical excitation. A broad multi-wavelength lasing spectrum with more than 10 peaks was obtained with a full width at half maximum of 27 nm at 505 nm wavelength as well as a lowering of the polarization degree, which is expected to be suitable for speckle contrast reduction in laser projection display applications.
Heidbrink, William W.; Ferron, John R.; Holcomb, Christopher T.; ...
2014-08-21
Here, analysis of neutron and fast-ion Dα data from the DIII-D tokamak shows that Alfvén eigenmode activity degrades fast-ion confinement in many high β_N, high q_min, steady-state scenario discharges. (β_N is the normalized plasma pressure and q_min is the minimum value of the plasma safety factor.) Fast-ion diagnostics that are sensitive to the co-passing population exhibit the largest reduction relative to classical predictions. The increased fast-ion transport in discharges with strong AE activity accounts for the previously observed reduction in global confinement with increasing q_min; however, not all high q_min discharges show appreciable degradation. Two relatively simple empirical quantities provide convenient monitors of these effects: (1) an 'AE amplitude' signal based on interferometer measurements and (2) the ratio of the neutron rate to a zero-dimensional classical prediction.
A Computational Fluid Dynamic Model for a Novel Flash Ironmaking Process
NASA Astrophysics Data System (ADS)
Perez-Fontes, Silvia E.; Sohn, Hong Yong; Olivas-Martinez, Miguel
A computational fluid dynamic model for a novel flash ironmaking process based on the direct gaseous reduction of iron oxide concentrates is presented. The model solves the three-dimensional governing equations including both gas-phase and gas-solid reaction kinetics. The turbulence-chemistry interaction in the gas-phase is modeled by the eddy dissipation concept incorporating chemical kinetics. The particle cloud model is used to track the particle phase in a Lagrangian framework. A nucleation and growth kinetics rate expression is adopted to calculate the reduction rate of magnetite concentrate particles. Benchmark experiments reported in the literature for a nonreacting swirling gas jet and a nonpremixed hydrogen jet flame were simulated for validation. The model predictions showed good agreement with measurements in terms of gas velocity, gas temperature and species concentrations. The relevance of the computational model for the analysis of a bench reactor operation and the design of an industrial-pilot plant is discussed.
Imai, Takashi; Ohyama, Shusaku; Kovalenko, Andriy; Hirata, Fumio
2007-01-01
The partial molar volume (PMV) change associated with the pressure-induced structural transition of ubiquitin is analyzed by the three-dimensional reference interaction site model (3D-RISM) theory of molecular solvation. The theory predicts that the PMV decreases upon the structural transition, which is consistent with the experimental observation. The volume decomposition analysis demonstrates that the PMV reduction is primarily caused by the decrease in the volume of structural voids in the protein, which is partially canceled by the volume expansion due to the hydration effects. It is found from further analysis that the PMV reduction is ascribed substantially to the penetration of water molecules into a specific part of the protein. Based on the thermodynamic relation, this result implies that the water penetration causes the pressure-induced structural transition. It supports the water penetration model of pressure denaturation of proteins proposed earlier. PMID:17660257
Jung, Kyu-Nam; Hwang, Soo Min; Park, Min-Sik; Kim, Ki Jae; Kim, Jae-Geun; Dou, Shi Xue; Kim, Jung Ho; Lee, Jong-Won
2015-01-01
Rechargeable metal-air batteries are considered a promising energy storage solution owing to their high theoretical energy density. The major obstacles to realising this technology include the slow kinetics of oxygen reduction and evolution on the cathode (air electrode) upon battery discharging and charging, respectively. Here, we report non-precious metal oxide catalysts based on spinel-type manganese-cobalt oxide nanofibres fabricated by an electrospinning technique. The spinel oxide nanofibres exhibit high catalytic activity towards both oxygen reduction and evolution in an alkaline electrolyte. When incorporated as cathode catalysts in Zn-air batteries, the fibrous spinel oxides considerably reduce the discharge-charge voltage gaps (improve the round-trip efficiency) in comparison to the catalyst-free cathode. Moreover, the nanofibre catalysts remain stable over the course of repeated discharge-charge cycling; however, carbon corrosion in the catalyst/carbon composite cathode degrades the cycling performance of the batteries. PMID:25563733