Minimizers with Bounded Action for the High-Dimensional Frenkel-Kontorova Model
NASA Astrophysics Data System (ADS)
Miao, Xue-Qing; Wang, Ya-Nan; Qin, Wen-Xin
In Aubry-Mather theory for monotone twist maps or for the one-dimensional Frenkel-Kontorova (FK) model with nearest-neighbor interactions, each global minimizer (minimal energy configuration) is naturally Birkhoff. However, this is not true for the one-dimensional FK model with non-nearest-neighbor interactions or for the high-dimensional FK model. In this paper, we study the Birkhoff property of minimizers with bounded action for the high-dimensional FK model.
Sparse High Dimensional Models in Economics
Fan, Jianqing; Lv, Jinchi; Qi, Lei
2010-01-01
This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635
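Since the review above centers on penalized least squares, a minimal worked example may help fix ideas: the sketch below solves the lasso by cyclic coordinate descent with soft-thresholding. It is a generic textbook illustration under invented toy data (the function name `lasso_cd` and all parameters are assumptions made here), not code from the reviewed paper.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n           # per-coordinate curvature
    r = y - X @ b                               # current residual
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * b[j]              # remove coordinate j from the fit
            rho = X[:, j] @ r / n
            # soft-thresholding gives the exact 1-D minimizer
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r = r - X[:, j] * b[j]              # add coordinate j back
    return b

# Sparse recovery: only 2 of 20 coefficients are truly nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
beta = np.zeros(20)
beta[0], beta[3] = 3.0, -2.0
y = X @ beta + 0.01 * rng.standard_normal(50)
b = lasso_cd(X, y, lam=0.1)
```

The L1 penalty zeroes out the 18 irrelevant coefficients while (slightly shrunken) estimates of the two true signals survive, which is the variable-selection behavior the review discusses.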
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields, leading to high-dimensional output spaces. Although Gaussian process emulation has been used satisfactorily for computing faithful and inexpensive approximations of complex simulators, it has mostly been applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
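The general idea of emulating a simulator whose output is a spatial field can be sketched compactly: project the output fields onto a few SVD modes, then fit a Gaussian-process posterior mean (here a simple RBF-kernel interpolant) per retained mode. Everything below, the class name, kernel, jitter, and toy "simulator", is an assumption invented for illustration; the paper's actual dimension-reduction method differs in detail.

```python
import numpy as np

def rbf(A, B, ls):
    """Squared-exponential kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class ReducedGPEmulator:
    """GP emulation of a high-dimensional output field via a few SVD modes."""
    def __init__(self, n_modes=3, ls=0.3, jitter=1e-6):
        self.n_modes, self.ls, self.jitter = n_modes, ls, jitter

    def fit(self, X, Y):
        self.X = X
        self.mean = Y.mean(axis=0)
        _, _, Vt = np.linalg.svd(Y - self.mean, full_matrices=False)
        self.modes = Vt[: self.n_modes]              # principal output fields
        Z = (Y - self.mean) @ self.modes.T           # low-dim coefficients per run
        K = rbf(X, X, self.ls) + self.jitter * np.eye(len(X))
        self.alpha = np.linalg.solve(K, Z)           # GP weights, one column per mode
        return self

    def predict(self, Xs):
        Zs = rbf(Xs, self.X, self.ls) @ self.alpha   # posterior-mean coefficients
        return self.mean + Zs @ self.modes           # reconstruct the full field

# Toy "simulator": two scalar inputs mapped to a 200-point spatial field.
grid = np.linspace(0.0, 1.0, 200)
def simulator(x):
    return x[1] * np.sin(2.0 * np.pi * (grid + x[0]))

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(40, 2))
Y = np.array([simulator(x) for x in X])
pred = ReducedGPEmulator(n_modes=3, ls=0.3).fit(X, Y).predict(X[:5])
```

The surrogate works in a 3-dimensional coefficient space instead of the 200-dimensional field space, which is the cost saving that motivates output-dimension reduction.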
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer these approaches to high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when the response is artificially transformed into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bayesian Analysis of High Dimensional Classification
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Subhadeep; Liang, Faming
2009-12-01
Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables may be much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these settings, there is considerable interest in searching for sparse models in high-dimensional regression and classification. We first discuss two common challenges in analyzing high-dimensional data. The first is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, so these algorithms quickly become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which can severely slow down an algorithm. To make Bayesian analysis operational in high dimensions, we propose a novel hierarchical stochastic approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimensions, and possesses a self-adjusting mechanism for escaping local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can serve as a general model-selection sampler in high-dimensional complex model spaces.
NASA Astrophysics Data System (ADS)
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, and allows for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, and allows for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality; that is, their computational costs become intractable for problems involving a large number of uncertain parameters. In these situations, the classic Monte Carlo (MC) method often remains the method of choice because its convergence rate O(n^{-1/2}), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^{-1}), hence much faster than MC. The second algorithm, which does not require an exact SDR subspace, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
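The control-variate mechanism behind the second algorithm is simple to sketch in isolation. Below, exp(x) stands in for an expensive QoI and its second-order Taylor expansion plays the role of a hypothetical reduced model with a known mean; both choices are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)

f = np.exp(x)               # "expensive" QoI; its exact mean is exp(1/2)
g = 1.0 + x + 0.5 * x**2    # "reduced model": 2nd-order Taylor, known mean 1.5

c = np.cov(f, g)[0, 1] / np.var(g)      # estimated optimal control-variate weight
plain = f.mean()                        # ordinary MC estimate
cv = (f - c * (g - 1.5)).mean()         # control-variate estimate, same samples

exact = float(np.exp(0.5))
```

Because the reduced model is cheap and strongly correlated with the QoI, subtracting its centered value removes most of the sampling variance while leaving the mean unchanged, exactly the accuracy-gain mechanism the abstract describes.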
NASA Astrophysics Data System (ADS)
Agapov, Vladimir
2018-03-01
The necessity of new approaches to modeling rods in the analysis of high-rise structures is justified. The possibility of applying three-dimensional superelements of rods with rectangular cross section to the static and dynamic calculation of bar and combined structures is considered. The results of a free-vibration analysis of an eighteen-story spatial frame, using both one-dimensional and three-dimensional models of rods, are presented. A comparative analysis of the results is carried out, and on its basis conclusions are drawn on the applicability of three-dimensional superelements in the static and dynamic analysis of high-rise structures.
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
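The core idea, running EM in the at-most-(N-1)-dimensional span of the N samples rather than in the D-dimensional ambient space, can be sketched in a few lines. The version below is a simplified stand-in (spherical covariances, deterministic farthest-point initialization, invented names and toy data), not the authors' algorithm.

```python
import numpy as np

def em_high_dim(X, k=2, n_iter=50):
    """EM for a k-component spherical Gaussian mixture, run in the
    (N-1)-dimensional span of the N samples instead of the D-dim ambient space."""
    mu0 = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu0, full_matrices=False)
    Z = U * s                                    # sample coordinates in their own span
    n, d = Z.shape
    # deterministic farthest-point initialization of the k means
    idx = [0]
    for _ in range(k - 1):
        dmin = ((Z[:, None, :] - Z[idx][None]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(dmin)))
    means, var, pi = Z[idx].copy(), np.full(k, Z.var()), np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities under spherical Gaussians
        d2 = ((Z[:, None, :] - means[None]) ** 2).sum(-1)
        logp = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and per-component variances
        nk = r.sum(axis=0)
        pi = nk / n
        means = (r.T @ Z) / nk[:, None]
        d2 = ((Z[:, None, :] - means[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (nk * d) + 1e-9
    return r, means @ Vt + mu0                   # responsibilities, ambient-space means

# 30 samples in 100 dimensions: two well-separated clusters, N << D.
rng = np.random.default_rng(3)
D = 100
X = np.vstack([0.1 * rng.standard_normal((15, D)),
               2.0 + 0.1 * rng.standard_normal((15, D))])
resp, centers = em_high_dim(X, k=2)
labels = resp.argmax(axis=1)
```

Since the SVD change of coordinates preserves pairwise distances, the mixture fitted in the span is the same mixture one would want in the ambient space, but every covariance is now a well-posed low-dimensional object.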
Model-based Clustering of High-Dimensional Data in Astrophysics
NASA Astrophysics Data System (ADS)
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools, renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parameterization. Recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin
2018-05-02
Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels, such as transcriptomic or proteomic levels, which is critical to understanding the pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulation is common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs with a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We show that our method possesses the sure screening property and that it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.
Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo
2017-01-01
Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances for each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar, and every sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest-neighbor-graph (K-NNG) based anomaly detector. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset in order to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
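The distance-concentration effect this abstract appeals to (pairwise distances becoming nearly equal as dimension grows) is easy to reproduce numerically. The sketch below compares the relative spread of pairwise Euclidean distances for Gaussian data in 2 versus 1000 dimensions; the function name and sample sizes are arbitrary choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n=200):
    """Spread of pairwise Euclidean distances relative to their mean."""
    X = rng.standard_normal((n, dim))
    sq = (X ** 2).sum(axis=1)
    # Gram-matrix trick avoids materializing an (n, n, dim) array
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    d = np.sqrt(d2[np.triu_indices(n, k=1)])
    return d.std() / d.mean()

low = relative_contrast(2)       # distances are clearly spread out
high = relative_contrast(1000)   # distances concentrate around a common value
```

In 1000 dimensions the std-to-mean ratio collapses to a few percent, so nearest-neighbor distances lose contrast, which is precisely why the paper compresses the data with an autoencoder before applying KNN-based detectors.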
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Jiang, Zhuqing; Men, Aidong; Yang, Bo
2017-01-01
Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances for each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar, and every sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest-neighbor-graph (K-NNG) based anomaly detector. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset in order to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity. PMID:29270197
Modelling Parsing Constraints with High-Dimensional Context Space.
ERIC Educational Resources Information Center
Burgess, Curt; Lund, Kevin
1997-01-01
Presents a model of high-dimensional context space, the Hyperspace Analogue to Language (HAL), with a series of simulations modelling human empirical results. Proposes that HAL's context space can be used to provide a basic categorization of semantic and grammatical concepts; model certain aspects of morphological ambiguity in verbs; and provide…
Manifold learning in machine vision and robotics
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-02-01
Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Machine learning is now an essential and ubiquitous tool for automatically extracting patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks, such as understanding and classifying image content, navigating mobile autonomous robots in uncertain environments, and robot manipulation in medical robotics and computer-assisted surgery. Such data usually have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable" data occupy only a very small part of the high-dimensional observation space and have a smaller intrinsic dimensionality. The generally accepted model for such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in the ambient high-dimensional observation space; real-world high-dimensional data obtained from natural sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and yield effective algorithms for a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in this paper.
Bruce Bagwell, C
2018-01-01
This chapter outlines how to approach the complex tasks associated with designing models for high-dimensional cytometry data. Unlike gating approaches, modeling lends itself to automation and accounts for measurement overlap among cellular populations. Designing these models is now easier because of a new technique called high-definition t-SNE mapping. Nontrivial examples are provided that serve as a guide to create models that are consistent with data.
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
Zhang, Miaomiao; Wells, William M; Golland, Polina
2016-10-01
Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternative least-square regression with Tikhonov regularization using a roughening matrix computing the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Clavel, Julien; Aristide, Leandro; Morlon, Hélène
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from poor statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
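The core p-versus-n difficulty, and the flavor of remedy the abstract describes, can be shown in a toy form: when the number of traits p exceeds the number of species n, the sample covariance is singular, while a linearly shrunk (ridge-like) estimate is positive definite and invertible. The sketch below uses simple shrinkage toward a scaled identity as a generic stand-in for the paper's penalized-likelihood estimators; the function name and the choice of target are assumptions made here.

```python
import numpy as np

def shrink_cov(X, gamma):
    """Linear shrinkage of the sample covariance toward a scaled identity."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)                # rank-deficient when p >= n
    target = np.trace(S) / p * np.eye(p)       # scaled-identity shrinkage target
    return (1.0 - gamma) * S + gamma * target

# p = 50 "traits" measured on n = 20 "species".
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
S = np.cov(X, rowvar=False)                    # singular: rank at most n - 1
S_pen = shrink_cov(X, gamma=0.3)               # positive definite, invertible
```

Any likelihood that needs the inverse or log-determinant of the trait covariance is undefined for S but well-defined for S_pen, which is why penalization makes the high-dimensional models fittable at all.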
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
A two-dimensional kinematic dynamo model of the ionospheric magnetic field at Venus
NASA Technical Reports Server (NTRS)
Cravens, T. E.; Wu, D.; Shinagawa, H.
1990-01-01
The results of a high-resolution, two-dimensional, time-dependent, kinematic dynamo model of the ionospheric magnetic field of Venus are presented. Various one-dimensional models are considered and the two-dimensional model is then detailed. In this model, the two-dimensional magnetic induction equation, the magnetic diffusion-convection equation, is numerically solved using specified plasma velocities. Origins of the vertical velocity profile and of the horizontal velocities are discussed. It is argued that the basic features of the vertical magnetic field profile remain unaltered by horizontal flow effects and also that horizontal plasma flow can strongly affect the magnetic field for altitudes above 300 km.
A note on two-dimensional asymptotic magnetotail equilibria
NASA Technical Reports Server (NTRS)
Voigt, Gerd-Hannes; Moore, Brian D.
1994-01-01
In order to understand, on the fluid level, the structure, the time evolution, and the stability of current sheets, such as the magnetotail plasma sheet in Earth's magnetosphere, one has to consider magnetic field configurations that are in magnetohydrodynamic (MHD) force equilibrium. Any reasonable MHD current sheet model has to be two-dimensional, at least in an asymptotic sense (B_z/B_x = epsilon much less than 1). The necessary two-dimensionality is described by a rather arbitrary function f(x). We utilize the free function f(x) to construct two-dimensional magnetotail equilibria that are 'equivalent' to current sheets in empirical three-dimensional models. We obtain a class of asymptotic magnetotail equilibria ordered with respect to the magnetic disturbance index Kp. For low Kp values the two-dimensional MHD equilibria reflect some of the realistic, observation-based aspects of three-dimensional models. For high Kp values the three-dimensional models do not fit the asymptotic MHD equilibria, which is indicative of their inconsistency with the assumed pressure function. This, in turn, implies that high magnetic activity levels of the real magnetosphere might be ruled by thermodynamic conditions different from local thermodynamic equilibrium.
Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery
2016-09-27
The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world... Keywords: signal recovery, sparse learning, subspace modeling.
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
NASA Astrophysics Data System (ADS)
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
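The distortion-minimization principle can be shown with a deliberately tiny toy (this is not HUGO and not its coding scheme; the costs, pixel values, and message are made up): given a per-pixel embedding cost, hide the message by changing LSBs only along the cheapest positions.

```python
import random

random.seed(1)

# Toy cost-based LSB embedding (illustrative only, not HUGO):
# each pixel has an embedding cost; to hide k bits we change LSBs only
# where needed, preferring the cheapest pixels first.
cover = [random.randrange(256) for _ in range(64)]
cost = [random.random() for _ in cover]      # hypothetical per-pixel costs
message = [1, 0, 1, 1, 0, 0, 1, 0]

# Choose the k lowest-cost pixel positions as the embedding path.
path = sorted(range(len(cover)), key=lambda i: cost[i])[:len(message)]
stego = cover[:]
for bit, i in zip(message, path):
    if stego[i] & 1 != bit:                  # flip the LSB only when it differs
        stego[i] ^= 1

extracted = [stego[i] & 1 for i in path]
distortion = sum(cost[i] for i in range(len(cover)) if stego[i] != cover[i])
```

In this sketch the receiver must know the cost-derived path; practical schemes instead use syndrome coding (e.g., syndrome-trellis codes) so that extraction needs no shared costs.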
Inverse finite-size scaling for high-dimensional significance analysis
NASA Astrophysics Data System (ADS)
Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki
2018-06-01
We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.
NASA Technical Reports Server (NTRS)
Chan, S. T. K.; Lee, C. H.; Brashears, M. R.
1975-01-01
A finite element algorithm for solving unsteady, three-dimensional high velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.
Multivariate Boosting for Integrative Analysis of High-Dimensional Cancer Genomic Data
Xiong, Lie; Kuan, Pei-Fen; Tian, Jianan; Keles, Sunduz; Wang, Sijian
2015-01-01
In this paper, we propose a novel multivariate component-wise boosting method for fitting multivariate response regression models under the high-dimension, low-sample-size setting. Our method is motivated by modeling the association among different biological molecules based on multiple types of high-dimensional genomic data. In particular, we are interested in two applications: studying the influence of DNA copy number alterations on RNA transcript levels and investigating the association between DNA methylation and gene expression. For this purpose, we model the dependence of RNA expression levels on DNA copy number alterations and the dependence of gene expression on DNA methylation through multivariate regression models, and utilize a boosting-type method to handle the high dimensionality and model possible nonlinear associations. The performance of the proposed method is demonstrated through simulation studies. Finally, our multivariate boosting method is applied to two breast cancer studies. PMID:26609213
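Component-wise boosting for a multivariate response can be sketched as follows (a generic L2-boosting sketch under simulated data, not the authors' exact algorithm): at each step, pick the single predictor that most reduces the residual sum of squares across all responses and update its coefficient row by a shrunken least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated multivariate regression: only predictors 0 and 3 are active.
n, p, q = 60, 20, 3
X = rng.standard_normal((n, p))
B_true = np.zeros((p, q))
B_true[0], B_true[3] = [2, 0, 1], [0, -1.5, 0.5]
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))

nu, steps = 0.1, 200            # shrinkage factor and boosting iterations
B = np.zeros((p, q))
R = Y.copy()                    # residual matrix
xtx = (X ** 2).sum(axis=0)      # per-predictor squared norms
for _ in range(steps):
    coef = X.T @ R / xtx[:, None]                   # LS fit of each predictor
    score = (coef ** 2 * xtx[:, None]).sum(axis=1)  # RSS reduction per predictor
    j = int(np.argmax(score))                       # best single predictor
    B[j] += nu * coef[j]
    R -= nu * np.outer(X[:, j], coef[j])

fit_err = np.linalg.norm(Y - X @ B) / np.linalg.norm(Y)
```

The small shrinkage factor nu and early stopping act as implicit regularization, which is what makes boosting usable when p exceeds n.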
NASA Astrophysics Data System (ADS)
Fedors, R. W.; Painter, S. L.
2004-12-01
Temperature gradients along the thermally perturbed drifts of the potential high-level waste repository at Yucca Mountain, Nevada, will drive natural convection and associated heat and mass transfer along drifts. A three-dimensional, dual-permeability, thermohydrological model of heat and mass transfer was used to estimate the magnitude of temperature gradients along a drift. Temperature conditions along heated drifts are needed to support estimates of repository-edge cooling and as input to computational fluid dynamics modeling of in-drift axial convection and the cold-trap process. Assumptions associated with abstracted heat transfer models and two-dimensional thermohydrological models weakly coupled to mountain-scale thermal models can readily be tested using the three-dimensional thermohydrological model. Although computationally expensive, the fully coupled three-dimensional thermohydrological model is able to incorporate lateral heat transfer, including the host rock processes of conduction, convection in the gas phase, advection in the liquid phase, and latent-heat transfer. Results from the three-dimensional thermohydrological model showed that weakly coupling three-dimensional thermal and two-dimensional thermohydrological models leads to underestimates of both temperatures and temperature gradients over large portions of the drift. The representative host rock thermal conductivity needed for abstracted heat transfer models is overestimated by the weakly coupled models. If axial flow patterns over large portions of drifts are not impeded by the strong cross-sectional flow patterns imparted by the heat rising directly off the waste package, condensation from the cold-trap process will not be limited to the extreme ends of each drift. Based on the three-dimensional thermohydrological model, axial temperature gradients occur sooner over a larger portion of the drift, though high gradients nearest the edge of the potential repository are dampened.
This abstract is an independent product of CNWRA and does not necessarily reflect the view or regulatory position of the Nuclear Regulatory Commission.
USDA-ARS?s Scientific Manuscript database
Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...
Towards an Automated Full-Turbofan Engine Numerical Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Turner, Mark G.; Norris, Andrew; Veres, Joseph P.
2003-01-01
The objective of this study was to demonstrate the high-fidelity numerical simulation of a modern high-bypass turbofan engine. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled three-dimensional computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady-state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the three-dimensional component models are integrated into the cycle model via partial performance maps generated automatically from the CFD flow solutions using one-dimensional meanline turbomachinery programs. This paper reports on the progress made towards the full-engine simulation of the GE90-94B engine, highlighting the generation of the high-pressure compressor partial performance map. The ongoing work will provide a system to evaluate the steady and unsteady aerodynamic and mechanical interactions between engine components at design and off-design operating conditions.
Thermal model development and validation for rapid filling of high pressure hydrogen tanks
Johnson, Terry A.; Bozinoski, Radoslav; Ye, Jianjun; ...
2015-06-30
This paper describes the development of thermal models for the filling of high pressure hydrogen tanks with experimental validation. Two models are presented; the first uses a one-dimensional, transient, network flow analysis code developed at Sandia National Labs, and the second uses the commercially available CFD analysis tool Fluent. These models were developed to help assess the safety of Type IV high pressure hydrogen tanks during the filling process. The primary concern for these tanks is the increased susceptibility to fatigue failure of the liner caused by the fill process. Thus, a thorough understanding of temperature changes of the hydrogen gas and the heat transfer to the tank walls is essential. The effects of initial pressure, filling time, and fill procedure were investigated to quantify the temperature change and verify the accuracy of the models. In this paper we show that the predictions of mass-averaged gas temperature for the one- and three-dimensional models compare well with the experiment, and both can be used to make predictions for final mass delivery. Furthermore, due to buoyancy and other three-dimensional effects, the maximum wall temperature cannot be predicted using one-dimensional tools alone, which means that a three-dimensional analysis is required for a safety assessment of the system.
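The basic physics behind the fill-induced temperature rise can be captured by an even simpler lumped (zero-dimensional) energy balance than the models above. The sketch below treats hydrogen as an ideal gas with constant specific heats and a single wall heat-transfer coefficient; all numbers are illustrative assumptions, not the validated Sandia models.

```python
# Lumped energy balance for fast filling of a hydrogen tank:
#   d(m cv T)/dt = mdot cp T_in - h A (T - T_wall)
# i.e., inlet enthalpy heats the gas while the wall removes heat.
R = 4124.0                    # J/(kg K), specific gas constant of H2
cv, cp = 10183.0, 14307.0     # J/(kg K), constant specific heats
V = 0.1                       # m^3 tank volume
T_in = 293.0                  # K, inlet gas temperature
h, A, T_wall = 150.0, 1.0, 293.0  # W/(m^2 K), m^2, K: wall heat transfer

m = 0.24                      # kg initial charge (~3 MPa at 293 K)
T = 293.0                     # K initial gas temperature
mdot, dt, t_fill = 0.01, 0.1, 180.0   # kg/s, s, s

for _ in range(round(t_fill / dt)):   # forward-Euler integration
    U = m * cv * T
    U += dt * (mdot * cp * T_in - h * A * (T - T_wall))
    m += mdot * dt
    T = U / (m * cv)

P = m * R * T / V             # final pressure, Pa
```

Because the inlet enthalpy flux carries cp T_in while the stored gas holds only cv T, the gas heats well above the inlet temperature even with wall cooling; the quasi-steady temperature here is (mdot cp T_in + h A T_wall)/(mdot cv + h A), about 341 K.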
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and their results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full, highly parameterized and CPU-intensive groundwater model and to explore the uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
Algamal, Z Y; Lee, M H
2017-01-01
A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new descriptor selection method for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, the results obtained in terms of the stability test and applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
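A weighted L1 penalty of the general kind described above can be sketched with coordinate descent on a regression (rather than classification) objective; this is an adaptive-lasso-style illustration under simulated data, not the paper's estimator, and the weighting scheme here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated descriptors: only the first three are relevant.
n, p = 100, 30
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Weights from an initial least-squares fit: small |b| -> large penalty.
b_ls = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / (np.abs(b_ls) + 1e-3)

# Coordinate descent on 0.5*||y - Xb||^2 + lam * sum_j w_j |b_j|.
lam = 5.0
b = np.zeros(p)
xtx = (X ** 2).sum(axis=0)
r = y.copy()
for _ in range(100):                 # coordinate-descent sweeps
    for j in range(p):
        r += X[:, j] * b[j]          # partial residual without feature j
        rho = X[:, j] @ r
        # soft-thresholding with the per-descriptor weight w[j]
        b[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / xtx[j]
        r -= X[:, j] * b[j]

selected = np.flatnonzero(np.abs(b) > 1e-6)
```

The data-driven weights penalize descriptors with small preliminary estimates more heavily, which tends to remove irrelevant descriptors while leaving the relevant coefficients nearly unbiased.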
NASA Astrophysics Data System (ADS)
Kohno, Masanori
2018-05-01
The single-particle spectral properties of the two-dimensional t-J model with next-nearest-neighbor hopping are investigated near the Mott transition by using cluster perturbation theory. The spectral features are interpreted by considering the effects of the next-nearest-neighbor hopping on the shift of the spectral-weight distribution of the two-dimensional t-J model. Various anomalous features observed in hole-doped and electron-doped high-temperature cuprate superconductors are collectively explained in the two-dimensional t-J model with next-nearest-neighbor hopping near the Mott transition.
Maartens, Roy; Koyama, Kazuya
2010-01-01
The observable universe could be a 1+3-surface (the "brane") embedded in a 1+3+d-dimensional spacetime (the "bulk"), with Standard Model particles and fields trapped on the brane while gravity is free to access the bulk. At least one of the d extra spatial dimensions could be very large relative to the Planck scale, which lowers the fundamental gravity scale, possibly even down to the electroweak (∼ TeV) level. This revolutionary picture arises in the framework of recent developments in M theory. The 1+10-dimensional M theory encompasses the known 1+9-dimensional superstring theories, and is widely considered to be a promising potential route to quantum gravity. At low energies, gravity is localized at the brane and general relativity is recovered, but at high energies gravity "leaks" into the bulk, behaving in a truly higher-dimensional way. This introduces significant changes to gravitational dynamics and perturbations, with interesting and potentially testable implications for high-energy astrophysics, black holes, and cosmology. Brane-world models offer a phenomenological way to test some of the novel predictions and corrections to general relativity that are implied by M theory. This review analyzes the geometry, dynamics and perturbations of simple brane-world models for cosmology and astrophysics, mainly focusing on warped 5-dimensional brane-worlds based on the Randall-Sundrum models. We also cover the simplest brane-world models in which 4-dimensional gravity on the brane is modified at low energies - the 5-dimensional Dvali-Gabadadze-Porrati models. Then we discuss co-dimension two branes in 6-dimensional models.
Banerjee, Arindam; Ghosh, Joydeep
2004-05-01
Competitive learning mechanisms for clustering generally suffer from poor performance on very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact it can be considered a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index terms: balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
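The baseline spkmeans algorithm referenced above is short enough to sketch in full: normalize inputs and centers to unit length and assign by maximum cosine similarity. The data and the deterministic seeding below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def spkmeans(X, k, iters=20):
    """Spherical k-means: unit-normalized inputs and centers,
    assignment by maximum cosine similarity."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # simple deterministic seeding (evenly spaced rows) for reproducibility
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):                     # renormalize the mean direction
                c = members.sum(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return labels, centers

# Two well-separated direction bundles on the unit sphere.
A = rng.standard_normal((40, 20)) * 0.1 + np.eye(20)[0]
B = rng.standard_normal((40, 20)) * 0.1 + np.eye(20)[1]
X = np.vstack([A, B])
labels, centers = spkmeans(X, 2)
```

The frequency-sensitive variants in the paper modify the assignment step so that clusters that have absorbed many points become less attractive, which is what enforces balance.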
State-of-charge estimation in lithium-ion batteries: A particle filter approach
NASA Astrophysics Data System (ADS)
Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.
2016-11-01
The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The pseudo-two-dimensional (P2D) model is one that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. The partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the P2D model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
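For readers unfamiliar with particle filtering itself, a generic bootstrap particle filter on a scalar state-space model is sketched below (the paper's spatially swept, tethered variant for the P2D model is considerably more elaborate; the model and noise levels here are assumptions).

```python
import numpy as np

rng = np.random.default_rng(7)

# Scalar linear-Gaussian state-space model:
#   x_t = 0.9 x_{t-1} + process noise,   y_t = x_t + measurement noise.
T, N = 100, 500
q, r = 0.1, 0.5                          # noise standard deviations

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + q * rng.standard_normal()
y = x_true + r * rng.standard_normal(T)

particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    if t > 0:                            # propagate through the dynamics
        particles = 0.9 * particles + q * rng.standard_normal(N)
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)   # likelihood weights
    w /= w.sum()
    est[t] = w @ particles               # weighted posterior-mean estimate
    # multinomial resampling to fight weight degeneracy
    particles = particles[rng.choice(N, N, p=w)]

rmse_filter = np.sqrt(np.mean((est - x_true) ** 2))
rmse_raw = np.sqrt(np.mean((y - x_true) ** 2))
```

The resampling step is exactly where degeneracy bites as the state dimension grows, which motivates the structured sweep the paper proposes.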
2014-04-01
surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital
Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji
2015-01-01
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach. PMID:25734662
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of O(n^{-1/2}), the corresponding IRUQ converges at O(n^{-1}). IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
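Sliced inverse regression, the subspace estimator mentioned above, fits in a few lines for a one-dimensional sufficient direction. The sketch below assumes a simulated model y = f(beta . x) with already-standardized inputs (real SIR first whitens x); it is an illustration of the estimator, not the IRUQ pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-index model: y depends on x only through beta . x.
n, p = 2000, 10
beta = np.zeros(p)
beta[0] = 1.0
X = rng.standard_normal((n, p))
z = X @ beta
y = z + 0.25 * z**3 + 0.1 * rng.standard_normal(n)

# Sliced inverse regression: sort by y, average x within slices, and take
# the principal direction of the slice means.
n_slices = 10
order = np.argsort(y)
slice_means = np.array([X[idx].mean(axis=0)
                        for idx in np.array_split(order, n_slices)])
M = slice_means.T @ slice_means / n_slices
evals, evecs = np.linalg.eigh(M)
beta_hat = evecs[:, -1]          # leading eigenvector spans the SDR estimate

alignment = abs(beta_hat @ beta)
```

Each slice mean estimates E[x | y in slice], which varies only along the sufficient direction, so the slice means concentrate on the SDR subspace as n grows.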
Wagner, Chad R.
2007-01-01
The use of one-dimensional hydraulic models currently is the standard method for estimating velocity fields through a bridge opening for scour computations and habitat assessment. Flood-flow contraction through bridge openings, however, is hydrodynamically two dimensional and often three dimensional. Although there is awareness of the utility of two-dimensional models to predict the complex hydraulic conditions at bridge structures, little guidance is available to indicate whether a one- or two-dimensional model will accurately estimate the hydraulic conditions at a bridge site. The U.S. Geological Survey, in cooperation with the North Carolina Department of Transportation, initiated a study in 2004 to compare one- and two-dimensional model results with field measurements at complex riverine and tidal bridges in North Carolina to evaluate the ability of each model to represent field conditions. The field data consisted of discharge and depth-averaged velocity profiles measured with an acoustic Doppler current profiler and surveyed water-surface profiles for two high-flow conditions. For the initial study site (U.S. Highway 13 over the Tar River at Greenville, North Carolina), the water-surface elevations and velocity distributions simulated by the one- and two-dimensional models showed appreciable disparity in the highly sinuous reach upstream from the U.S. Highway 13 bridge. Based on the available data from U.S. Geological Survey streamgaging stations and acoustic Doppler current profiler velocity data, the two-dimensional model more accurately simulated the water-surface elevations, the velocity distributions in the study reach, and the contracted-flow magnitudes and directions through the bridge opening. To further compare the results of the one- and two-dimensional models, estimated hydraulic parameters (flow depths, velocities, attack angles, blocked flow width) for measured high-flow conditions were used to predict scour depths at the U.S. Highway 13 bridge by using established methods. Comparisons of pier-scour estimates from both models indicated that the scour estimates from the two-dimensional model were as much as twice the depth of the estimates from the one-dimensional model. These results can be attributed to higher approach velocities and the appreciable flow angles at the piers simulated by the two-dimensional model and verified in the field. Computed flood-frequency estimates of the 10-, 50-, 100-, and 500-year return-period floods on the Tar River at Greenville were also simulated with both the one- and two-dimensional models. The simulated water-surface profiles and velocity fields of the various return-period floods were used to compare the modeling approaches and provide information on which return-period discharges would result in road over-topping and/or pressure flow. This information is essential in the design of new and replacement structures. The ability to accurately simulate water-surface elevations and velocity magnitudes and distributions at bridge crossings is essential in assuring that bridge plans balance public safety with the most cost-effective design. By compiling pertinent bridge-site characteristics and relating them to the results of several model-comparison studies, the framework for developing guidelines for selecting the most appropriate model for a given bridge site can be established.
THREE-DIMENSIONAL MODEL FOR HYPERTHERMIA CALCULATIONS
Realistic three-dimensional models that predict temperature distributions with a high degree of spatial resolution in bodies exposed to electromagnetic (EM) fields are required in the application of hyperthermia for cancer treatment. To ascertain the thermophysiologic response of...
NASA Astrophysics Data System (ADS)
Davis, L. J.; Boggess, M.; Kodpuak, E.; Deutsch, M.
2012-11-01
We report on a model for the deposition of three dimensional, aggregated nanocrystalline silver films, and an efficient numerical simulation method developed for visualizing such structures. We compare our results to a model system comprising chemically deposited silver films with morphologies ranging from dilute, uniform distributions of nanoparticles to highly porous aggregated networks. Disordered silver films grown in solution on silica substrates are characterized using digital image analysis of high resolution scanning electron micrographs. While the latter technique provides little volume information, plane-projected (two dimensional) island structure and surface coverage may be reliably determined. Three parameters governing film growth are evaluated using these data and used as inputs for the deposition model, greatly reducing computing requirements while still providing direct access to the complete (bulk) structure of the films throughout the growth process. We also show how valuable three dimensional characteristics of the deposited materials can be extracted using the simulated structures.
Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants
NASA Technical Reports Server (NTRS)
Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.
1996-01-01
Progress and results in the development of an integrated air quality modeling, monitoring, fault detection, and isolation system are presented. The focus was on the development of distributed models of air contaminant transport, the study of air quality monitoring techniques based on the model of the transport process and on-line contaminant concentration measurements, and sensor placement. Different approaches to the modeling of spacecraft air contamination are discussed, and a three-dimensional distributed parameter air contaminant dispersion model applicable to both laminar and turbulent transport is proposed. A two-dimensional approximation of the full-scale transport model is also proposed, based on spatial averaging of the three-dimensional model over the least important space coordinate. A computer implementation of the transport model is considered, and a detailed development of the two- and three-dimensional models illustrated by contaminant transport simulation results is presented. The use of the well-established Kalman filtering approach is suggested as a method for generating on-line contaminant concentration estimates based on both real-time measurements and the model of the contaminant transport process. It is shown that the high computational requirements of the traditional Kalman filter can make its real-time implementation difficult for high-dimensional transport models, and a novel implicit Kalman filtering algorithm is proposed which is shown to lead to an order-of-magnitude faster computer implementation in the case of air quality monitoring.
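The Kalman-filter monitoring idea reduces, for a single well-mixed zone, to a scalar filter (the paper's filter runs on a discretized 2-D/3-D transport model; the dynamics, source term, and noise levels below are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

# One-zone contaminant model with a constant source and ventilation decay:
#   c_t = a c_{t-1} + s + process noise,   y_t = c_t + sensor noise.
a, s = 0.95, 0.5              # ventilation/decay factor and source term
Q, R = 0.01, 0.25             # process and measurement noise variances

T = 200
c = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    c[t] = a * c[t - 1] + s + np.sqrt(Q) * rng.standard_normal()
    y[t] = c[t] + np.sqrt(R) * rng.standard_normal()

c_hat, P = 0.0, 1.0           # initial state estimate and variance
est = np.zeros(T)
for t in range(1, T):
    # predict through the model
    c_hat = a * c_hat + s
    P = a * a * P + Q
    # update with the sensor reading
    K = P / (P + R)
    c_hat += K * (y[t] - c_hat)
    P *= (1 - K)
    est[t] = c_hat

rmse_kf = np.sqrt(np.mean((est[1:] - c[1:]) ** 2))
rmse_y = np.sqrt(np.mean((y[1:] - c[1:]) ** 2))
```

In the distributed setting, the state vector collects the concentrations at every grid node of the transport model, and P becomes the large covariance matrix whose cost motivates the paper's implicit formulation.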
A Selective Overview of Variable Selection in High Dimensional Feature Space
Fan, Jianqing
2010-01-01
High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions of what limits of dimensionality such methods can handle, what role penalty functions play, and what their statistical properties are rapidly drive advances in the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
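For the L1 penalty, the penalized least squares problem discussed above reduces, coordinate by coordinate, to soft thresholding. A minimal cyclic coordinate-descent Lasso (an illustrative sketch on synthetic data, not any paper's implementation) can be written as:

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar soft-thresholding operator, the building block of L1 penalties."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso for (1/2n)||y - Xb||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual without j
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[:2] = [3.0, -2.0]                          # only two active variables
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = lasso_cd(X, y, lam=0.2)
```

The recovered coefficients are shrunk towards zero by roughly `lam`, and the noise coordinates are set exactly to zero, which is the simultaneous selection-and-estimation behaviour the review describes.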
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Integration of Local Observations into the One Dimensional Fog Model PAFOG
NASA Astrophysics Data System (ADS)
Thoma, Christina; Schneider, Werner; Masbou, Matthieu; Bott, Andreas
2012-05-01
The numerical prediction of fog requires a very high vertical resolution of the atmosphere. Owing to the prohibitive computational effort of high-resolution three-dimensional models, operational fog forecasting is usually done by means of one-dimensional fog models. An important condition for a successful fog forecast with one-dimensional models is the proper integration of observational data into the numerical simulations. The goal of the present study is to introduce new methods for the consideration of these data in the one-dimensional radiation fog model PAFOG. First, it will be shown how PAFOG may be initialized with observed visibilities. Second, a nudging scheme will be presented for the inclusion of measured temperature and humidity profiles in the PAFOG simulations. The new features of PAFOG have been tested by comparing the model results with observations of the German Meteorological Service. A case study will be presented that reveals the importance of including local observations in the model calculations. Numerical results obtained with the modified PAFOG model show a distinct improvement of fog forecasts regarding the times of fog formation and dissipation, as well as the vertical extent of the investigated fog events. However, model results also reveal that a further improvement of PAFOG might be possible if several empirical model parameters are optimized. This tuning can only be realized by comprehensive comparisons of model simulations with corresponding fog observations.
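Nudging (Newtonian relaxation) of the kind described here adds a term that pulls the model state towards observations over a tunable relaxation time. A minimal sketch, with hypothetical values for the time step, relaxation time, and profiles (not PAFOG's actual scheme):

```python
import numpy as np

def nudge(profile, obs, dt, tau):
    """Newtonian relaxation: pull the model profile towards observations
    with relaxation time scale tau (a free tuning parameter here)."""
    return profile + dt / tau * (obs - profile)

# Toy model temperature profile relaxed towards a measured sounding (K)
model_T = np.array([281.0, 280.0, 279.5])    # three model levels
obs_T   = np.array([280.0, 279.0, 279.0])
for _ in range(60):                           # one hour of 60 s steps
    model_T = nudge(model_T, obs_T, dt=60.0, tau=1800.0)
```

After an hour the model profile has decayed most of the way towards the observed one (by a factor of roughly exp(-3600/1800)), without ever being overwritten outright.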
Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi
2010-05-01
The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other applications. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction based on Barista software. It was shown that the extraction of three-dimensional building information from high-resolution satellite imagery based on Barista software had the advantages of a low demand on professional expertise, broad applicability, simple operation, and high precision. Point positioning and height determination accuracy at the one-pixel level could be achieved if the digital elevation model (DEM) and sensor orientation model had sufficiently high precision and the off-nadir view angle was favorable.
Zhang, Miaomiao; Wells, William M; Golland, Polina
2017-10-01
We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
NASA Astrophysics Data System (ADS)
Mitry, Mina
Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
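The linear variant of this idea, PCA to compress the high-dimensional outputs plus RBF interpolation in the reduced space, can be sketched in a few lines. The rank-one field, the shape parameter, and all sizes below are hypothetical toys, and the kernel-PCA variant is omitted:

```python
import numpy as np

def pca_reduce(Y, k):
    """Project high-dimensional outputs Y (n_samples x dim) onto k principal axes."""
    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    return (Y - mean) @ Vt[:k].T, Vt[:k], mean

def rbf_fit(X, Z, eps=5.0):
    """Gaussian RBF interpolation of reduced coordinates Z over inputs X."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Phi = np.exp(-(eps * D) ** 2)
    return np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), Z)

def rbf_predict(X_train, W, x_new, eps=5.0):
    d = np.linalg.norm(X_train - x_new, axis=-1)
    return np.exp(-(eps * d) ** 2) @ W

# Toy 50-dimensional output field driven by a single scalar input
x = np.linspace(0, 1, 20)[:, None]
Y = np.sin(2 * np.pi * x) * np.linspace(1, 2, 50)[None, :]

Z, basis, mean = pca_reduce(Y, k=1)     # 50-D outputs -> 1-D coordinates
W = rbf_fit(x, Z)                       # surrogate lives in the reduced space
z_new = rbf_predict(x, W, np.array([0.35]))
y_new = z_new @ basis + mean            # lift back to the full 50-D field
```

The surrogate is trained and evaluated entirely in the reduced space; only the final lift back through the PCA basis touches the full output dimension.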
A reduced-order model from high-dimensional frictional hysteresis
Biswas, Saurabh; Chatterjee, Anindya
2014-01-01
Hysteresis in material behaviour involves both signum nonlinearities and high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
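The basis-selection step described above, an SVD of solution snapshots with the basis truncated by singular-value energy, is standard and can be sketched independently of the frictional model. The snapshot matrix below is a synthetic stand-in that genuinely lives in a two-dimensional subspace:

```python
import numpy as np

def reduced_basis(snapshots, tol=1e-6):
    """Choose basis vectors by SVD of solution snapshots: keep the left
    singular vectors carrying all but a fraction tol of the energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 1 - tol)) + 1
    return U[:, :r]

# 30 snapshots of a 100-dimensional trajectory spanned by two spatial modes
t = np.linspace(0, 1, 30)
modes = np.stack([np.sin(np.pi * np.arange(100) / 99),
                  np.cos(np.pi * np.arange(100) / 99)])
snapshots = (np.stack([np.sin(3 * t), t ** 2]).T @ modes).T   # shape (100, 30)
B = reduced_basis(snapshots)
```

Projecting the full solution onto `B` and evolving only the reduced coordinates is the essence of the low-order model.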
Geometrical structure of Neural Networks: Geodesics, Jeffrey's Prior and Hyper-ribbons
NASA Astrophysics Data System (ADS)
Hayden, Lorien; Alemi, Alex; Sethna, James
2014-03-01
Neural networks are learning algorithms which are employed in a host of Machine Learning problems including speech recognition, object classification and data mining. In practice, neural networks learn a low dimensional representation of high dimensional data and define a model manifold which is an embedding of this low dimensional structure in the higher dimensional space. In this work, we explore the geometrical structure of a neural network model manifold. A Stacked Denoising Autoencoder and a Deep Belief Network are trained on handwritten digits from the MNIST database. Construction of geodesics along the surface and of slices taken from the high dimensional manifolds reveal a hierarchy of widths corresponding to a hyper-ribbon structure. This property indicates that neural networks fall into the class of sloppy models, in which certain parameter combinations dominate the behavior. Employing this information could prove valuable in designing both neural network architectures and training algorithms. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153.
NASA Astrophysics Data System (ADS)
Aleshin, I. M.; Alpatov, V. V.; Vasil'ev, A. E.; Burguchev, S. S.; Kholodkov, K. I.; Budnikov, P. A.; Molodtsov, D. A.; Koryagin, V. N.; Perederin, F. V.
2014-07-01
A service is described that makes possible the effective construction of a three-dimensional ionospheric model based on the data of ground receivers of signals from global navigation satellite positioning systems (GNSS). The obtained image has a high resolution, mainly because data from the IPG GNSS network of the Federal Service for Hydrometeorology and Environmental Monitoring (Rosgidromet) are used. A specially developed format and its implementation in the form of SQL structures are used to collect, transmit, and store data. The method of high-altitude radio tomography is used to construct the three-dimensional model. The operation of all system components (from registration point organization to the procedure for constructing the electron density three-dimensional distribution and publication of the total electron content map on the Internet) has been described in detail. The three-dimensional image of the ionosphere, obtained automatically, is compared with the ionosonde measurements, calculated using the two-dimensional low-altitude tomography method and averaged by the ionospheric model.
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
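The classical two-stage least squares estimator that this framework extends is easy to demonstrate with a single instrument. The sketch below is the unpenalized low-dimensional baseline on simulated data (the paper's contribution, adding sparsity-inducing penalties to both stages, is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

def two_stage_ls(Z, X, y):
    """Classical 2SLS: regress X on instruments Z, then y on the fitted X."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # stage 1
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # stage 2

n = 2000
z = rng.standard_normal((n, 1))                 # instrument
u = rng.standard_normal(n)                      # unobserved confounder
x = (2 * z[:, 0] + u + rng.standard_normal(n)).reshape(-1, 1)
y = 1.5 * x[:, 0] + u + rng.standard_normal(n)  # true effect is 1.5

beta_2sls = two_stage_ls(z, x, y)[0]
beta_ols = np.linalg.lstsq(x, y, rcond=None)[0][0]
```

Because the confounder `u` enters both `x` and `y`, ordinary least squares is biased upwards, while the instrumented estimate recovers the true coefficient.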
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can grow exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions.
We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation, which hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
NASA Astrophysics Data System (ADS)
Manzhos, Sergei; Carrington, Tucker
2006-08-01
We combine the high dimensional model representation (HDMR) idea of Rabitz and co-workers [J. Phys. Chem. 110, 2474 (2006)] with neural network (NN) fits to obtain an effective means of building multidimensional potentials. We verify that it is possible to determine an accurate many-dimensional potential by doing low dimensional fits. The final potential is a sum of terms each of which depends on a subset of the coordinates. This form facilitates quantum dynamics calculations. We use NNs to represent HDMR component functions that minimize error mode term by mode term. This NN procedure makes it possible to construct high-order component functions which in turn enable us to determine a good potential. It is shown that the number of available potential points determines the order of the HDMR which should be used.
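A first-order cut-HDMR expansion writes f as a constant plus one-dimensional component functions anchored at a reference point. The sketch below evaluates the components exactly for an additive toy function, for which first order is exact; the paper instead fits each component with a neural network, and coupled coordinates would require higher-order terms:

```python
import numpy as np

def f(x):
    """Toy 3-dimensional 'potential' (additive, so first-order HDMR is exact)."""
    return x[0] ** 2 + np.sin(x[1]) + 0.5 * x[2]

x_ref = np.zeros(3)      # reference (cut) point of the expansion
f0 = f(x_ref)            # zeroth-order term

def hdmr_component(i, xi):
    """First-order cut-HDMR component: vary coordinate i only."""
    x = x_ref.copy()
    x[i] = xi
    return f(x) - f0

def hdmr1(x):
    """First-order HDMR approximation: f0 plus the one-dimensional terms."""
    return f0 + sum(hdmr_component(i, x[i]) for i in range(3))

x_test = np.array([0.3, 1.0, -0.5])
```

Each component function depends on a single coordinate, which is what makes the form convenient for subsequent quantum dynamics calculations.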
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
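The two-stage idea, screen everything with the surrogate and re-run the full model only near the failure boundary, is easy to demonstrate on a one-dimensional stand-in for the groundwater model. The band half-width 0.5 is a hypothetical choice that must exceed the surrogate's worst-case error (0.1 here):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """'Expensive' full model (toy stand-in)."""
    return x ** 2 + 0.1 * np.sin(5 * x)

def surrogate(x):
    """Cheap approximation with a small, bounded bias."""
    return x ** 2

threshold = 3.0
x = rng.standard_normal(200_000)

# Stage 1: classify every sample with the cheap surrogate
g = surrogate(x)
fail = g > threshold                    # provisional classification
near = np.abs(g - threshold) < 0.5      # samples near the failure boundary

# Stage 2: re-evaluate only the near-boundary samples with the full model
fail[near] = model(x[near]) > threshold
p_fail = fail.mean()
```

Only the small fraction of samples inside the band ever touches the expensive model, yet the final estimate is free of surrogate bias because every ambiguous sample was re-checked.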
A multi scale multi-dimensional thermo electrochemical modelling of high capacity lithium-ion cells
NASA Astrophysics Data System (ADS)
Tourani, Abbas; White, Peter; Ivey, Paul
2014-06-01
Lithium iron phosphate (LFP) and lithium manganese oxide (LMO) are competitive and complementary to each other as cathode materials for lithium-ion batteries, especially for use in electric vehicles. A multi-scale, multi-dimensional physics-based model is proposed in this paper to study the thermal behaviour of the two lithium-ion chemistries. The model consists of two sub-models, a one-dimensional (1D) electrochemical sub-model and a two-dimensional (2D) thermo-electric sub-model, which are coupled and solved concurrently. The 1D model predicts the heat generation rate (Qh) and voltage (V) of the battery cell through different load cycles. The 2D model of the battery cell accounts for temperature distribution and current distribution across the surface of the battery cell. The two cells are examined experimentally through 90 h load cycles including high/low charge/discharge rates. The experimental results are compared with the model results and they are in good agreement. The results presented in this paper verify the cells' temperature behaviour at different operating conditions, which will lead to the design of a cost-effective thermal management system for the battery pack.
Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang
2018-01-01
A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies.
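The core of the clustering-plus-RBF idea, placing RBF centres by clustering and then solving a linear least-squares problem for the weights, can be sketched on a synthetic diurnal dissolved-oxygen curve. The subtractive-clustering step is omitted, and all data and widths below are hypothetical:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain K-means, used here only to place the RBF centres."""
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centres[None], axis=-1)
        labels = np.argmin(d, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix for inputs X and the chosen centres."""
    d = np.linalg.norm(X[:, None] - centres[None], axis=-1)
    return np.exp(-(d / width) ** 2)

# Synthetic diurnal dissolved-oxygen curve (mg/L over hours of the day)
X = np.linspace(0, 24, 100)[:, None]
y = 8 + 2 * np.sin(2 * np.pi * X[:, 0] / 24)

centres = kmeans(X, k=8)
Phi = rbf_design(X, centres, width=3.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear solve for RBF weights
y_hat = Phi @ w
```

Clustering replaces an expensive nonlinear search for centre locations with a cheap unsupervised step, leaving only a linear problem for the output weights.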
Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel
2017-01-01
Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is not only important to find a model with high predictive accuracy, but it is also important that this model uses only few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions, which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating the stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. Also, we find that for assessing stability it is most important that a measure contains a correction for chance or for large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
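One simple correlation-based stability score in the spirit of the measure the authors favour is the mean pairwise Pearson correlation between the binary selection vectors obtained on different resamples of the data (a minimal sketch, not the paper's exact measure):

```python
import numpy as np

def selection_stability(masks):
    """Mean pairwise Pearson correlation between binary feature-selection
    vectors obtained on different resamples of the data."""
    masks = np.asarray(masks, dtype=float)
    m = len(masks)
    corrs = [np.corrcoef(masks[i], masks[j])[0, 1]
             for i in range(m) for j in range(i + 1, m)]
    return float(np.mean(corrs))

# Four resamples selecting the same two features every time...
stable = [[1, 1, 0, 0, 0]] * 4
# ...versus four resamples selecting essentially unrelated feature pairs
unstable = [[1, 0, 0, 1, 0], [0, 1, 1, 0, 0],
            [0, 0, 1, 0, 1], [1, 0, 0, 0, 1]]
```

Because the Pearson correlation centres each vector, chance agreement from always selecting many features is automatically discounted, which is exactly the correction-for-chance property the abstract highlights.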
Blended particle filters for large-dimensional chaotic dynamical systems
Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.
2014-01-01
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
REASSESSING MECHANISM AS A PREDICTOR OF PEDIATRIC INJURY MORTALITY
Beck, Haley; Mittal, Sushil; Madigan, David; Burd, Randall S.
2015-01-01
Background The use of mechanism of injury as a predictor of injury outcome presents practical challenges because this variable may be missing or inaccurate in many databases. The purpose of this study was to determine the importance of mechanism of injury as a predictor of mortality among injured children. Methods The records of children (<15 years old) sustaining a blunt injury were obtained from the National Trauma Data Bank. Models predicting injury mortality were developed using mechanism of injury and injury coding using either Abbreviated Injury Scale post-dot values (low-dimensional injury coding) or injury ICD-9 codes and their two-way interactions (high-dimensional injury coding). Model performance with and without inclusion of mechanism of injury was compared for both coding schemes, and the relative importance of mechanism of injury as a variable in each model type was evaluated. Results Among 62,569 records, a mortality rate of 0.9% was observed. Inclusion of mechanism of injury improved model performance when using low-dimensional injury coding but was associated with no improvement when using high-dimensional injury coding. Mechanism of injury contributed to 28% of model variance when using low-dimensional injury coding and <1% when high-dimensional injury coding was used. Conclusions Although mechanism of injury may be an important predictor of injury mortality among children sustaining blunt trauma, its importance as a predictor of mortality depends on the approach used for injury coding. Mechanism of injury is not an essential predictor of outcome after injury when coding schemes are used that better characterize injuries sustained after blunt pediatric trauma. PMID:26197948
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
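A stripped-down universal-threshold version of the idea looks as follows; the paper uses entry-adaptive thresholds and works with estimated idiosyncratic components after removing common factors, both of which this toy omits:

```python
import numpy as np

rng = np.random.default_rng(3)

def threshold_cov(R, tau):
    """Entry-wise thresholding of a sample covariance matrix:
    keep the diagonal, zero out small off-diagonal entries."""
    S = np.cov(R, rowvar=False)
    mask = np.abs(S) >= tau
    np.fill_diagonal(mask, True)     # never threshold the variances
    return S * mask

# Residuals with a truly sparse covariance: only columns 0 and 1 are correlated
n, p = 500, 6
R = rng.standard_normal((n, p))
R[:, 1] = 0.8 * R[:, 0] + 0.6 * rng.standard_normal(n)
S_hat = threshold_cov(R, tau=0.2)
```

With a threshold well above the sampling noise of the empty entries (roughly 1/sqrt(n) here), the estimator keeps the one genuine off-diagonal dependence and zeroes the rest.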
Impact of turbulence anisotropy near walls in room airflow.
Schälin, A; Nielsen, P V
2004-06-01
The influence of different turbulence models used in computational fluid dynamics predictions is studied in connection with room air movement. The turbulence models used are the high Re-number kappa-epsilon model and the high Re-number Reynolds stress model (RSM). The three-dimensional wall jet is selected for the work. The growth rate parallel to the wall in a three-dimensional wall jet is large compared with the growth rate perpendicular to the wall, and it is large compared with the growth rate in a free circular jet. It is shown that it is not possible to predict the high growth rate parallel with a surface in a three-dimensional wall jet by the kappa-epsilon turbulence model. Furthermore, it is shown that the growth rate can be predicted to a certain extent by the RSM with wall reflection terms. The flow in a deep room can be strongly influenced by details such as the growth rate of a three-dimensional wall jet. Predictions by a kappa-epsilon model and RSM show large deviations in the occupied zone. Measurements and observations of streamline patterns in model experiments indicate that a reasonable solution is obtained by the RSM compared with the solution obtained by the kappa-epsilon model. Computational fluid dynamics (CFD) is often used for the prediction of air distribution in rooms and for the evaluation of thermal comfort and indoor air quality. The most used turbulence model in CFD is the kappa-epsilon model. This model often produces good results; however, some cases require more sophisticated models. The prediction of a three-dimensional wall jet is improved if it is made by a Reynolds stress model (RSM). This model improves the prediction of the velocity level in the jet and in some special cases it may influence the entire flow in the occupied zone.
Three-Dimensional Transgenic Cell Models to Quantify Space Genotoxic Effects
NASA Technical Reports Server (NTRS)
Gonda, S.; Wu, H.; Pingerelli, P.; Glickman, B.
2000-01-01
In this paper we describe a three-dimensional, multicellular tissue-equivalent model, produced in NASA-designed rotating wall bioreactors using mammalian cells engineered for genomic containment of multiple copies of defined target genes for genotoxic assessment. The Rat 2(lambda) fibroblasts (Stratagene, Inc.) were genetically engineered to contain high-density target genes for mutagenesis. Stable three-dimensional, multicellular spheroids were formed when human mammary epithelial cells and Rat 2(lambda) fibroblasts were cocultured on Cytodex 3 beads in a rotating wall bioreactor. The utility of this spheroidal model for genotoxic assessment was indicated by a linear dose response curve and by results of gene sequence analysis of mutant clones from 400-micron diameter spheroids following low-dose, high-energy neon radiation exposure.
Phases and approximations of baryonic popcorn in a low-dimensional analogue of holographic QCD
NASA Astrophysics Data System (ADS)
Elliot-Ripley, Matthew
2015-07-01
The Sakai-Sugimoto model is the most pre-eminent model of holographic QCD, in which baryons correspond to topological solitons in a five-dimensional bulk spacetime. Recently it has been shown that a single soliton in this model can be well approximated by a flat-space self-dual Yang-Mills instanton with a small size, although studies of multi-solitons and solitons at finite density currently lie beyond the reach of numerical computation. A lower-dimensional analogue of the model has also been studied in which the Sakai-Sugimoto soliton is replaced by a baby Skyrmion in three spacetime dimensions with a warped metric. The lower dimensionality of this model means that full numerical field calculations are possible, and static multi-solitons and solitons at finite density were both investigated, in particular the baryonic popcorn phase transitions at high densities. Here we present and investigate an alternative lower-dimensional analogue of the Sakai-Sugimoto model in which the Sakai-Sugimoto soliton is replaced by an O(3)-sigma model instanton in a warped three-dimensional spacetime stabilized by a massive vector meson. A more detailed range of baryonic popcorn phase transitions is found, and the low-dimensional model is used as a testing ground to check the validity of common approximations made in the full five-dimensional model, namely approximating fields using their flat-space equations of motion, and performing a leading order expansion in the metric.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
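The "hierarchy of functions of increasing dimensions" in the abstract above has a standard schematic form (a generic HDMR expansion, written here in our own notation rather than the paper's; in practice the series is truncated after the low-order terms):

```latex
f(x_1,\dots,x_M) \;=\; f_0 \;+\; \sum_{i=1}^{M} f_i(x_i)
  \;+\; \sum_{1 \le i < j \le M} f_{ij}(x_i, x_j)
  \;+\; \cdots \;+\; f_{12\cdots M}(x_1,\dots,x_M)
```

The fundamental assumption cited in the abstract, that only low-order correlations matter, corresponds to dropping all terms beyond the pairwise ones, which is what keeps the computational cost polynomial rather than exponential in the number of variables.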
Scalable Learning for Geostatistics and Speaker Recognition
2011-01-01
of prior knowledge of the model or due to improved robustness requirements). Both these methods have their own advantages and disadvantages. The use...application. If the data is well-correlated and low-dimensional, any prior knowledge available on the data can be used to build a parametric model. In the...absence of prior knowledge, non-parametric methods can be used. If the data is high-dimensional, PCA-based dimensionality reduction is often the first
Modeling of heavy-gas effects on airfoil flows
NASA Technical Reports Server (NTRS)
Drela, Mark
1992-01-01
Thermodynamic models were constructed for a calorically imperfect gas and for a non-ideal gas. These were incorporated into a quasi-one-dimensional flow solver to develop an understanding of the differences in flow behavior between the new models and the perfect gas model. The models were also incorporated into a two-dimensional flow solver to investigate their effects on transonic airfoil flows. Specifically, the calculations simulated airfoil testing in a proposed high-Reynolds-number heavy gas test facility. The results indicate that the non-idealities caused significant differences in the flow field, but that matching of an appropriate non-dimensional parameter led to flows similar to those in air.
Office-Based Three-Dimensional Printing Workflow for Craniomaxillofacial Fracture Repair.
Elegbede, Adekunle; Diaconu, Silviu C; McNichols, Colton H L; Seu, Michelle; Rasko, Yvonne M; Grant, Michael P; Nam, Arthur J
2018-03-08
Three-dimensional printing of patient-specific models is being used in various aspects of craniomaxillofacial reconstruction. Printing is typically outsourced to off-site vendors, with the main disadvantages being increased costs and time for production. Office-based 3-dimensional printing has been proposed as a means to reduce costs and delays, but remains largely underused because of the perception among surgeons that it is futuristic, highly technical, and prohibitively expensive. The goal of this report is to demonstrate the feasibility and ease of incorporating in-office 3-dimensional printing into the standard workflow for facial fracture repair. Patients with complex mandible fractures requiring open repair were identified. Open-source software was used to create virtual 3-dimensional skeletal models of the initial injury pattern, and then of the ideally reduced fractures, based on preoperative computed tomography (CT) scan images. The virtual 3-dimensional skeletal models were then printed in our office using a commercially available 3-dimensional printer and bioplastic filament. The 3-dimensional skeletal models were used as templates to bend and shape titanium plates that were subsequently used for intraoperative fixation. Average print time was 6 hours. Excluding the 1-time cost of the 3-dimensional printer of $2500, roughly the cost of a single commercially produced model, the average material cost to print 1 model mandible was $4.30. Postoperative CT imaging demonstrated precise, predicted reduction in all patients. Office-based 3-dimensional printing of skeletal models can be routinely used in repair of facial fractures in an efficient and cost-effective manner.
A One Dimensional, Time Dependent Inlet/Engine Numerical Simulation for Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, Doug; Davis, Milt, Jr.; Cole, Gary
1999-01-01
The NASA Lewis Research Center (LeRC) and the Arnold Engineering Development Center (AEDC) have developed a closely coupled computer simulation system that provides a one dimensional, high frequency inlet/engine numerical simulation for aircraft propulsion systems. The simulation system, operating under the LeRC-developed Application Portable Parallel Library (APPL), closely coupled a supersonic inlet with a gas turbine engine. The supersonic inlet was modeled using the Large Perturbation Inlet (LAPIN) computer code, and the gas turbine engine was modeled using the Aerodynamic Turbine Engine Code (ATEC). Both LAPIN and ATEC provide a one dimensional, compressible, time dependent flow solution by solving the one dimensional Euler equations for the conservation of mass, momentum, and energy. Source terms are used to model features such as bleed flows, turbomachinery component characteristics, and inlet subsonic spillage while unstarted. High frequency events, such as compressor surge and inlet unstart, can be simulated with a high degree of fidelity. The simulation system was exercised using a supersonic inlet with sixty percent of the supersonic area contraction occurring internally, and a GE J85-13 turbojet engine.
Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang
2018-01-01
A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies. PMID:29466394
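The clustering-plus-RBF idea described above can be illustrated with a minimal sketch in plain NumPy. The function names, the plain K-means initialization of the centers, and the fixed kernel width are our simplifications for illustration, not the authors' SC-K-means-RBF model:

```python
import numpy as np

def kmeans_centers(X, k, iters=50, seed=0):
    # simple Lloyd's algorithm: cluster the inputs and use the
    # cluster means as RBF centers
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def _design(X, centers, sigma):
    # Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))
    d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_fit(X, y, centers, sigma):
    # output weights by linear least squares on the RBF features
    w, *_ = np.linalg.lstsq(_design(X, centers, sigma), y, rcond=None)
    return w

def rbf_predict(X, centers, weights, sigma):
    return _design(X, centers, sigma) @ weights
```

In the paper's setting the clustering step serves to pick the hyperparameters (centers) of the RBF network; the same pattern applies regardless of whether the target is a one-dimensional series or a gridded three-dimensional field.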
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. 
One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
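A minimal sketch of the factor-plus-thresholding idea: estimate factor loadings by PCA, then sparsify the residual (idiosyncratic) covariance by soft-thresholding its off-diagonal entries. The function name, the fixed threshold `tau`, and the plain soft-threshold rule are illustrative assumptions, not the paper's exact adaptive procedure:

```python
import numpy as np

def factor_thresholded_cov(X, k, tau=0.05):
    """X: n x p data matrix; k: number of common factors; tau: threshold."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                       # sample covariance
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:k]        # leading eigenpairs
    L = vecs[:, idx] * np.sqrt(vals[idx])   # PCA estimate of loadings
    Su = S - L @ L.T                        # residual covariance
    # soft-threshold off-diagonal entries to impose sparsity,
    # keeping the diagonal (idiosyncratic variances) intact
    Su_thr = np.sign(Su) * np.maximum(np.abs(Su) - tau, 0.0)
    np.fill_diagonal(Su_thr, np.diag(Su))
    return L @ L.T + Su_thr
```

The key point mirrored from the abstract: the thresholding is applied to the residuals after removing common factors, not to the raw covariance, so cross-sectional correlation beyond the factors is allowed but forced to be sparse.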
Cold spray nozzle mach number limitation
NASA Astrophysics Data System (ADS)
Jodoin, B.
2002-12-01
The classic one-dimensional isentropic flow approach is used along with a two-dimensional axisymmetric numerical model to show that the exit Mach number of a cold spray nozzle should be limited due to two factors. To show this, the two-dimensional model is validated with experimental data. Although both models show that the stagnation temperature is an important limiting factor, the one-dimensional approach fails to show how important the shock-particle interactions are in limiting the nozzle Mach number. It is concluded that for an air nozzle spraying solid powder particles, the nozzle Mach number should be set between 1.5 and 3 to limit the negative effects of the high stagnation temperature and of the shock-particle interactions.
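The one-dimensional isentropic relations behind the stagnation-temperature argument can be sketched directly. These are the standard perfect-gas formulas; the helper names are ours:

```python
def stagnation_temperature_ratio(M, gamma=1.4):
    """T0/T for one-dimensional isentropic flow of a perfect gas."""
    return 1.0 + 0.5 * (gamma - 1.0) * M ** 2

def area_mach_ratio(M, gamma=1.4):
    """A/A* from the isentropic area-Mach relation (sonic throat, M = 1)."""
    g = gamma
    t = (2.0 / (g + 1.0)) * stagnation_temperature_ratio(M, g)
    return t ** ((g + 1.0) / (2.0 * (g - 1.0))) / M
```

For air (gamma = 1.4), T0/T is 1.45 at M = 1.5 but 2.8 at M = 3, so driving the exit Mach number up requires a much hotter reservoir for the same static gas temperature, consistent with the stagnation-temperature limitation described in the abstract.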
Lessons learned in the analysis of high-dimensional data in vaccinomics
Oberg, Ann L.; McKinney, Brett A.; Schaid, Daniel J.; Pankratz, V. Shane; Kennedy, Richard B.; Poland, Gregory A.
2015-01-01
The field of vaccinology is increasingly moving toward the generation, analysis, and modeling of extremely large and complex high-dimensional datasets. We have used data such as these in the development and advancement of the field of vaccinomics to enable prediction of vaccine responses and to develop new vaccine candidates. However, the application of systems biology to what has been termed “big data,” or “high-dimensional data,” is not without significant challenges—chief among them a paucity of gold standard analysis and modeling paradigms with which to interpret the data. In this article, we relate some of the lessons we have learned over the last decade of working with high-dimensional, high-throughput data as applied to the field of vaccinomics. The value of such efforts, however, is ultimately to better understand the immune mechanisms by which protective and non-protective responses to vaccines are generated, and to use this information to support a personalized vaccinology approach in creating better, and safer, vaccines for the public health. PMID:25957070
Echocardiography derived three-dimensional printing of normal and abnormal mitral annuli.
Mahmood, Feroze; Owais, Khurram; Montealegre-Gallegos, Mario; Matyal, Robina; Panzica, Peter; Maslow, Andrew; Khabbaz, Kamal R
2014-01-01
The objective of this study was to assess the clinical feasibility of using echocardiographic data to generate three-dimensional models of normal and pathologic mitral valve annuli before and after repair procedures. High-resolution transesophageal echocardiographic data from five patients was analyzed to delineate and track the mitral annulus (MA) using TomTec Image-Arena software. Coordinates representing the annulus were imported into Solidworks software for constructing solid models. These solid models were converted to stereolithographic (STL) file format and three-dimensionally printed by a commercially available MakerBot Replicator 2 three-dimensional printer. Total time from image acquisition to printing was approximately 30 min. Models created were highly reflective of the known geometry, shape and size of normal and pathologic mitral annuli. Post-repair models also closely resembled the shapes of the rings they were implanted with. Compared to echocardiographic images of annuli seen on a computer screen, physical models were able to convey clinical information more comprehensively, making them helpful in appreciating pathology, as well as post-repair changes. Three-dimensional printing of the MA is possible and clinically feasible using routinely obtained echocardiographic images. Given the short turn-around time and the lack of need for additional imaging, the technique we describe here has the potential for rapid integration into clinical practice to assist with surgical education, planning and decision-making.
High dimensional biological data retrieval optimization with NoSQL technology.
Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike
2014-01-01
High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper.
We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data.
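The row-key idea behind such a key-value design can be illustrated with an ordinary Python dict standing in for an ordered store like HBase. The key layout `trial|gene|patient` is a hypothetical example for illustration, not the actual tranSMART/HBase schema:

```python
# toy key-value store: string keys emulate HBase's lexicographically
# ordered row keys, so a prefix scan retrieves one gene's values for
# all patients in a single pass instead of a relational join
store = {}

def put(trial, gene, patient, value):
    # hypothetical row-key layout: trial|gene|patient -> expression value
    store[f"{trial}|{gene}|{patient}"] = value

def scan_gene(trial, gene):
    prefix = f"{trial}|{gene}|"
    return {k.split("|")[2]: v
            for k, v in sorted(store.items()) if k.startswith(prefix)}

put("GSE2658", "TP53", "P001", 7.2)
put("GSE2658", "TP53", "P002", 6.9)
put("GSE2658", "BRCA1", "P001", 5.1)
print(scan_gene("GSE2658", "TP53"))  # {'P001': 7.2, 'P002': 6.9}
```

The design choice mirrored here is that the access pattern (all expression values for a gene across patients) is baked into the key order, which is what makes the scan cheap compared with a relational query over hundreds of patient records.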
Rebar: Reinforcing a Matching Estimator with Predictions from High-Dimensional Covariates
ERIC Educational Resources Information Center
Sales, Adam C.; Hansen, Ben B.; Rowan, Brian
2018-01-01
In causal matching designs, some control subjects are often left unmatched, and some covariates are often left unmodeled. This article introduces "rebar," a method using high-dimensional modeling to incorporate these commonly discarded data without sacrificing the integrity of the matching design. After constructing a match, a researcher…
Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-12-13
In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are more coarse than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.
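The hierarchical decomposition described above can be written schematically. The notation below is ours, not the paper's: F_0 is the initial estimate, the F_ij are the two-dimensional terms over pairs of fine collective variables s_i, and the G_k act on coarser collective variables z_k built from the fine ones:

```latex
F(s_1,\dots,s_N) \;\approx\; F_0(s)
  \;+\; \sum_{(i,j)} F_{ij}(s_i, s_j)
  \;+\; \sum_{k} G_k\bigl(z_k(s)\bigr)
```

Each successive class of terms acts on a longer length scale, which is what the abstract means by a hierarchical free energy surface.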
An intermediate-scale model for thermal hydrology in low-relief permafrost-affected landscapes
Jan, Ahmad; Coon, Ethan T.; Painter, Scott L.; ...
2017-07-10
Integrated surface/subsurface models for simulating the thermal hydrology of permafrost-affected regions in a warming climate have recently become available, but the computational demands of those new process-rich simulation tools have thus far limited their applications to one-dimensional or small two-dimensional simulations. We present a mixed-dimensional model structure for efficiently simulating surface/subsurface thermal hydrology in low-relief permafrost regions at watershed scales. The approach replaces a full three-dimensional system with a two-dimensional overland thermal hydrology system and a family of one-dimensional vertical columns, where each column represents a fully coupled surface/subsurface thermal hydrology system without lateral flow. The system is then operator split, sequentially updating the overland flow system without sources and the one-dimensional columns without lateral flows. We show that the approach is highly scalable, supports subcycling of different processes, and compares well with the corresponding fully three-dimensional representation at significantly less computational cost. Those advances enable recently developed representations of freezing soil physics to be coupled with thermal overland flow and surface energy balance at scales of 100s of meters. Although developed and demonstrated for permafrost thermal hydrology, the mixed-dimensional model structure is applicable to integrated surface/subsurface thermal hydrology in general.
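The operator-split structure described above, a lateral overland update followed by independent column updates, can be sketched with placeholder physics. The diffusion/relaxation updates below are toy stand-ins, not the model's actual thermal hydrology:

```python
import numpy as np

def step(surface_T, column_T, dt=0.1, k_lat=0.5, k_vert=0.2):
    """One operator-split step: surface_T is the overland field (one value
    per column here), column_T[i, z] is the vertical profile of column i."""
    # 1) lateral overland update, no subsurface sources
    #    (toy diffusion with periodic boundaries)
    lap = np.roll(surface_T, 1) + np.roll(surface_T, -1) - 2.0 * surface_T
    surface_T = surface_T + dt * k_lat * lap
    # 2) each one-dimensional column updated independently (no lateral
    #    flow), coupled to the surface only through its top cell
    for i in range(column_T.shape[0]):
        column_T[i, 0] += dt * k_vert * (surface_T[i] - column_T[i, 0])
        for z in range(1, column_T.shape[1]):
            column_T[i, z] += dt * k_vert * (column_T[i, z - 1] - column_T[i, z])
    return surface_T, column_T
```

Because step 2 touches each column in isolation, the columns can be distributed across processors and subcycled at their own time steps, which is the scalability argument made in the abstract.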
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
Adaptation of an articulated fetal skeleton model to three-dimensional fetal image data
NASA Astrophysics Data System (ADS)
Klinder, Tobias; Wendland, Hannes; Wachter-Stehle, Irina; Roundhill, David; Lorenz, Cristian
2015-03-01
The automatic interpretation of three-dimensional fetal images poses specific challenges compared to other three-dimensional diagnostic data, especially since the orientation of the fetus in the uterus and the position of the extremities is highly variable. In this paper, we present a comprehensive articulated model of the fetal skeleton and the adaptation of the articulation for pose estimation in three-dimensional fetal images. The model is composed out of rigid bodies where the articulations are represented as rigid body transformations. Given a set of target landmarks, the model constellation can be estimated by optimization of the pose parameters. Experiments are carried out on 3D fetal MRI data yielding an average error per case of 12.03+/-3.36 mm between target and estimated landmark positions.
Simulating Effects of High Angle of Attack on Turbofan Engine Performance
NASA Technical Reports Server (NTRS)
Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei
2013-01-01
A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben Hamida, M. B.; Charrada, K.
2012-06-15
This paper is devoted to studying the dynamics of a high-intensity discharge lamp in a horizontal position. As an example of application, we chose the high-pressure mercury lamp. For this, we developed a stable, DC-powered, three-dimensional model. After validating this model, we used it to study the influence of several parameters on the major transport phenomena of mass and energy in a lamp operating in a horizontal position. Indeed, the mass of mercury and the electric current are varied and the effect of convective transport is studied.
Zhang, Bo; Chen, Zhen; Albert, Paul S
2012-01-01
High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the 2 modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and complex mean-variance relationship in the biomarkers levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.
Zhang, Yifan; Gao, Xunzhang; Peng, Xuan; Ye, Jiaqi; Li, Xiang
2018-05-16
High Resolution Range Profile (HRRP) recognition has attracted considerable attention in the field of Radar Automatic Target Recognition (RATR). However, traditional HRRP recognition methods fail to model high-dimensional sequential data efficiently and are not robust to noise. To address these problems, a novel stochastic neural network model named the Attention-based Recurrent Temporal Restricted Boltzmann Machine (ARTRBM) is proposed in this paper. The RTRBM is utilized to extract discriminative features, and the attention mechanism is adopted to select the major ones. The RTRBM models high-dimensional HRRP sequences efficiently because it captures the temporal and spatial correlations between adjacent HRRPs. The attention mechanism has been used in sequential recognition tasks such as machine translation and relation classification, where it makes a model focus on the features that matter most for recognition. The combination of the RTRBM and the attention mechanism therefore allows our model to extract internally related features and select the important parts of them. Additionally, the model performs well on noise-corrupted HRRP data. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that our proposed model outperforms traditional methods, which indicates that the ARTRBM effectively extracts, selects, and utilizes the correlation information between adjacent HRRPs and is suitable for high-dimensional or noise-corrupted data.
Social Inferences from Faces: Ambient Images Generate a Three-Dimensional Model
ERIC Educational Resources Information Center
Sutherland, Clare A. M.; Oldmeadow, Julian A.; Santos, Isabel M.; Towler, John; Burt, D. Michael; Young, Andrew W.
2013-01-01
Three experiments are presented that investigate the two-dimensional valence/trustworthiness by dominance model of social inferences from faces (Oosterhof & Todorov, 2008). Experiment 1 used image averaging and morphing techniques to demonstrate that consistent facial cues subserve a range of social inferences, even in a highly variable sample of…
NASA Astrophysics Data System (ADS)
Turkin, Yaroslav V.; Kuptsov, Pavel V.
2018-04-01
A quantum model of the spin dynamics of a spin-orbit-coupled two-dimensional electron gas in the presence of a strong high-frequency electromagnetic field is suggested. Interaction of electrons with optical phonons is taken into account in the second order of perturbation theory.
Band gaps in grid structure with periodic local resonator subsystems
NASA Astrophysics Data System (ADS)
Zhou, Xiaoqin; Wang, Jun; Wang, Rongqi; Lin, Jieqiong
2017-09-01
The grid structure is widely used in architectural and mechanical applications for its high strength and economical use of material. This paper presents a study of an acoustic metamaterial beam (AMB) based on a normal square grid structure with local resonators, which offers both flexible band gaps and high static stiffness and therefore has high application potential in vibration control. First, the AMB with a variable cross-section frame is modeled analytically by a beam-spring-mass model derived using the extended Hamilton’s principle and Bloch’s theorem. This model is used to compute the dispersion relation of the designed AMB in terms of the design parameters, and the influence of the relevant parameters on the band gaps is discussed. A two-dimensional finite element model of the AMB is then built and analyzed in COMSOL Multiphysics; both the dispersion properties of the unit cell and the wave attenuation in a finite AMB are in good agreement with the derived model. The effects of the design parameters of the two-dimensional model on the band gaps are further examined, and the results verify the analytical model. Finally, the wave attenuation performance of three-dimensional AMBs with equal and unequal thickness is presented and discussed.
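The local-resonance band-gap mechanism can be sketched with a minimal one-dimensional mass-in-mass chain, a drastic simplification of the beam-spring-mass model above; all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cos_q(omega, k=1.0, m1=1.0, k2=0.2, m2=0.5):
    """Bloch phase cos(q) for a 1D mass-in-mass (local resonator) chain.

    Dispersion relation: m_eff(w) * w^2 = 2k(1 - cos q), with the
    frequency-dependent effective mass m_eff = m1 + m2*w0^2/(w0^2 - w^2),
    where w0^2 = k2/m2 is the local resonance.
    |cos q| <= 1 -> propagating band; |cos q| > 1 -> band gap.
    """
    w0sq = k2 / m2
    m_eff = m1 + m2 * w0sq / (w0sq - omega**2)
    return 1.0 - m_eff * omega**2 / (2.0 * k)

omegas = np.linspace(1e-3, 2.0, 2000)
in_gap = np.abs(cos_q(omegas)) > 1.0     # frequencies that cannot propagate
gap = omegas[in_gap]
print(f"band gap roughly from {gap.min():.3f} to {gap.max():.3f}")
```

Scanning the frequency axis shows a stop band opening around the resonator frequency sqrt(k2/m2), the same qualitative mechanism the AMB exploits.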
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
Community Sediment Transport Modeling, National Ocean Partnership Program
2009-12-01
delta. A high-resolution, one-dimensional model that resolves the phase of the forcing gravity waves is being used to test the hypothesized mechanisms... dimensional process models to operational elements in the CSTMS framework. Sherwood and Ferre modified the existing algorithms for tracking stratigraphy... Verdes shelf, California. Continental Shelf Research (revised manuscript submitted), [refereed] Frank, D. P., D. L. Foster, and C. R. Sherwood
NASA Astrophysics Data System (ADS)
Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried
2017-02-01
We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.
Cairoli, Andrea; Piovani, Duccio; Jensen, Henrik Jeldtoft
2014-12-31
We propose a new procedure to monitor and forecast the onset of transitions in high-dimensional complex systems. We describe our procedure by an application to the tangled nature model of evolutionary ecology. The quasistable configurations of the full stochastic dynamics are taken as input for a stability analysis by means of the deterministic mean-field equations. Numerical analysis of the high-dimensional stability matrix allows us to identify unstable directions associated with eigenvalues with a positive real part. The overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation is found to be a good early warning of the transitions occurring intermittently.
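The overlap construction at the heart of the procedure, projecting the instantaneous configuration onto the unstable eigendirections of the mean-field stability matrix, can be sketched as follows (the toy Jacobian and the specific overlap measure are illustrative assumptions):

```python
import numpy as np

def warning_signal(x, jacobian):
    """Fraction of the configuration x lying in the unstable subspace.

    jacobian: mean-field stability matrix at the quasistable
    configuration; eigendirections with Re(eigenvalue) > 0 are
    unstable. A large return value is read as an early warning.
    """
    vals, vecs = np.linalg.eig(jacobian)
    unstable = vecs[:, vals.real > 0].astype(complex)
    if unstable.shape[1] == 0:
        return 0.0
    # least-squares projection onto the (possibly non-orthogonal) subspace
    coeffs, *_ = np.linalg.lstsq(unstable, x.astype(complex), rcond=None)
    proj = unstable @ coeffs
    return float(np.linalg.norm(proj) / np.linalg.norm(x))

# toy stability matrix with one unstable direction (eigenvalue +0.5)
J = np.diag([-1.0, -0.3, 0.5])
print(warning_signal(np.array([0.1, 0.1, 1.0]), J))  # close to 1: alarm
print(warning_signal(np.array([1.0, 1.0, 0.0]), J))  # no unstable component
```

In the paper the Jacobian comes from the deterministic mean-field equations while x is the configuration of the full stochastic system.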
A numerical code for a three-dimensional magnetospheric MHD equilibrium model
NASA Technical Reports Server (NTRS)
Voigt, G.-H.
1992-01-01
Development of two-dimensional and three-dimensional MHD equilibrium models for Earth's magnetosphere was begun. The original proposal was motivated by the realization that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.
Design applications for supercomputers
NASA Technical Reports Server (NTRS)
Studerus, C. J.
1987-01-01
The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers has made the solution of these complex flows more practical, permitting the introduction of such codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming part of it. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows over hypersonic vehicle forebodies and engine inlets.
Numerical aerodynamic simulation facility. [for flows about three-dimensional configurations
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Hathaway, A. W.
1978-01-01
Critical to the advancement of computational aerodynamics capability is the ability to simulate flows about three-dimensional configurations that contain both compressible and viscous effects, including turbulence and flow separation at high Reynolds numbers. Analyses were conducted of two solution techniques for solving the Reynolds averaged Navier-Stokes equations describing the mean motion of a turbulent flow with certain terms involving the transport of turbulent momentum and energy modeled by auxiliary equations. The first solution technique is an implicit approximate factorization finite-difference scheme applied to three-dimensional flows that avoids the restrictive stability conditions when small grid spacing is used. The approximate factorization reduces the solution process to a sequence of three one-dimensional problems with easily inverted matrices. The second technique is a hybrid explicit/implicit finite-difference scheme which is also factored and applied to three-dimensional flows. Both methods are applicable to problems with highly distorted grids and a variety of boundary conditions and turbulence models.
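The key idea of the first technique, reducing a multi-dimensional implicit solve to a sequence of one-dimensional problems with easily inverted tridiagonal matrices, can be illustrated on a model problem. A 2D heat equation stands in here for the Reynolds-averaged Navier-Stokes system, and the grid size and step parameters are assumptions:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One approximately factored (ADI) step for u_t = u_xx + u_yy.

    Zero Dirichlet boundaries; r = dt/dx^2. The 2D implicit solve is
    factored into 1D tridiagonal solves, first along x, then along y,
    avoiding the stability restriction of an explicit scheme.
    """
    n = u.shape[0]
    a = np.full(n, -r / 2); b = np.full(n, 1 + r); c = np.full(n, -r / 2)
    half = np.empty_like(u)
    for j in range(n):                    # sweep 1: implicit in x
        up = u[:, j + 1] if j + 1 < n else 0.0
        dn = u[:, j - 1] if j - 1 >= 0 else 0.0
        half[:, j] = thomas(a, b, c, (1 - r) * u[:, j] + (r / 2) * (up + dn))
    out = np.empty_like(u)
    for i in range(n):                    # sweep 2: implicit in y
        up = half[i + 1, :] if i + 1 < n else 0.0
        dn = half[i - 1, :] if i - 1 >= 0 else 0.0
        out[i, :] = thomas(a, b, c, (1 - r) * half[i, :] + (r / 2) * (up + dn))
    return out

u = np.zeros((33, 33)); u[16, 16] = 1.0   # hot spot on an interior grid
for _ in range(20):
    u = adi_step(u, r=0.5)
print(u.sum())   # heat diffuses; a little leaks through the cold boundaries
```

Each sweep inverts only tridiagonal matrices, which is the same structural trick the approximate factorization applies to the full flow equations.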
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
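A minimal sketch of an activity-profile-based naive Bayes classifier: only binary activities against other targets serve as features, with the feature-independence assumption the paper relies on, and no compound structure information. The synthetic data are an illustrative assumption:

```python
import numpy as np

def fit_profile_nb(F, y, alpha=1.0):
    """Bernoulli naive Bayes on activity profiles.

    F: (n_compounds, n_targets) binary activity matrix used as features;
    y: activity against the target being predicted; alpha: Laplace
    smoothing. Targets are treated as independent given the class.
    """
    priors = np.empty(2)
    likes = np.empty((2, F.shape[1]))
    for c in (0, 1):
        Fc = F[y == c]
        priors[c] = (len(Fc) + alpha) / (len(F) + 2 * alpha)
        likes[c] = (Fc.sum(axis=0) + alpha) / (len(Fc) + 2 * alpha)
    return priors, likes

def predict_profile_nb(F, priors, likes):
    logp = np.log(priors)[None, :] + F @ np.log(likes).T \
         + (1 - F) @ np.log(1 - likes).T
    return (logp[:, 1] > logp[:, 0]).astype(int)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
# activity against 10 correlated targets agrees with y 80% of the time
F = (rng.random((200, 10)) < np.where(y[:, None] == 1, 0.8, 0.2)).astype(float)
priors, likes = fit_profile_nb(F[:150], y[:150])
acc = (predict_profile_nb(F[150:], priors, likes) == y[150:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

The correlation across targets built into the toy data is what makes the structure-free prediction work, mirroring the correlation effects the paper identifies.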
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTLs) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
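A simplified, unregularized sketch of joint mean-variance modeling: alternate a weighted least-squares fit of the mean with a regression of log squared residuals for the variance. This is not the paper's doubly regularized estimator, and the toy data and variance model are assumptions:

```python
import numpy as np

def hetero_fit(X, y, n_iter=20):
    """Alternating mean/variance fit for heteroscedastic regression.

    Step 1: regress log squared residuals on X to model log sigma_i^2
    (the constant bias of log chi^2_1 is absorbed by the intercept).
    Step 2: weighted least squares for the mean coefficients with
    weights 1/sigma_i^2. Unregularized illustration only.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    gamma = np.zeros(X.shape[1])
    for _ in range(n_iter):
        log_r2 = np.log((y - X @ beta) ** 2 + 1e-8)
        gamma = np.linalg.lstsq(X, log_r2, rcond=None)[0]
        w = np.exp(-X @ gamma)                # estimated 1 / sigma_i^2
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta, gamma

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -1.0])
gamma_true = np.array([0.0, 1.0, 0.0])   # error variance grows with column 1
y = X @ beta_true + np.exp(0.5 * (X @ gamma_true)) * rng.normal(size=n)
beta_hat, gamma_hat = hetero_fit(X, y)
print(beta_hat.round(2), gamma_hat.round(2))
```

The paper adds regularization to both components so the scheme survives high-dimensional predictor sets; this sketch only shows the alternating mean/variance structure.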
Spertus, Jacob V; Normand, Sharon-Lise T
2018-04-23
High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
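The flavor of beta-binomial conjugacy for a binary outcome can be sketched with a stratified estimator: within propensity-score strata, conjugate Beta posteriors for treated and control outcome rates are drawn in closed form and combined. This is an illustrative stand-in, not the paper's exact estimator:

```python
import numpy as np

def beta_binomial_ate(ps, t, y, n_strata=5, n_draws=2000, seed=0):
    """Stratified Bayesian average-treatment-effect sketch.

    ps: estimated propensity scores; t: binary treatment; y: binary
    outcome. Within each propensity stratum, treated and control
    outcome probabilities get Beta(1,1) priors updated by binomial
    counts; the ATE posterior is the stratum-weighted difference of
    posterior draws.
    """
    rng = np.random.default_rng(seed)
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1,
                     0, n_strata - 1)
    ate_draws = np.zeros(n_draws)
    for s in range(n_strata):
        m = strata == s
        w = m.mean()                       # stratum weight
        for arm, sign in ((1, +1), (0, -1)):
            sel = m & (t == arm)
            succ, tot = y[sel].sum(), sel.sum()
            ate_draws += sign * w * rng.beta(1 + succ, 1 + tot - succ, n_draws)
    return ate_draws

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
ps = 1 / (1 + np.exp(-x))                        # true propensity
t = (rng.random(n) < ps).astype(int)
y = (rng.random(n) < 0.3 + 0.2 * t).astype(int)  # true ATE = 0.2
draws = beta_binomial_ate(ps, t, y)
print(draws.mean())   # posterior mean near 0.2
```

In practice the propensity scores would come from a high-dimensional Bayesian treatment model (horseshoe regression, BART), and its posterior uncertainty would be propagated into the draws.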
CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets
Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.
2017-01-01
High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787
Gentry, Amanda Elswick; Jackson-Cook, Colleen K; Lyon, Debra E; Archer, Kellie J
2015-01-01
The pathological description of the stage of a tumor is an important clinical designation and is considered, like many other forms of biomedical data, an ordinal outcome. Currently, statistical methods for predicting an ordinal outcome using clinical, demographic, and high-dimensional correlated features are lacking. In this paper, we propose a method that fits an ordinal response model to predict an ordinal outcome for high-dimensional covariate spaces. Our method penalizes some covariates (high-throughput genomic features) without penalizing others (such as demographic and/or clinical covariates). We demonstrate the application of our method to predict the stage of breast cancer. In our model, breast cancer subtype is a nonpenalized predictor, and CpG site methylation values from the Illumina Human Methylation 450K assay are penalized predictors. The method has been made available in the ordinalgmifs package in the R programming environment.
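The central idea, penalizing genomic coefficients while leaving clinical covariates unpenalized, can be sketched in a cumulative-logit (proportional-odds) model. A generic optimizer replaces the paper's gmifs algorithm, and the toy data, penalty weight, and variable names are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_penalized_ordinal(X, y, penalized, lam=5.0, n_classes=3):
    """Proportional-odds fit with an L1 penalty on selected coefficients.

    penalized: indices of columns (e.g. high-throughput features) whose
    coefficients are shrunk; the remaining (clinical) coefficients are
    left unpenalized.
    """
    n, p = X.shape

    def unpack(params):
        # strictly increasing cutpoints via exponentiated increments
        cuts = np.cumsum(np.concatenate(([params[0]],
                                         np.exp(params[1:n_classes - 1]))))
        return cuts, params[n_classes - 1:]

    def objective(params):
        cuts, beta = unpack(params)
        eta = X @ beta
        cum = expit(cuts[None, :] - eta[:, None])        # P(y <= j | x)
        cdf = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
        probs = cdf[np.arange(n), y + 1] - cdf[np.arange(n), y]
        nll = -np.sum(np.log(np.maximum(probs, 1e-12)))
        return nll + lam * np.abs(beta[penalized]).sum()

    res = minimize(objective, np.zeros(n_classes - 1 + p), method="Powell")
    return unpack(res.x)

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))      # col 0: "clinical"; cols 1-3: "genomic" noise
latent = 2.0 * X[:, 0] + rng.logistic(size=n)
y = np.digitize(latent, [-1.0, 1.0])      # ordinal outcome in {0, 1, 2}
cuts, beta = fit_penalized_ordinal(X, y, penalized=np.array([1, 2, 3]))
print(beta)   # large unpenalized clinical effect, shrunken noise terms
```

The paper's gmifs path algorithm scales this idea to hundreds of thousands of CpG sites, which a generic optimizer cannot.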
Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primm, Trent; Ruggles, Arthur; Freels, James D
2009-03-01
A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) core thermal-fluid behavior is developed. These models were developed to facilitate design of a low-enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code COMSOL, with initial assessment of fuel, poison, and clad conduction modeling capability, followed by assessment of mating the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple two-dimensional conduction in the fuel to a two-dimensional thermo-fluid model of the coolant for a HFIR core cooling sub-channel, with additional assessment of simulation outcomes. Finally, three-dimensional simulations of a fuel plate and cooling channel are presented.
Development of an Unstructured, Three-Dimensional Material Response Design Tool
NASA Technical Reports Server (NTRS)
Schulz, Joseph; Stern, Eric; Palmer, Grant; Muppidi, Suman; Schroeder, Olivia
2017-01-01
A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material from the surrounding high enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. This extensibility is critical since thermal protection systems are becoming increasingly complex, e.g. woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries as well as multi-dimensional physics, which have been shown to be important in some scenarios and are not captured by one-dimensional models. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.
Flow Analysis of a Gas Turbine Low- Pressure Subsystem
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1997-01-01
The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core engine of this engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core engine consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.
MURI: Adaptive Waveform Design for Full Spectral Dominance
2011-03-11
a three-dimensional urban tracking model, based on the nonlinear measurement model (that uses the urban multipath geometry with different types of ... the time evolution of the scattering function with a high-dimensional dynamic system; a multiple particle filter technique is used to sequentially... integration of space-time coding with a fixed set of beams. It complements the
Three-dimensional cell culture models for investigating human viruses.
He, Bing; Chen, Guomin; Zeng, Yi
2016-10-01
Three-dimensional (3D) culture models are physiologically relevant, as they provide reproducible results, experimental flexibility and can be adapted for high-throughput experiments. Moreover, these models bridge the gap between traditional two-dimensional (2D) monolayer cultures and animal models. 3D culture systems have significantly advanced basic cell science and tissue engineering, especially in the fields of cell biology and physiology, stem cell research, regenerative medicine, cancer research, drug discovery, and gene and protein expression studies. In addition, 3D models can provide unique insight into bacteriology, virology, parasitology and host-pathogen interactions. This review summarizes and analyzes recent progress in human virological research with 3D cell culture models. We discuss viral growth, replication, proliferation, infection, virus-host interactions and antiviral drugs in 3D culture models.
Extended frequency turbofan model
NASA Technical Reports Server (NTRS)
Mason, J. R.; Park, J. W.; Jaekel, R. F.
1980-01-01
The fan model was developed using two-dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one-dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With the two-dimensional fan model verified, this program formulated a high-frequency F-100(3) engine simulation using row-by-row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.
Decomposition and model selection for large contingency tables.
Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter
2010-04-01
Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
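The lower-dimensional sub-problems produced by such a decomposition are fitted with standard log-linear machinery. As a minimal illustration, iterative proportional fitting (IPF) of the no-three-way-interaction model on a small 2x2x2 table (the counts are made up):

```python
import numpy as np

def ipf_no_three_way(table, n_iter=100):
    """Fit the no-three-way-interaction log-linear model by IPF.

    Cycles through the three two-way margins of a 2x2x2 table,
    rescaling the fitted table to match each observed margin in turn.
    A small-scale stand-in for the sub-problems a decomposition of a
    large table produces.
    """
    fit = np.full(table.shape, table.sum() / table.size)
    for _ in range(n_iter):
        for axis in (0, 1, 2):
            obs = table.sum(axis=axis)           # observed two-way margin
            cur = fit.sum(axis=axis)             # current fitted margin
            fit = fit * np.expand_dims(obs / cur, axis)
    return fit

table = np.array([[[10., 20.], [30., 40.]],
                  [[15., 25.], [35., 45.]]])
fitted = ipf_no_three_way(table)
print(fitted.round(2))   # all three two-way margins match the data
```

For tables with many variables the number of cells explodes, which is exactly the regime where the paper's divide-and-combine strategy replaces a single global fit.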
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry and surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off, and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models with relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.
Tao, Chenyang; Nichols, Thomas E.; Hua, Xue; Ching, Christopher R.K.; Rolls, Edmund T.; Thompson, Paul M.; Feng, Jianfeng
2017-01-01
We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high dimensional tensor fields. GRRLF identifies from the structure in the data the effective dimensionality of the data, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions can be efficiently computed. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. PMID:27666385
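The linear core of reduced-rank regression can be sketched in a few lines: project the ordinary least-squares coefficient matrix onto the leading principal directions of the fitted values. This omits the tensor-field and nonparametric parts of GRRLF; dimensions and data below are assumptions:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank regression.

    Fits Y ~ X @ B with rank(B) = rank by projecting the OLS fit onto
    the top principal directions of the fitted values, which play the
    role of the latent factors.
    """
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    fitted = X @ B_ols
    # top right-singular vectors of the fitted values span the factors
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]       # projector onto the latent subspace
    return B_ols @ P

rng = np.random.default_rng(0)
n, p, q, r = 200, 10, 8, 2
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))  # rank-2 truth
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))
B_hat = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(B_hat))   # 2
```

Exploiting the low effective dimensionality is what reduces overfitting relative to the unconstrained OLS fit, the same principle GRRLF generalizes.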
The craniomandibular mechanics of being human
Wroe, Stephen; Ferrara, Toni L.; McHenry, Colin R.; Curnoe, Darren; Chamoli, Uphar
2010-01-01
Diminished bite force has been considered a defining feature of modern Homo sapiens, an interpretation inferred from the application of two-dimensional lever mechanics and the relative gracility of the human masticatory musculature and skull. This conclusion has various implications with regard to the evolution of human feeding behaviour. However, human dental anatomy suggests a capacity to withstand high loads and two-dimensional lever models greatly simplify muscle architecture, yielding less accurate results than three-dimensional modelling using multiple lines of action. Here, to our knowledge, in the most comprehensive three-dimensional finite element analysis performed to date for any taxon, we ask whether the traditional view that the bite of H. sapiens is weak and the skull too gracile to sustain high bite forces is supported. We further introduce a new method for reconstructing incomplete fossil material. Our findings show that the human masticatory apparatus is highly efficient, capable of producing a relatively powerful bite using low muscle forces. Thus, relative to other members of the superfamily Hominoidea, humans can achieve relatively high bite forces, while overall stresses are reduced. Our findings resolve apparently discordant lines of evidence, i.e. the presence of teeth well adapted to sustain high loads within a lightweight cranium and mandible. PMID:20554545
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In dendritic growth simulation, computational efficiency and problem scale have an extremely important influence on the usefulness of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of multiple coupled physical processes. The acceleration effect of different GPU node counts on different calculation scales is explored. On the foundation of the multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication, and overlap of MPI communication with GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field model, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI and GPU computing performs better, achieving a 1.7-fold speedup over the basic multi-GPU model when 21 GPUs are used.
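The update that such multi-GPU codes accelerate is, at its core, an explicit stencil sweep over the phase field. As a hedged illustration (a single-device NumPy sketch of a simple Allen-Cahn-type equation, not the paper's quantitative binary-alloy model), the time step below is the kind of kernel each GPU would apply to its sub-block of the grid:

```python
import numpy as np

def laplacian(phi, h):
    # 5-point stencil with periodic boundaries.
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / h**2

def allen_cahn_step(phi, dt=1e-3, h=0.1, eps=0.05):
    # Explicit Euler update of d(phi)/dt = eps^2 * lap(phi) + phi - phi^3,
    # i.e. gradient flow of a double-well free energy.
    return phi + dt * (eps**2 * laplacian(phi, h) + phi - phi**3)

rng = np.random.default_rng(1)
phi = rng.uniform(-0.1, 0.1, size=(128, 128))
for _ in range(200):
    phi = allen_cahn_step(phi)
```

In the MPI+CUDA setting, the grid is decomposed into per-GPU blocks; at each step the boundary halos are exchanged with non-blocking MPI calls so that the transfer overlaps with the CUDA kernel updating the block interior, which is exactly the second optimization scheme described above.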
Equation of State of the Two-Dimensional Hubbard Model
NASA Astrophysics Data System (ADS)
Cocchi, Eugenio; Miller, Luke A.; Drewes, Jan H.; Koschorreck, Marco; Pertot, Daniel; Brennecke, Ferdinand; Köhl, Michael
2016-04-01
The subtle interplay between kinetic energy, interactions, and dimensionality challenges our comprehension of strongly correlated physics observed, for example, in the solid state. In this quest, the Hubbard model has emerged as a conceptually simple, yet rich model describing such physics. Here we present an experimental determination of the equation of state of the repulsive two-dimensional Hubbard model over a broad range of interactions, 0 ≲ U/t ≲ 20, and temperatures, down to k_B T/t = 0.63(2), using high-resolution imaging of ultracold fermionic atoms in optical lattices. We show density profiles, compressibilities, and double occupancies over the whole doping range, and, hence, our results constitute benchmarks for state-of-the-art theoretical approaches.
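A minimal theoretical counterpart to such measurements (an illustrative sketch, not the experiment's analysis) is the half-filled two-site Hubbard dimer: its singlet sector is a 2×2 matrix, and diagonalizing it already shows how double occupancy is suppressed as U/t grows.

```python
import numpy as np

def hubbard_dimer(U, t=1.0):
    # Half-filled two-site Hubbard model, restricted to the singlet sector.
    # Basis: covalent singlet (|up,dn> - |dn,up>)/sqrt(2) and
    #        ionic singlet   (|updn,0> + |0,updn>)/sqrt(2).
    H = np.array([[0.0, -2.0 * t],
                  [-2.0 * t, U]])
    E, V = np.linalg.eigh(H)
    ground = V[:, 0]
    # Double occupancy per site: the ionic component carries one doubly
    # occupied site shared over the two sites.
    return E[0], ground[1] ** 2 / 2.0

# Ground energy matches the closed form (U - sqrt(U^2 + 16 t^2)) / 2 ...
E0, _ = hubbard_dimer(U=4.0)
assert abs(E0 - (4.0 - np.sqrt(4.0**2 + 16.0)) / 2.0) < 1e-12

# ... and double occupancy falls from 1/4 toward 0 as U/t grows.
doccs = [hubbard_dimer(U)[1] for U in (0.0, 4.0, 8.0, 16.0)]
```

At U = 0 the double occupancy is exactly 1/4, the non-interacting value; increasing U drives it toward zero, the same qualitative trend the lattice experiment resolves across the doping range.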
A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS
We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...
NASA Astrophysics Data System (ADS)
Guo, Guifang; Long, Bo; Cheng, Bo; Zhou, Shiqiong; Xu, Peng; Cao, Binggang
In order to better understand the thermal abuse behavior of high-capacity, high-power lithium-ion batteries for electric vehicle applications, a three-dimensional thermal model has been developed for analyzing the temperature distribution under abuse conditions. The model takes into account the effects of heat generation, internal conduction and convection, and external heat dissipation to predict the temperature distribution in a battery. The three-dimensional model also considers the geometrical features needed to simulate oven tests, which are significant in larger cells for electric vehicle applications. The model predictions are compared to oven test results for VLP 50/62/100S-Fe (3.2 V/55 Ah) LiFePO4/graphite cells and shown to be in good agreement.
Multi-Material Closure Model for High-Order Finite Element Lagrangian Hydrodynamics
Dobrev, V. A.; Kolev, T. V.; Rieben, R. N.; ...
2016-04-27
We present a new closure model for single fluid, multi-material Lagrangian hydrodynamics and its application to high-order finite element discretizations of these equations [1]. The model is general with respect to the number of materials, dimension, and space and time discretizations. Knowledge about exact material interfaces is not required. Material indicator functions are evolved by a closure computation at each quadrature point of mixed cells, which can be viewed as a high-order variational generalization of the method of Tipton [2]. This computation is defined by the notion of partial non-instantaneous pressure equilibration, while the full pressure equilibration is achieved by both the closure model and the hydrodynamic motion. Exchange of internal energy between materials is derived through entropy considerations, that is, every material produces positive entropy, and the total entropy production is maximized in compression and minimized in expansion. Results are presented for standard one-dimensional two-material problems, followed by two-dimensional and three-dimensional multi-material high-velocity impact arbitrary Lagrangian-Eulerian calculations. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
NASA Astrophysics Data System (ADS)
Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.
2013-12-01
Aquatic habitat models use flow variables, which may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models, to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present an analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on the simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA): a straight pool-riffle reach (South Fork Boise River), small sinuous pool-riffle streams in a large meadow (Bear Valley Creek), and a steep, confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that the choice between 1D and 2D modeling affects both the spatial distribution of habitat quality and WUA under both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small but depended on stream type. Nevertheless, the differences in spatially distributed habitat quality are considerable in all streams. The steep, confined plane-bed stream had larger differences between aquatic habitat quality defined with 1D and 2D flow models than the streams with well-defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches
Suenaga, Hideyuki; Hoang Tran, Huy; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Mori, Yoshiyuki; Takato, Tsuyoshi
2013-01-01
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. PMID:23703710
NASA Technical Reports Server (NTRS)
Zeng, Xiping; Tao, Wei-Kuo; Lang, Stephen; Hou, Arthur Y.; Zhang, Minghua; Simpson, Joanne
2008-01-01
Month-long large-scale forcing data from two field campaigns are used to drive a cloud-resolving model (CRM) and produce ensemble simulations of clouds and precipitation. Observational data are then used to evaluate the model results. To improve the model results, a new parameterization of the Bergeron process is proposed that incorporates the number concentration of ice nuclei (IN). Numerical simulations reveal that atmospheric ensembles are sensitive to IN concentration and ice crystal multiplication. Two- (2D) and three-dimensional (3D) simulations are carried out to address the sensitivity of atmospheric ensembles to model dimensionality. It is found that the ensembles with high IN concentration are more sensitive to dimensionality than those with low IN concentration. Both the analytic solutions of linear dry models and the CRM output show that there are more convective cores with stronger updrafts in 3D simulations than in 2D, which explains the differing sensitivity of the ensembles to dimensionality at different IN concentrations.
Diaz-Ruelas, Alvaro; Jeldtoft Jensen, Henrik; Piovani, Duccio; Robledo, Alberto
2016-12-01
It is well known that low-dimensional nonlinear deterministic maps close to a tangent bifurcation exhibit intermittency, and this circumstance has been exploited, e.g., by Procaccia and Schuster [Phys. Rev. A 28, 1210 (1983)], to develop a general theory of 1/f spectra. This suggests it is interesting to study the extent to which the behavior of a high-dimensional stochastic system can be described by such tangent maps. The Tangled Nature (TaNa) model of evolutionary ecology is an ideal candidate for such a study, a significant model as it is capable of reproducing a broad range of the phenomenology of macroevolution and ecosystems. The TaNa model exhibits strong intermittency reminiscent of punctuated equilibrium and, like the fossil record of mass extinction, the intermittency in the model is found to be non-stationary, a feature typical of many complex systems. We derive a mean-field version for the evolution of the likelihood function controlling the reproduction of species and find a local map close to tangency. This mean-field map, being a local approximation, is able to describe qualitatively only one episode of the intermittent dynamics of the full TaNa model. To complement this result, we construct a complete nonlinear dynamical system model consisting of successive tangent bifurcations that generates time evolution patterns resembling those of the full TaNa model on macroscopic scales. The switch from one tangent bifurcation to the next in the sequences produced in this model is stochastic in nature, based on criteria obtained from the local mean-field approximation, and capable of imitating the changing set of species types and total population in the TaNa model. The model combines fully deterministic dynamics with instantaneous parameter random jumps at stochastically drawn times.
In spite of the limitations of our approach, which entails a drastic collapse of degrees of freedom, the description of a high-dimensional model system in terms of a low-dimensional one appears to be illuminating.
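The kind of near-tangent map invoked here can be illustrated with the classic Pomeau-Manneville iteration (an illustrative sketch, not the TaNa mean-field map itself): trajectories linger in long laminar stretches near the tangency at x = 0 and escape in chaotic bursts.

```python
import numpy as np

def pomeau_manneville(x0, z=2.0, n=5000):
    # x_{n+1} = x_n + x_n**z (mod 1): the map is tangent to the diagonal
    # at x = 0, producing long laminar phases punctuated by chaotic bursts.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = (x + x**z) % 1.0
    return xs

xs = pomeau_manneville(0.1)
laminar = float(np.mean(xs < 0.1))   # fraction of time near the tangency
```

The intermittent alternation of laminar episodes and bursts in this one-dimensional iterate is the low-dimensional caricature that the successive-tangent-bifurcation construction above stitches together, with stochastic switching standing in for the changing species composition.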
Pairing phase diagram of three holes in the generalized Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, O.; Espinosa, J.E.
Investigations of high-Tc superconductors suggest that the electronic correlation may play a significant role in the formation of pairs. Although the main interest is in the physics of two-dimensional highly correlated electron systems, one-dimensional models related to high-temperature superconductivity are very popular due to the conjecture that properties of the 1D and 2D variants of certain models have common aspects. Among the models for correlated electron systems that attempt to capture the essential physics of high-temperature superconductors and parent compounds, the Hubbard model is one of the simplest. Here, the pairing problem of a three-electron system has been studied by using a real-space method and the generalized Hubbard Hamiltonian. This method includes the correlated hopping interactions as an extension of the previously proposed mapping method, and is based on mapping the correlated many-body problem onto an equivalent site- and bond-impurity tight-binding one in a higher-dimensional space, where the problem was solved in a non-perturbative way. In a linear chain, the authors analyzed the pairing phase diagram of three correlated holes for different values of the Hamiltonian parameters. For some values of the hopping parameters they obtain an analytical solution for all kinds of interactions.
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high-dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting. Thus, only non-standard, computationally intensive procedures based on simulating the marginal likelihood have been proposed so far. In this paper, we describe an efficient method of implementation by demonstrating how the high-dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high-dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
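The flavor of the trick (illustrative only; the paper's actual transformation involves the full multivariate normal CDF) is that integrals of normal CDFs against normal densities stay in closed form, so a seemingly intractable expectation collapses to a single CDF evaluation. A Monte Carlo check of the simplest such identity:

```python
import math
import numpy as np

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Identity: E[Phi(a + b*Z)] = Phi(a / sqrt(1 + b^2)) for Z ~ N(0, 1);
# the normal integral of a normal CDF stays in closed form.
a, b = 0.7, 1.3
rng = np.random.default_rng(42)
z = rng.normal(size=100_000)
mc = float(np.mean([Phi(a + b * zi) for zi in z]))
exact = Phi(a / math.sqrt(1.0 + b * b))
```

The multivariate analogue of this collapse is what lets the marginal likelihood's high-dimensional integrations be folded into one multivariate normal CDF.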
Resonant Zener tunneling in two-dimensional periodic photonic lattices.
Desyatnikov, Anton S; Kivshar, Yuri S; Shchesnovich, Valery S; Cavalcanti, Solange B; Hickmann, Jandir M
2007-02-15
We study Zener tunneling in two-dimensional photonic lattices and derive, for the case of hexagonal symmetry, the generalized Landau-Zener-Majorana model describing resonant interaction between high-symmetry points of the photonic spectral bands. We demonstrate that this effect can be employed for the generation of Floquet-Bloch modes and verify the model by direct numerical simulations of the tunneling effect.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool for informing decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important versus unimportant input factors.
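The core of the variogram idea can be sketched as follows (a toy illustration with an assumed test function, not the full VARS algorithm, which integrates variograms across perturbation scales): the directional variogram of the response is large along influential inputs and small along unimportant ones.

```python
import numpy as np

def model(x):
    # Toy response: strongly driven by input 0, weakly by input 1.
    return np.sin(3.0 * x[:, 0]) + 0.1 * x[:, 1]

def directional_variogram(f, dim, h=0.1, n=10_000, seed=0):
    # gamma_dim(h) = 0.5 * E[(f(x + h*e_dim) - f(x))^2], x ~ U[0,1]^2.
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=(n, 2))
    step = np.zeros(2)
    step[dim] = h
    return float(0.5 * np.mean((f(x + step) - f(x)) ** 2))

g0 = directional_variogram(model, 0)
g1 = directional_variogram(model, 1)
```

Ranking inputs by these variogram values is how such an analysis flags unimportant factors that can be fixed before calibration, shrinking the problem dimensionality.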
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Ying-jun; Jia, Zhen-yuan; Zhang, Jun; Qian, Min
2011-01-01
In the operation of huge heavy-load manipulators, such as free forging machines, hydraulic die-forging presses, forging manipulators, heavy grasping manipulators and large-displacement manipulators, measurement of six-dimensional heavy force/torque and real-time force feedback at the operation interface are the basis for realizing coordinated operation control and force compliance control. They are also an effective way to raise control accuracy and achieve highly efficient manufacturing. To solve the problem of dynamically measuring six-dimensional, time-varying heavy loads in extreme manufacturing processes, a novel principle of parallel load sharing for six-dimensional heavy force/torque is put forward. The measuring principle of the six-dimensional force sensor is analyzed, and its spatial model is built and decoupled. The load-sharing ratios are analyzed and calculated in the vertical and horizontal directions. The mapping relationship between the six-dimensional heavy force/torque to be measured and the output force is established. A finite element model of the parallel piezoelectric six-dimensional heavy force/torque sensor is set up, and its static characteristics are analyzed with the ANSYS software. The main parameters affecting the load-sharing ratio are analyzed. Load-sharing experiments with different diameters of the parallel axis are designed. The results show that the six-dimensional heavy force/torque sensor has good linearity, with non-linearity errors less than 1%. The parallel axis provides a good load-sharing effect: the larger its diameter, the better the effect. The experimental results are in accordance with the FEM analysis. The sensor has the advantages of a large measuring range, good linearity, high inherent frequency, and high rigidity. It can be widely used in extreme environments for real-time, accurate measurement of six-dimensional time-varying huge loads on manipulators.
Perceptual integration of kinematic components in the recognition of emotional facial expressions.
Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin
2018-04-01
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, massively reducing the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found a remarkably low dimensionality, with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment demonstrating that expressions simulated with only two primitives are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting a very low-dimensional parametrization of the associated facial expressions.
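Learning such low-dimensional movement primitives amounts to a matrix factorization of the motion data. A hedged sketch (the synthetic data, dimensions, and plain SVD are illustrative assumptions; the paper's actual learning model may differ): two temporal primitives generate every feature trajectory, and the top two singular components recover them almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(7)
T, d, k = 100, 20, 2   # time samples, tracked features, true primitives

# Synthetic "kinematics": every feature trajectory is a weighted mixture
# of two temporal primitives plus small noise.
tgrid = np.linspace(0.0, 2.0 * np.pi, T)
primitives = np.vstack([np.sin(tgrid), np.cos(2.0 * tgrid)])   # (k, T)
weights = rng.normal(size=(d, k))
data = weights @ primitives + 0.01 * rng.normal(size=(d, T))

# SVD recovers the two-dimensional representation: two components
# explain essentially all of the variance.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
recon = U[:, :k] * s[:k] @ Vt[:k]
rel_err = np.linalg.norm(data - recon) / np.linalg.norm(data)
```

The sharp elbow after the second singular value is the signature of an effective dimensionality of two, mirroring the paper's finding that two primitives suffice.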
Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.
Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua
2014-04-02
The gene regulatory network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but they are limited by the assumption of linear regulation effects. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that can flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established, and simulation studies are performed to validate the proposed approach. An application example identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method.
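The group-sparsity machinery referenced here rests on block soft-thresholding: a group LASSO solver repeatedly applies a proximal operator that either shrinks a whole coefficient group or removes it entirely, which is what zeroes out non-regulating genes. A minimal sketch (the group values and penalty below are illustrative, not from the paper):

```python
import numpy as np

def group_soft_threshold(beta, lam):
    # Proximal operator of lam * ||beta||_2 for one coefficient group:
    # shrink the whole block toward zero, or zero it out entirely.
    norm = np.linalg.norm(beta)
    if norm <= lam:
        return np.zeros_like(beta)
    return (1.0 - lam / norm) * beta

strong = group_soft_threshold(np.array([3.0, -4.0]), lam=1.0)  # kept, shrunk
weak = group_soft_threshold(np.array([0.3, 0.4]), lam=1.0)     # zeroed out
```

Because each group here would collect the basis coefficients of one additive component, killing a group deletes that regulator from the ODE, giving the sparse network estimate.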
On the role of radiation and dimensionality in predicting flow opposed flame spread over thin fuels
NASA Astrophysics Data System (ADS)
Kumar, Chenthil; Kumar, Amit
2012-06-01
In this work a flame-spread model is formulated in three dimensions to simulate opposed-flow flame spread over thin solid fuels. The flame-spread model is coupled to a three-dimensional gas radiation model. The experiments [1] on downward spread and zero-gravity quiescent spread over finite-width thin fuel are simulated by flame-spread models in both two and three dimensions to assess the role of radiation and the effect of dimensionality on the prediction of the flame-spread phenomena. It is observed that while radiation plays only a minor role in normal-gravity downward spread, in zero-gravity quiescent spread surface radiation loss holds the key to correct prediction of the low-oxygen flame spread rate and quenching limit. The present three-dimensional simulations show that even in zero gravity gas radiation affects the flame spread rate only moderately (as much as 20% at 100% oxygen), as the heat feedback effect exceeds the radiation loss effect only moderately. However, the two-dimensional model with the gas radiation model substantially over-predicts the zero-gravity flame spread rate due to underestimation of gas radiation loss to the ambient surroundings. The two-dimensional model was also found to be inadequate for correctly predicting zero-gravity flame attributes such as the flame length and the flame width. A three-dimensional model was found to be indispensable for consistently describing the zero-gravity flame-spread experiments [1] (including flame spread rate and flame size), especially at high oxygen levels (>30%). On the other hand, it was observed that for normal-gravity downward flame spread at oxygen levels up to 60%, the two-dimensional model was sufficient to predict flame spread rate and flame size reasonably well. Gas radiation is seen to increase the three-dimensional effect, especially at elevated oxygen levels (>30% for zero-gravity and >60% for normal-gravity flames).
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen
2018-01-25
Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO provides consistent power in both low- and high-dimensional situations compared with other methods for high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study of obesity reveals 10 significant body mass index-associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis that can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online.
Modeling Three-Dimensional Shock Initiation of PBX 9501 in ALE3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leininger, L; Springer, H K; Mace, J
A recent SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has provided 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate and study code predictions. These SMIS tests used a powder gun to shoot scaled NATO standard fragments into a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. This SMIS real-world shot scenario creates a unique test-bed because (1) SMIS tests facilitate the investigation of 3D Shock to Detonation Transition (SDT) within the context of a considerable suite of diagnostics, and (2) many of the fragments arrive at the impact plate off-center and at an angle of impact. A particular goal of these model validation experiments is to demonstrate the predictive capability of the ALE3D implementation of the Tarver-Lee Ignition and Growth reactive flow model [2] within a fully 3-dimensional regime of SDT. The 3-dimensional Arbitrary Lagrange Eulerian (ALE) hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle-of-impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations reproduce the observed 'Go/No-Go' 3D Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied to the response of heterogeneous high explosives in the SDT regime.
NASA Astrophysics Data System (ADS)
Liu, Zexi; Cohen, Fernand
2017-11-01
We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, recasting a difficult two-dimensional (2-D) face recognition problem as a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under good illumination conditions, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.
Model based LV-reconstruction in bi-plane x-ray angiography
NASA Astrophysics Data System (ADS)
Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz
2005-04-01
Interventional x-ray angiography is state of the art in the diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast-enhanced dynamic projection images of the left ventricle. A new model-based algorithm for three-dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric super-ellipses are deformed until their projection profiles optimally fit measured ventricular projections. Deformation is controlled by a simplex optimization procedure. The resulting optimized parameter set serves as the initial guess for neighboring slices. A three-dimensional surface model of the ventricle is built from the stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. Results show conformance with the provided projection data, and the high convergence speed makes the algorithm useful for clinical application. Fully three-dimensional reconstruction of the left ventricle has high potential to improve clinical findings in interventional cardiology.
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under appropriate regularity conditions, and with a suitable choice of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, the proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
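The shrinkage idea behind such selection procedures can be illustrated with a minimal sketch. The example below uses a least-squares LASSO solved by coordinate descent with the soft-thresholding operator; this is not the paper's check-loss criterion or varying-coefficient structure, and the data are synthetic, but it shows how a shrinkage penalty drives insignificant coefficients exactly to zero.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator, the basic shrinkage step."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO (illustrative; the paper uses a robust check loss)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r / n, lam) / (X[:, j] @ X[:, j] / n)
    return beta

# Synthetic data: only the first two of ten predictors matter
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
coef = np.zeros(10)
coef[:2] = [3.0, -2.0]
y = X @ coef + 0.1 * rng.standard_normal(200)
beta = lasso_cd(X, y, lam=0.2)
```

The penalty `lam` plays the role of the tuning parameter whose choice governs the consistency of selection: the two active coefficients survive (mildly shrunken) while the eight null ones are thresholded to zero.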
Two-dimensional numerical model for the high electron mobility transistor
NASA Astrophysics Data System (ADS)
Loret, Dany
1987-11-01
A two-dimensional numerical drift-diffusion model for the High Electron Mobility Transistor (HEMT) is presented. Special attention is paid to the modeling of the current flow over the heterojunction. A finite difference scheme is used to solve the equations, and a variable mesh spacing was implemented to cope with the strong variations of functions near the heterojunction. Simulation results are compared to experimental data for a 0.7 μm gate length device. Small-signal transconductances and cut-off frequency obtained from the 2-D model agree well with the experimental values from S-parameter measurements. It is shown that the numerical models give good insight into device behaviour, including important parasitic effects such as electron injection into the bulk GaAs.
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
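The full-order LQG benchmark against which the fixed-order controllers are compared rests on a pair of algebraic Riccati equations. A minimal sketch, assuming a finite-difference discretization of the 1-D heat equation with hypothetical boundary actuation and point measurement (not the paper's spline-based Ritz-Galerkin approximation):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant: finite-difference discretization of the 1-D heat equation
n = 20
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
B = np.zeros((n, 1)); B[0, 0] = 1.0 / h     # hypothetical boundary actuation
C = np.zeros((1, n)); C[0, -1] = 1.0        # hypothetical point measurement

Q, R = np.eye(n), np.eye(1)                 # state and control weights
V, W = np.eye(n), np.eye(1)                 # process and measurement noise covariances

# The two algebraic Riccati equations of standard (full-order) LQG design
P = solve_continuous_are(A, B, Q, R)        # control Riccati equation
S = solve_continuous_are(A.T, C.T, V, W)    # filter (dual) Riccati equation
K = np.linalg.solve(R, B.T @ P)             # regulator gain
L = S @ C.T @ np.linalg.inv(W)              # Kalman filter gain
```

The optimal projection approach replaces this full-order gain pair with gains of a fixed, lower controller order; the sketch above only reproduces the standard LQG baseline whose performance the first-order controllers approach to within a few percent.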
NASA Astrophysics Data System (ADS)
Özaydın, Sinan; Bülent Tank, Sabri; Karaş, Mustafa; Sandvol, Eric
2017-04-01
Wide-band magnetotelluric (MT) (360 Hz - 1860 sec) data were acquired at 25 sites along a north - south aligned profile cutting across the Central Pontides, which are made up of highly metamorphosed formations and their tectonic boundaries, including a Lower Cretaceous-aged turbidite sequence, the Central Pontides Metamorphic Supercomplex (CPMS), the North Anatolian Fault Zone (NAFZ) and the Izmir-Ankara-Erzincan Suture Zone (IAESZ). Dimensionality analyses over all observation points demonstrated high electrical anisotropy, which indicates complex geological and tectonic structures. This dimensional complexity and the presence of the electrically conductive Black Sea necessitated a three-dimensional analysis. The inverse modeling routines ModEM (Egbert and Kelbert, 2012) and WSINV3DMT (Siripunvaraporn et al., 2005) were utilized to reveal the geo-electrical structure of this unusually complicated region. Interpretations of the resultant models are summarized as follows: (i) the Çangaldaǧ and Domuzdaǧ complexes appear as highly resistive bodies bounded by north-dipping faults; (ii) highly conductive Tosya Basin sediments overlie the ophiolitic materials as a thin cover south of the NAFZ; (iii) the North Anatolian Fault and some auxiliary faults within the system exhibit conductive-resistive interfaces that reach lower crustal levels; (iv) the IAESZ is a clear feature marked by the resistivity contrast between NAFZ-related sedimentary basins and Neo-Tethyan ophiolites.
Hypersonic Combustor Model Inlet CFD Simulations and Experimental Comparisons
NASA Technical Reports Server (NTRS)
Venkatapathy, E.; TokarcikPolsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
Numerous two- and three-dimensional computational simulations were performed for the inlet associated with the combustor model for the hypersonic propulsion experiment in the NASA Ames 16-Inch Shock Tunnel. The inlet was designed to produce a combustor-inlet flow that is nearly two-dimensional and of sufficient mass flow rate for large scale combustor testing. The three-dimensional simulations demonstrated that the inlet design met all the design objectives and that the inlet produced a very nearly two-dimensional combustor inflow profile. Numerous two-dimensional simulations were performed with various levels of approximation, such as in the choice of chemical and physical models, as well as numerical approximations. Parametric studies were conducted to better understand and characterize the inlet flow. Results from the two- and three-dimensional simulations were used to predict the mass flux entering the combustor, and a mass flux correlation as a function of facility stagnation pressure was developed. Surface heat flux and pressure measurements were compared with the computed results and good agreement was found. The computational simulations helped determine the inlet flow characteristics in the high enthalpy environment, the important parameters that affect the combustor-inlet flow, and the sensitivity of the inlet flow to various modeling assumptions.
Engineering three-dimensional cardiac microtissues for potential drug screening applications.
Wang, L; Huang, G; Sha, B; Wang, S; Han, Y L; Wu, J; Li, Y; Du, Y; Lu, T J; Xu, F
2014-01-01
Heart disease is one of the major global health issues. Despite rapid advances in cardiac tissue engineering, only limited success has been achieved in curing cardiovascular diseases. This situation is mainly due to poor understanding of the mechanisms of diverse heart diseases and the unavailability of effective in vitro heart tissue models for cardiovascular drug screening. With the development of microengineering technologies, three-dimensional (3D) cardiac microtissue (CMT) models, mimicking the 3D architectural microenvironment of native heart tissue, have been developed. The engineered 3D CMT models hold greater potential for assessing effective drug candidates than traditional two-dimensional cardiomyocyte culture models. This review discusses the development of 3D CMT models and highlights their potential applications for high-throughput screening of cardiovascular drug candidates.
Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.
Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros
2018-05-01
We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
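The Lorenz 96 system named in the abstract is a standard, easily reproduced test-bed for such data-driven forecasters. A minimal sketch of generating a training trajectory with a fourth-order Runge-Kutta integrator; the step size, dimension, and forcing are common illustrative choices, not necessarily the paper's settings:

```python
import numpy as np

def lorenz96(x, F=8.0):
    """Right-hand side of Lorenz 96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, f):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Generate a chaotic trajectory to serve as training data for an LSTM or GP
dim, dt, steps = 40, 0.01, 5000
x = 8.0 * np.ones(dim)
x[0] += 0.01                      # small perturbation off the unstable fixed point
traj = np.empty((steps, dim))
for t in range(steps):
    x = rk4_step(x, dt, lorenz96)
    traj[t] = x
```

Rows of `traj` (or a reduced-order projection of them, as in the paper) would then be windowed into input/target sequences for the recurrent network.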
Measurement of the Equation of State of the Two-Dimensional Hubbard Model
NASA Astrophysics Data System (ADS)
Miller, Luke; Cocchi, Eugenio; Drewes, Jan; Koschorreck, Marco; Pertot, Daniel; Brennecke, Ferdinand; Koehl, Michael
2016-05-01
The subtle interplay between kinetic energy, interactions and dimensionality challenges our comprehension of strongly correlated physics observed, for example, in the solid state. In this quest, the Hubbard model has emerged as a conceptually simple, yet rich model describing such physics. Here we present an experimental determination of the equation of state of the repulsive two-dimensional Hubbard model over a broad range of interactions, 0 <= U/t <= 20, and temperatures down to k_B T/t = 0.63(2), using high-resolution imaging of ultracold fermionic atoms in optical lattices. We show density profiles, compressibilities and double occupancies over the whole doping range, and hence our results constitute benchmarks for state-of-the-art theoretical approaches.
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends them toward coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single-physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analyses can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications.
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single-physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, used and compared with the performance of the KL approach and a brute force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assimilate information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling required for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the proposed algorithms were applied to perform UQ and DA for assembly level (CASL Progression Problem Number 6) and core wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
Stirling Analysis Comparison of Commercial vs. High-Order Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2007-01-01
Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.
Stirling Analysis Comparison of Commercial Versus High-Order Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2005-01-01
Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.
Berzak, L; Jones, A D; Kaita, R; Kozub, T; Logan, N; Majeski, R; Menard, J; Zakharov, L
2010-10-01
The lithium tokamak experiment (LTX) is a modest-sized spherical tokamak (R0 = 0.4 m and a = 0.26 m) designed to investigate the low-recycling lithium wall operating regime for magnetically confined plasmas. LTX will reach this regime through a lithium-coated shell internal to the vacuum vessel, conformal to the plasma last-closed-flux surface, and heated to 300-400 °C. This structure is highly conductive and not axisymmetric. The three-dimensional nature of the shell causes the eddy currents and magnetic fields to be three-dimensional as well. In order to analyze the plasma equilibrium in the presence of three-dimensional eddy currents, an extensive array of unique magnetic diagnostics has been implemented. Sensors are designed to survive high temperatures and incidental contact with lithium and provide data on toroidal asymmetries as well as full coverage of the poloidal cross-section. The magnetic array has been utilized to determine the effects of nonaxisymmetric eddy currents and to model the start-up phase of LTX. Measurements from the magnetic array, coupled with two-dimensional field component modeling, have allowed a suitable field null and initial plasma current to be produced. For full magnetic reconstructions, a three-dimensional electromagnetic model of the vacuum vessel and shell is under development.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
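The local-model core of LWPR, Gaussian receptive fields weighting local linear fits, can be sketched in batch form. The sketch below deliberately omits the incremental updates, partial-least-squares projections, and kernel adaptation that define full LWPR; the data set and bandwidth are illustrative assumptions:

```python
import numpy as np

def lwr_predict(Xq, X, y, width=0.1):
    """Locally weighted linear regression: a batch sketch of the locally
    linear models inside LWPR (no PLS directions, no incremental learning)."""
    Xa = np.hstack([X, np.ones((len(X), 1))])        # affine local model
    preds = []
    for q in Xq:
        # Gaussian receptive field centered at the query point
        w = np.exp(-np.sum((X - q) ** 2, axis=1) / (2 * width**2))
        WX = Xa * w[:, None]
        # weighted least-squares fit of the local linear model
        beta = np.linalg.lstsq(WX.T @ Xa, WX.T @ y, rcond=None)[0]
        preds.append(np.append(q, 1.0) @ beta)
    return np.array(preds)

# Synthetic 1-D regression problem
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(400)
yq = lwr_predict(np.array([[0.0], [0.5]]), X, y)
```

Full LWPR replaces the batch weighted least-squares step with incremental univariate regressions along selected projection directions, which is what keeps its cost linear in the input dimension.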
Three-dimensional analysis of tubular permanent magnet machines
NASA Astrophysics Data System (ADS)
Chai, J.; Wang, J.; Howe, D.
2006-04-01
This paper presents results from a three-dimensional finite element analysis of a tubular permanent magnet machine, and quantifies the influence of the laminated modules from which the stator core is assembled on the flux linkage and thrust force capability as well as on the self- and mutual inductances. The three-dimensional finite element (FE) model accounts for the nonlinear, anisotropic magnetization characteristic of the laminated stator structure, and for the voids which exist between the laminated modules. Predicted results are compared with those deduced from an axisymmetric FE model. It is shown that the emf and thrust force deduced from the three-dimensional model are significantly lower than those which are predicted from an axisymmetric field analysis, primarily as a consequence of the teeth and yoke being more highly saturated due to the presence of the voids in the laminated stator core.
Zheng, X; Xue, Q; Mittal, R; Beilamowicz, S
2010-11-01
A new flow-structure interaction method is presented, which couples a sharp-interface immersed boundary method flow solver with a finite-element method based solid dynamics solver. The coupled method provides robust and high-fidelity solutions for complex flow-structure interaction (FSI) problems such as those involving three-dimensional flow and viscoelastic solids. The FSI solver is used to simulate flow-induced vibrations of the vocal folds during phonation. Both two- and three-dimensional models have been examined, and qualitative as well as quantitative comparisons have been made with established results in order to validate the solver. The solver is used to study the onset of phonation in a two-dimensional laryngeal model and the dynamics of the glottal jet in a three-dimensional model, and results from these studies are also presented.
A manifold learning approach to target detection in high-resolution hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led to a focus on target detection and to the development of a specific graph-based model of the data with subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE).
We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
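The standard (non-adaptive) LLE algorithm that the adaptive, target-guided variant builds on can be sketched directly, following the classic three steps of Roweis and Saul: neighborhood graph, local reconstruction weights, bottom eigenvectors. The data set here is a hypothetical curve embedded in 3-D, not hyperspectral imagery:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Standard locally linear embedding (the adaptive variant described
    above modifies the graph-construction step)."""
    n = len(X)
    # 1. k-nearest neighbors (column 0 of the argsort is the point itself)
    d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    # 2. weights that best reconstruct each point from its neighbors
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)   # regularize Gram matrix
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # 3. bottom eigenvectors of (I - W)^T (I - W), skipping the constant one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Hypothetical example: unroll a noisy 1-D helix embedded in 3-D
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 4 * np.pi, 300))
X = np.c_[np.cos(t), np.sin(t), 0.2 * t] + 0.01 * rng.standard_normal((300, 3))
Y = lle(X, n_neighbors=8, n_components=1)
```

The recovered one-dimensional coordinate tracks the curve parameter `t`, which is the manifold-coordinate behavior the target detection scheme exploits after its adaptive modifications.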
Aeroacoustic theory for noncompact wing-gust interaction
NASA Technical Reports Server (NTRS)
Martinez, R.; Widnall, S. E.
1981-01-01
Three aeroacoustic models for noncompact wing-gust interaction were developed for subsonic flow. The first is that of a two-dimensional (infinite-span) wing passing through an oblique gust. The unsteady pressure field was obtained by the Wiener-Hopf technique; the airfoil loading and the associated acoustic field were calculated, respectively, by bringing the field point down onto the airfoil surface, or by letting it go to infinity. The second model is a simple spanwise superposition of two-dimensional solutions to account for three-dimensional acoustic effects of wing rotation (for a helicopter blade, or some other rotating planform) and of the finiteness of the wing span. A three-dimensional theory for a single gust was applied to calculate, in closed form, the acoustic signature due to blade-vortex interaction in helicopters. The third model is that of a quarter-infinite plate with a side edge passing through a gust at high subsonic speed. An approximate closed-form solution for the three-dimensional loading and the associated three-dimensional acoustic field was obtained. The results reflected the acoustic effect of satisfying the correct loading condition at the side edge.
Haro, Alexander J.; Chelminski, Michael; Dudley, Robert W.
2015-01-01
We developed two-dimensional computational fluid dynamics-habitat suitability index (CFD-HSI) models to identify and qualitatively assess potential zones of shallow water depth and high water velocity that may present passage challenges for five major anadromous fish species in a 2.63-km reach of the main stem Penobscot River, Maine, as a result of a dam removal downstream of the reach. Suitability parameters were based on the distribution of fish lengths and body depths and transformed to cruising, maximum sustained and sprint swimming speeds. Zones of potential depth and velocity challenges were calculated based on the hydraulic models; the ability of fish to pass a challenge zone was based on the percent of river channel that the contiguous zone spanned and its maximum along-current length. Three river flows (low: 99.1 m³ s⁻¹; normal: 344.9 m³ s⁻¹; and high: 792.9 m³ s⁻¹) were modelled to simulate existing hydraulic conditions and hydraulic conditions following removal of a dam at the downstream boundary of the reach. Potential depth challenge zones were nonexistent for all low-flow simulations of existing conditions for deeper-bodied fishes. Increasing flows under existing conditions and removal of the dam under all flow conditions increased the number and size of potential velocity challenge zones, with the effects being more pronounced for smaller species. The two-dimensional CFD-HSI model has utility in demonstrating gross effects of flow and hydraulic alteration, but may not be as precise a predictive tool as a three-dimensional model. Passability of the potential challenge zones cannot be precisely quantified for two-dimensional or three-dimensional models due to untested assumptions and incomplete data on fish swimming performance and behaviours.
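The screening logic described, flagging cells of shallow depth or high velocity and judging passability by the channel-width fraction a zone spans, can be sketched on gridded model output. The grids, thresholds, and full-width barrier criterion below are illustrative assumptions, not the study's calibrated species values:

```python
import numpy as np

# Hypothetical 2-D CFD output on a 100 x 40 grid:
# rows = along-current stations, columns = across-channel cells.
rng = np.random.default_rng(3)
depth = 1.5 + 0.5 * rng.standard_normal((100, 40))     # water depth, m
velocity = 1.0 + 0.8 * rng.random((100, 40))           # water velocity, m/s

body_depth = 0.25      # minimum navigable depth for the species (assumed)
sprint_speed = 1.6     # burst swimming speed, m/s (assumed)

# A cell is a potential challenge if it is too shallow or too fast
challenge = (depth < body_depth) | (velocity > sprint_speed)

# Fraction of the channel width flagged at each along-current station;
# a station flagged across the full width is a potential barrier.
blocked_fraction = challenge.mean(axis=1)
barrier_stations = np.where(blocked_fraction == 1.0)[0]
```

A fuller treatment would also measure the maximum along-current length of each contiguous flagged zone against the distance a fish can cover at sprint speed, which is the second passability criterion the abstract names.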
Integrated Aeromechanics with Three-Dimensional Solid-Multibody Structures
NASA Technical Reports Server (NTRS)
Datta, Anubhav; Johnson, Wayne
2014-01-01
A full three-dimensional finite element-multibody structural dynamic solver is coupled to a three-dimensional Reynolds-averaged Navier-Stokes solver for the prediction of integrated aeromechanical stresses and strains on a rotor blade in forward flight. The objective is to lay the foundations of all major pieces of an integrated three-dimensional rotor dynamic analysis - from model construction to aeromechanical solution to stress/strain calculation. The primary focus is on the aeromechanical solution. Two types of three-dimensional CFD/CSD interfaces are constructed for this purpose, with an emphasis on resolving errors from geometry mismatch so that initial-stage approximate structural geometries can also be effectively analyzed. A three-dimensional structural model is constructed as an approximation to a UH-60A-like fully articulated rotor. The aerodynamic model is identical to the UH-60A rotor. For preliminary validation, measurements from a UH-60A high-speed flight are used, where CFD coupling is essential to capture the advancing side tip transonic effects. The key conclusion is that an integrated aeromechanical analysis is indeed possible with three-dimensional structural dynamics, but it requires a careful description of the geometry and discretization of its parts.
NASA Technical Reports Server (NTRS)
Covey, Curt; Ghan, Steven J.; Walton, John J.; Weissman, Paul R.
1989-01-01
Interception of sunlight by the high altitude worldwide dust cloud generated by impact of a large asteroid or comet would lead to substantial land surface cooling, according to our three-dimensional atmospheric general circulation model (GCM). This result is qualitatively similar to conclusions drawn from an earlier study that employed a one-dimensional atmospheric model, but in the GCM simulation the heat capacity of the oceans substantially mitigates land surface cooling, an effect that one-dimensional models cannot quantify. On the other hand, the low heat capacity of the GCM's land surface allows temperatures to drop more rapidly in the initial stage of cooling than in the one-dimensional model study. These two differences between three-dimensional and one-dimensional model simulations were noted previously in studies of nuclear winter; GCM-simulated climatic changes in the Alvarez-inspired scenario of asteroid/comet winter, however, are more severe than in nuclear winter because the assumed aerosol amount is large enough to intercept all sunlight falling on earth. Impacts of smaller objects could also lead to dramatic, though less severe, climatic changes, according to our GCM. Our conclusion is that it is difficult to imagine an asteroid or comet impact leading to anything approaching complete global freezing, but quite reasonable to assume that impacts at the Alvarez level, or even smaller, dramatically alter the climate in at least a patchy sense.
Hydrogen recycling in graphite at higher fluxes
NASA Astrophysics Data System (ADS)
Larsson, D.; Bergsåker, H.; Hedqvist, A.
Understanding hydrogen recycling is essential for particle control in fusion devices with a graphite wall. At Extrap T2 three different models have been used. A zero-dimensional (0D) recycling model reproduces the density behavior in plasma discharges as well as in helium glow discharge. A more sophisticated one-dimensional (1D) model is used along with a simple mixing model to explain the results in isotopic exchange experiments. Due to high fluxes some changes in the models were needed. In the paper, the three models are discussed and the results are compared with experimental data.
SABRINA - an interactive geometry modeler for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
One of the most difficult tasks when analyzing a complex three-dimensional system with Monte Carlo is geometry model development. SABRINA attempts to make the modeling process more user-friendly and less of an obstacle. It accepts both combinatorial solid bodies and MCNP surfaces and produces MCNP cells. The model development process in SABRINA is highly interactive and gives the user immediate feedback on errors. Users can view their geometry from arbitrary perspectives while the model is under development and interactively find and correct modeling errors. An example of a SABRINA display is shown. It represents a complex three-dimensional shape.
Modeling The Shock Initiation of PBX-9501 in ALE3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leininger, L; Springer, H K; Mace, J
The SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has determined the 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate the code predictions. The SMIS tests use a powder gun to shoot scaled NATO standard fragments at a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. The SMIS real-world shot scenario creates a unique test-bed because many of the fragments arrive at the impact plate off-center and at an angle of impact. The goal of these model validation experiments is to demonstrate the predictive capability of the Tarver-Lee Ignition and Growth (I&G) reactive flow model [2] in this fully 3-dimensional regime of Shock to Detonation Transition (SDT). The 3-dimensional Arbitrary Lagrange Eulerian hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations accurately reproduce the 'Go/No-Go' threshold of the Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied in a predictive fashion for the response of heterogeneous high explosives in the SDT regime.
NASA Astrophysics Data System (ADS)
Baek, Seung Ki; Um, Jaegon; Yi, Su Do; Kim, Beom Jun
2011-11-01
In a number of classical statistical-physical models, there exists a characteristic dimensionality called the upper critical dimension above which one observes the mean-field critical behavior. Instead of constructing high-dimensional lattices, however, one can also consider infinite-dimensional structures, and the question is whether this mean-field character extends to quantum-mechanical cases as well. We therefore investigate the transverse-field quantum Ising model on the globally coupled network and on the Watts-Strogatz small-world network by means of quantum Monte Carlo simulations and the finite-size scaling analysis. We confirm that both of the structures exhibit critical behavior consistent with the mean-field description. In particular, we show that the existing cumulant method has difficulty in estimating the correct dynamic critical exponent and suggest that an order parameter based on the quantum-mechanical expectation value can be a practically useful numerical observable to determine critical behavior when there is no well-defined dimensionality.
Model-based reinforcement learning with dimension reduction.
Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi
2016-12-01
The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
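The overall pipeline described above can be illustrated with a toy sketch. Note this is not the paper's LSCE method, which estimates the transition model and the dimension reduction jointly; here the two steps are deliberately separated (PCA, then least squares) purely to show the idea, and all dimensions, dynamics, and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional states that really live on a 2-D subspace.
d_high, d_low, n = 20, 2, 500
W = rng.normal(size=(d_high, d_low))          # embedding of latent states
A = np.array([[0.9, 0.1], [-0.1, 0.9]])       # latent linear dynamics
z = rng.normal(size=(n, d_low))
s = z @ W.T                                    # observed states
s_next = (z @ A.T) @ W.T                       # observed next states

# Step 1: dimension reduction via PCA (SVD of the centered states).
U, S, Vt = np.linalg.svd(s - s.mean(0), full_matrices=False)
P = Vt[:d_low].T                               # projection to the low-dim space
x, x_next = s @ P, s_next @ P

# Step 2: fit a linear transition model x_next ~ x @ B in the reduced space.
B, *_ = np.linalg.lstsq(x, x_next, rcond=None)

err = np.mean((x @ B - x_next) ** 2)
print(f"one-step prediction MSE in reduced space: {err:.2e}")
```

Because the synthetic dynamics are exactly linear on the latent subspace, the reduced-space model recovers them almost perfectly; with noisy, nonlinear environments the joint estimation performed by LSCE becomes the harder and more interesting problem.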
NASA Astrophysics Data System (ADS)
Jiao, Huiqing; Zhao, Chengyi; Sheng, Yu; Chen, Yan; Shi, Jianchu; Li, Baoguo
2017-04-01
Water shortage and soil salinization increasingly become the main constraints for sustainable development of agriculture in Southern Xinjiang, China. Mulched drip irrigation, as a high-efficient water-saving irrigation method, has been widely applied in Southern Xinjiang for cotton production. In order to analyze the reasonability of describing the three-dimensional soil water and salt transport processes under mulched drip irrigation with a relatively simple two-dimensional model, a field experiment was conducted from 2007 to 2015 at Aksu of Southern Xinjiang, and soil water and salt transport processes were simulated through the three-dimensional and two-dimensional models based on COMSOL. Obvious differences were found between three-dimensional and two-dimensional simulations for soil water flow within the early 12 h of irrigation event and for soil salt transport in the area within 15 cm away from drip tubes during the whole irrigation event. The soil water and salt contents simulated by the two-dimensional model, however, agreed well with the mean values between two adjacent emitters simulated by the three-dimensional model, and also coincided with the measurements as corresponding RMSE less than 0.037 cm3 cm-3 and 1.80 g kg-1, indicating that the two-dimensional model was reliable for field irrigation management. Subsequently, the two-dimensional model was applied to simulate the dynamics of soil salinity for five numerical situations and for a widely adopted irrigation pattern in Southern Xinjiang (about 350 mm through mulched drip irrigation during growing season of cotton and total 400 mm through flooding irrigations before sowing and after harvesting). The simulation results indicated that the contribution of transpiration to salt accumulation in root layer was about 75% under mulched drip irrigation. 
Moreover, flooding irrigations before sowing and after harvesting were of great importance for salt leaching of the arable layer, especially in the bare strip where drip irrigation water hardly reached, thus providing a suitable root-zone environment for cotton. Nevertheless, flooding irrigation should be further optimized to enhance water use efficiency.
Field-scale and wellbore modeling of compaction-induced casing failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilbert, L.B. Jr.; Gwinn, R.L.; Moroney, T.A.
1999-06-01
Presented in this paper are the results and verification of field- and wellbore-scale large deformation, elasto-plastic, geomechanical finite element models of reservoir compaction and associated casing damage. The models were developed as part of a multidisciplinary team project to reduce the number of costly well failures in the diatomite reservoir of the South Belridge Field near Bakersfield, California. Reservoir compaction of high porosity diatomite rock induces localized shearing deformations on horizontal weak-rock layers and geologic unconformities. The localized shearing deformations result in casing damage or failure. Two-dimensional, field-scale finite element models were used to develop relationships between field operations, surface subsidence, and shear-induced casing damage. Pore pressures were computed for eighteen years of simulated production and water injection, using a three-dimensional reservoir simulator. The pore pressures were input to the two-dimensional geomechanical field-scale model. Frictional contact surfaces were used to model localized shear deformations. To capture the complex casing-cement-rock interaction that governs casing damage and failure, three-dimensional models of a wellbore were constructed, including a frictional sliding surface to model localized shear deformation. Calculations were compared to field data for verification of the models.
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
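The baseline that the study builds on, a plain (global) ensemble Kalman filter analysis step, can be sketched as follows. All dimensions, the linear observation operator, and the noise levels below are synthetic choices for illustration; the paper's contributions (localization and GMM clustering) are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

n_state, n_ens, n_obs = 50, 200, 10
truth = np.linspace(0.0, 1.0, n_state)

# Biased prior ensemble: truth plus an offset plus spread.
ens = truth[:, None] + 0.5 + 0.5 * rng.normal(size=(n_state, n_ens))

# Observe the first n_obs state components with noise variance r.
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs)] = 1.0
r = 0.05 ** 2
y = H @ truth + np.sqrt(r) * rng.normal(size=n_obs)

# Sample covariance and Kalman gain estimated from the ensemble.
X = ens - ens.mean(axis=1, keepdims=True)
P = X @ X.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(n_obs))

# Stochastic update: each member assimilates a perturbed observation.
y_pert = y[:, None] + np.sqrt(r) * rng.normal(size=(n_obs, n_ens))
ens_a = ens + K @ (y_pert - H @ ens)

prior_err = np.abs(ens.mean(1)[:n_obs] - truth[:n_obs]).mean()
post_err = np.abs(ens_a.mean(1)[:n_obs] - truth[:n_obs]).mean()
print(f"mean abs error in observed components: {prior_err:.3f} -> {post_err:.3f}")
```

The sample covariance `P` is exactly where the Gaussian assumption enters; for multimodal conductivity fields this single global gain is inadequate, which motivates clustering the ensemble with a GMM and updating each mode separately.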
Prediction of Incident Diabetes in the Jackson Heart Study Using High-Dimensional Machine Learning
Casanova, Ramon; Saldana, Santiago; Simpson, Sean L.; Lacy, Mary E.; Subauste, Angela R.; Blackshear, Chad; Wagenknecht, Lynne; Bertoni, Alain G.
2016-01-01
Statistical models to predict incident diabetes are often based on limited variables. Here we pursued two main goals: 1) investigate the relative performance of a machine learning method such as Random Forests (RF) for detecting incident diabetes in a high-dimensional setting defined by a large set of observational data, and 2) uncover potential predictors of diabetes. The Jackson Heart Study collected data at baseline and in two follow-up visits from 5,301 African Americans. We excluded those with baseline diabetes and no follow-up, leaving 3,633 individuals for analyses. Over a mean 8-year follow-up, 584 participants developed diabetes. The full RF model evaluated 93 variables including demographic, anthropometric, blood biomarker, medical history, and echocardiogram data. We also used RF metrics of variable importance to rank variables according to their contribution to diabetes prediction. We implemented other models based on logistic regression and RF where features were preselected. The RF full model performance was similar (AUC = 0.82) to those more parsimonious models. The top-ranked variables according to RF included hemoglobin A1C, fasting plasma glucose, waist circumference, adiponectin, c-reactive protein, triglycerides, leptin, left ventricular mass, high-density lipoprotein cholesterol, and aldosterone. This work shows the potential of RF for incident diabetes prediction while dealing with high-dimensional data. PMID:27727289
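The variable-ranking idea behind the Random Forest importance scores can be sketched with permutation importance on synthetic data. A trivial nearest-centroid classifier stands in for the forest here, and the data, feature count, and outcome rule are all made up; this is not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(2)

n, p = 400, 6
X = rng.normal(size=(n, p))
# Only feature 0 drives the synthetic outcome.
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

# "Fit": class centroids (a stand-in for a trained classifier).
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)

def predict(Z):
    d0 = ((Z - mu0) ** 2).sum(1)
    d1 = ((Z - mu1) ** 2).sum(1)
    return (d1 < d0).astype(int)

base_acc = (predict(X) == y).mean()

# Permutation importance: accuracy drop when one column is shuffled.
importance = np.zeros(p)
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = base_acc - (predict(Xp) == y).mean()

print("features ranked by importance:", np.argsort(importance)[::-1])
```

Shuffling the single informative column destroys most of the accuracy while the noise columns barely matter, so feature 0 ranks first; the study applies the same logic, via RF importance metrics, to rank 93 clinical variables.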
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
NASA Astrophysics Data System (ADS)
Evans, Conor
2015-03-01
Three dimensional, in vitro spheroid cultures offer considerable utility for the development and testing of anticancer photodynamic therapy regimens. More complex than monolayer cultures, three-dimensional spheroid systems replicate many of the important cell-cell and cell-matrix interactions that modulate treatment response in vivo. Simple enough to be grown by the thousands and small enough to be optically interrogated, spheroid cultures lend themselves to high-content and high-throughput imaging approaches. These advantages have enabled studies investigating photosensitizer uptake, spatiotemporal patterns of therapeutic response, alterations in oxygen diffusion and consumption during therapy, and the exploration of mechanisms that underlie therapeutic synergy. The use of quantitative imaging methods, in particular, has accelerated the pace of three-dimensional in vitro photodynamic therapy studies, enabling the rapid compilation of multiple treatment response parameters in a single experiment. Improvements in model cultures, the creation of new molecular probes of cell state and function, and innovations in imaging toolkits will be important for the advancement of spheroid culture systems for future photodynamic therapy studies.
Kimizuka, Hajime; Kurokawa, Shu; Yamaguchi, Akihiro; Sakai, Akira; Ogata, Shigenobu
2014-01-01
Predicting the equilibrium ordered structures at internal interfaces, especially in the case of nanometer-scale chemical heterogeneities, is an ongoing challenge in materials science. In this study, we established an ab-initio coarse-grained modeling technique for describing the phase-like behavior of a close-packed stacking-fault-type interface containing solute nanoclusters, which undergo a two-dimensional disorder-order transition, depending on the temperature and composition. Notably, this approach can predict the two-dimensional medium-range ordering in the nanocluster arrays realized in Mg-based alloys, in a manner consistent with scanning tunneling microscopy-based measurements. We predicted that the repulsively interacting solute-cluster system undergoes a continuous evolution into a highly ordered densely packed morphology while maintaining a high degree of six-fold orientational order, which is attributable mainly to an entropic effect. The uncovered interaction-dependent ordering properties may be useful for the design of nanostructured materials utilizing the self-organization of two-dimensional nanocluster arrays in the close-packed interfaces. PMID:25471232
A three-dimensional model of Tangential YORP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golubov, O.; Scheeres, D. J.; Krugly, Yu. N., E-mail: golubov@astron.kharkov.ua
2014-10-10
Tangential YORP, or TYORP, has recently been demonstrated to be an important factor in the evolution of an asteroid's rotation state. It is complementary to normal YORP, or NYORP, which had been considered previously. While NYORP is produced by non-symmetry in the large-scale geometry of an asteroid, TYORP is due to heat conductivity in stones on the surface of the asteroid. To date, TYORP has been studied only in a simplified one-dimensional model, substituting stones with high long walls. This article for the first time considers TYORP in a realistic three-dimensional model, also including shadowing and self-illumination effects via ray tracing. TYORP is simulated for spherical stones lying on regolith. The model includes only five free parameters, and the dependence of the TYORP on each of them is studied. The TYORP torque appears to be smaller than previous estimates from the one-dimensional model, but is still comparable to the NYORP torques. These results can be used to estimate TYORP of different asteroids and also as a basis for more sophisticated models of TYORP.
NASA Astrophysics Data System (ADS)
Sharma, Neetika; Verma, Neha; Jogi, Jyotika
2017-11-01
This paper models the scattering-limited electron transport in a nano-dimensional In0.52Al0.48As/In0.53Ga0.47As/InP heterostructure. An analytical model for temperature-dependent sheet carrier concentration and carrier mobility in a two-dimensional electron gas, confined in a triangular potential well, has been developed. The model accounts for all the major scattering processes, including ionized impurity scattering and lattice scattering. A quantum mechanical variational technique is employed for studying the intrasubband scattering mechanism in the two-dimensional electron gas. Results for various scattering-limited structural parameters such as energy band-gap, and functional parameters such as sheet carrier concentration, scattering rate and mobility, are presented. The model corroborates the dominance of the ionized impurity scattering mechanism at low temperatures and that of lattice scattering at high temperatures, both in turn limiting the carrier mobility. The net mobility obtained by taking the various scattering mechanisms into account is in agreement with earlier reported results, thus validating the model.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
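The concordance index used to compare the methods above can be computed directly. The sketch below implements Harrell's c-index for right-censored data; the times, event indicators, and risk scores are made-up illustrative values, not the study's simulated data.

```python
import itertools

def c_index(times, events, risks):
    """Harrell's concordance index: the fraction of usable pairs in which
    the subject with the higher risk score fails first.
    events[i] = 1 if the event was observed, 0 if censored."""
    concordant = usable = 0.0
    for i, j in itertools.combinations(range(len(times)), 2):
        # Order the pair so that subject a has the earlier time.
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue          # earlier time is censored: pair is unusable
        usable += 1
        if risks[a] > risks[b]:
            concordant += 1
        elif risks[a] == risks[b]:
            concordant += 0.5  # tied scores count as half-concordant
    return concordant / usable

times  = [2, 5, 7, 9, 12]
events = [1, 1, 0, 1, 0]      # subjects 3 and 5 are censored
risks  = [0.9, 0.7, 0.4, 0.5, 0.1]
print(c_index(times, events, risks))   # -> 1.0 for this toy example
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; both the Cox model and the ML methods in the study are scored on this same scale.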
Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview
Li, Yaohang
2013-01-01
Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized. PMID:24688696
Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data
NASA Astrophysics Data System (ADS)
Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun
2014-11-01
Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. However, for indoor objects there are some technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by LiDAR, an advanced indoor mobile measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain a color feature, extracted by fusion with CCD images. Thus, the data carry both spatial geometric features and spectral information, which can be used for constructing objects' surfaces and restoring the color and texture of the geometric model. Based on the Autodesk CAD platform and with the help of the PointSence plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and then different types of indoor point cloud data were processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for three-dimensional reconstruction of all indoor elements and that the methods proposed in this paper can efficiently realize this reconstruction. Moreover, the modeling precision could be controlled within 5 cm, which proved to be a satisfactory result.
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
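The validation idea, correcting a mixture-density approximation of a multimodal posterior by importance sampling, can be illustrated on a one-dimensional toy problem. The bimodal "posterior", the deliberately imperfect mixture proposal, and all numbers below are synthetic; the paper works with expensive PDE-based likelihoods instead.

```python
import numpy as np

rng = np.random.default_rng(3)

def target_pdf(x):
    # Unnormalized bimodal "posterior": modes near -2 and +2.
    return np.exp(-0.5 * (x + 2) ** 2) + np.exp(-0.5 * (x - 2) ** 2)

# Deliberately imperfect two-component Gaussian mixture approximation.
means, sigma, weights = np.array([-1.8, 1.8]), 1.1, np.array([0.5, 0.5])
comp = rng.choice(2, size=20000, p=weights)
x = rng.normal(means[comp], sigma)

def proposal_pdf(x):
    return sum(w * np.exp(-0.5 * ((x - m) / sigma) ** 2)
               / (sigma * np.sqrt(2 * np.pi))
               for w, m in zip(weights, means))

# Self-normalized importance weights correct for the mismatch.
w = target_pdf(x) / proposal_pdf(x)
w /= w.sum()
post_mean = (w * x).sum()     # true posterior mean is 0 by symmetry
ess = 1.0 / (w ** 2).sum()    # effective sample size
print(post_mean, ess)
```

A large effective sample size confirms the mixture is a good approximation, and the weighted estimate removes its residual bias, which mirrors how the paper certifies its approximate inference.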
Thermal History and Mantle Dynamics of Venus
NASA Technical Reports Server (NTRS)
Hsui, Albert T.
1997-01-01
One objective of this research proposal is to develop a 3-D thermal history model for Venus. The basis of our study is a finite-element computer model to simulate thermal convection of fluids with highly temperature- and pressure-dependent viscosities in a three-dimensional spherical shell. A three-dimensional model for thermal history studies is necessary for the following reason: to study planetary thermal evolution, one needs to consider the global heat budget of a planet throughout its evolution history. This is in contrast to studies of some local phenomena or local structures, where models of lower dimensions may be sufficient. There are different approaches to treating three-dimensional thermal convection problems, each with its own advantages and disadvantages, so the choice among them is subjective and dependent on the problem addressed. In our case, we are interested in the effects of viscosities that are highly temperature dependent and whose magnitudes within the computing domain can vary over many orders of magnitude. In order to resolve the rapid change of viscosities, small grid spacings are often necessary. To optimize the amount of computing, variable grids become desirable. Thus, the finite-element numerical approach is chosen for its ability to place grid elements of different sizes over the complete computational domain. For this research proposal, we did not start from scratch and develop the finite element codes from the beginning. Instead, we adopted a finite-element model developed by Baumgardner, a collaborator on this research proposal, for three-dimensional thermal convection with constant viscosity. Over the duration supported by this research proposal, a significant amount of advancement has been accomplished.
High Reynolds number turbulence model of rotating shear flows
NASA Astrophysics Data System (ADS)
Masuda, S.; Ariga, I.; Koyama, H. S.
1983-09-01
A Reynolds stress closure model for rotating turbulent shear flows is developed. Special attention is paid to keeping the model constants independent of rotation. First, general forms of the model of a Reynolds stress equation and a dissipation rate equation are derived, the only restrictions of which are high Reynolds number and incompressibility. The model equations are then applied to two-dimensional equilibrium boundary layers and the effects of Coriolis acceleration on turbulence structures are discussed. Comparisons with the experimental data and with previous results in other external force fields show that there exists a very close analogy between centrifugal, buoyancy and Coriolis force fields. Finally, the model is applied to predict the two-dimensional boundary layers on rotating plane walls. Comparisons with existing data confirmed its capability of predicting mean and turbulent quantities without employing any empirical relations in rotating fields.
Sufficient Forecasting Using Factor Models
Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei
2017-01-01
We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
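The first stage of the procedure, extracting latent factors from a large predictor panel by principal components and regressing the target on them, can be sketched as follows. The nonlinear sufficient-index stage is omitted, and all panel sizes, loadings, and data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

T, p, k = 200, 500, 3                     # more predictors than observations
F = rng.normal(size=(T, k))               # latent factors
L = rng.normal(size=(p, k))               # factor loadings
X = F @ L.T + 0.5 * rng.normal(size=(T, p))   # observed predictor panel
beta = np.array([1.0, -2.0, 0.5])
y = F @ beta + 0.1 * rng.normal(size=T)   # target driven by the factors

# PCA: the top-k left singular vectors of the centered panel estimate F
# (up to rotation, which the subsequent regression absorbs).
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :k] * np.sqrt(T)             # a standard factor normalization

# Forecast equation: regress the target on the estimated factors.
b_hat, *_ = np.linalg.lstsq(F_hat, y, rcond=None)
r2 = 1 - np.mean((y - F_hat @ b_hat) ** 2) / np.var(y)
print(f"in-sample R^2 using estimated factors: {r2:.3f}")
```

Even though p > T, condensing the cross-section into a few estimated factors makes the regression well-posed; the sufficient forecasting method then looks for predictive indices within this factor space rather than stopping at the linear regression shown here.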
Preparation of a Three-Dimensional Full Thickness Skin Equivalent.
Reuter, Christian; Walles, Heike; Groeber, Florian
2017-01-01
In vitro test systems are a promising alternative to animal models. Due to the use of human cells in a three-dimensional arrangement that allows cell-cell or cell-matrix interactions these models may be more predictive for the human situation compared to animal models or two-dimensional cell culture systems. Especially for dermatological research, skin models such as epidermal or full-thickness skin equivalents (FTSE) are used for different applications. Although epidermal models provide highly standardized conditions for risk assessment, FTSE facilitate a cellular crosstalk between the dermal and epidermal layer and thus can be used as more complex models for the investigation of processes such as wound healing, skin development, or infectious diseases. In this chapter, we describe the generation and culture of an FTSE, based on a collagen type I matrix and provide troubleshooting tips for commonly encountered technical problems.
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...
2017-10-10
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
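The two-stage construction (diffusion-map embedding, then a Gaussian process whose correlation structure is computed from diffusion-space distances) can be illustrated on synthetic data. This is a minimal sketch under assumed settings (a noisy circle as the manifold, a squared-exponential kernel, a median-heuristic length scale), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a noisy circle (a 1-D manifold) in ambient 3-D space.
n = 120
theta = rng.uniform(0, 2 * np.pi, n)
X = np.column_stack([np.cos(theta), np.sin(theta),
                     0.05 * rng.normal(size=n)])

# Step 1: diffusion map -- Gaussian kernel, row-normalized Markov matrix.
eps = 0.3
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
P = np.exp(-D2 / eps)
P /= P.sum(axis=1, keepdims=True)

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
vals, vecs = vals.real[order], vecs.real[:, order]
t = 2                                        # diffusion time
Psi = vecs[:, 1:3] * vals[1:3] ** t          # 2-D diffusion coordinates

# Step 2: Gaussian process regression whose correlation structure is
# computed from distances in the diffusion space.
y = np.sin(theta)                            # quantity of interest
d2 = ((Psi[:, None, :] - Psi[None, :, :]) ** 2).sum(-1)
ell = np.median(np.sqrt(d2[d2 > 0]))         # kernel length scale
K = np.exp(-d2 / (2 * ell ** 2))
alpha = np.linalg.solve(K + 1e-6 * np.eye(n), y)
y_hat = K @ alpha                            # in-sample GP prediction
```

Because the quantity of interest varies smoothly along the manifold, the GP equipped with the diffusion distance reproduces it closely even though the ambient coordinates are noisy.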
Method and apparatus for multiple-projection, dual-energy x-ray absorptiometry scanning
NASA Technical Reports Server (NTRS)
Feldmesser, Howard S. (Inventor); Magee, Thomas C. (Inventor); Charles, Jr., Harry K. (Inventor); Beck, Thomas J. (Inventor)
2007-01-01
Methods and apparatuses for advanced, multiple-projection, dual-energy X-ray absorptiometry scanning systems include combinations of a conical collimator; a high-resolution two-dimensional detector; a portable, power-capped, variable-exposure-time power supply; an exposure-time control element; calibration monitoring; a three-dimensional anti-scatter grid; and a gantry/gantry-base assembly that permits up to seven projection angles for overlapping beams. Such systems are capable of high-precision bone structure measurements that can support three-dimensional bone modeling and derivations of bone strength, risk of injury, and efficacy of countermeasures, among other properties.
Harris, Sharon
2013-01-01
Abstract Appropriately constructed health promotions can improve population health. The authors developed a practical model for designing, evaluating, and improving initiatives to provide optimal value. Three independent model dimensions (impact, engagement, and sustainability) and the resultant three-dimensional paradigm were described using hypothetical case studies, including a walking challenge, a health risk assessment survey, and an individual condition management program. The 3-dimensional model is illustrated and the dimensions are defined. Calculation of a 3-dimensional score for program comparisons, refinements, and measurement is explained. Program 1, the walking challenge, had high engagement and impact, but limited sustainability. Program 2, the health risk assessment survey, had high engagement and sustainability but limited impact. Program 3, the on-site condition management program, had measurable impact and sustainability but limited engagement, because of a lack of program capacity. Each initiative, though successful in 2 dimensions, lacked sufficient evolution along the third axis for optimal value. Calculation of a 3-dimensional score is useful for health promotion program development comparison and refinements, and overall measurement of program success. (Population Health Management 2013;16:291–295) PMID:23869538
Shah, Sinal; Sundaram, Geeta; Bartlett, David; Sherriff, Martyn
2004-11-01
Several studies have compared the dimensional accuracy of different elastomeric impression materials. Most have used two-dimensional measuring devices, which neglect the dimensional changes that exist along a three-dimensional surface. The aim of this study was to compare the dimensional accuracy of an impression technique using a polyether material (Impregum) and a vinyl polysiloxane material (President) using a laser scanner with three-dimensional superimposition software. Twenty impressions, 10 with a polyether and 10 with an addition silicone, of a stone master model that resembled a dental arch containing three acrylic posterior teeth were cast in orthodontic stone. One plastic tooth was prepared for a metal crown. The master model and the casts were digitised with the non-contacting laser scanner to produce a 3D image. 3D surface viewer software superimposed the master model onto the stone replica and the difference between the images was analysed. The mean difference between the model and the stone replica made from Impregum was 0.072 mm (SD 0.006) and that for the silicone was 0.097 mm (SD 0.005); this difference was statistically significant (p = 0.001). Both impression materials provided an accurate replica of the prepared teeth, supporting the view that these materials are highly accurate.
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
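The low-order-correlation assumption can be made concrete with a first-order cut-HDMR on a small test function. The function, the cut point, and the interaction strength below are hypothetical, chosen so that the first-order expansion misses only one small second-order term.

```python
import numpy as np

# A hypothetical 4-variable test function with weak interaction:
# additive terms plus one small second-order coupling.
def f(x):
    return (np.sin(x[0]) + x[1] ** 2 + 0.5 * x[2]
            + np.exp(0.3 * x[3]) + 0.1 * x[1] * x[3])

c = np.zeros(4)          # cut point (reference input)
f0 = f(c)                # zeroth-order HDMR term

def hdmr_first_order(x):
    """First-order cut-HDMR: f0 + sum_i [f(c with x_i varied) - f0]."""
    total = f0
    for i in range(len(x)):
        xi = c.copy()
        xi[i] = x[i]
        total += f(xi) - f0
    return total

x = np.array([0.2, -0.3, 0.5, 0.1])
exact = f(x)
approx = hdmr_first_order(x)
# The first-order expansion misses only the weak interaction
# 0.1 * x[1] * x[3] = -0.003; every additive term is captured exactly.
```

Each first-order component costs one function call per varied input, which is why the expansion scales polynomially rather than exponentially with the number of variables.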
NASA Astrophysics Data System (ADS)
Perillo, Evan P.; Liu, Yen-Liang; Huynh, Khang; Liu, Cong; Chou, Chao-Kai; Hung, Mien-Chie; Yeh, Hsin-Chih; Dunn, Andrew K.
2015-07-01
Molecular trafficking within cells, tissues and engineered three-dimensional multicellular models is critical to the understanding of the development and treatment of various diseases including cancer. However, current tracking methods are either confined to two dimensions or limited to an interrogation depth of ~15 μm. Here we present a three-dimensional tracking method capable of quantifying rapid molecular transport dynamics in highly scattering environments at depths up to 200 μm. The system has a response time of 1 ms with a temporal resolution down to 50 μs in high signal-to-noise conditions, and a spatial localization precision as good as 35 nm. Built on spatiotemporally multiplexed two-photon excitation, this approach requires only one detector for three-dimensional particle tracking and allows for two-photon, multicolour imaging. Here we demonstrate three-dimensional tracking of epidermal growth factor receptor complexes at a depth of ~100 μm in tumour spheroids.
A memory-efficient staining algorithm in 3D seismic modelling and imaging
NASA Astrophysics Data System (ADS)
Jia, Xiaofeng; Yang, Lu
2017-08-01
The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
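As a point of reference for the baselines named above, here is a minimal Tikhonov zero-order solution of an assumed linear inverse problem y = A x + noise; the sizes, operator, and noise level are illustrative, not the paper's torso-heart geometry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical linear inverse problem y = A x + noise, solved with
# Tikhonov zero-order regularization (one of the baseline methods).
n_obs, n_par = 40, 25
A = rng.normal(size=(n_obs, n_par)) / np.sqrt(n_obs)
x_true = np.sin(np.linspace(0, 3, n_par))        # assumed source distribution
y = A @ x_true + 0.01 * rng.normal(size=n_obs)

lam = 1e-2                                        # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_par), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Zero-order Tikhonov penalizes the norm of x itself; the spatiotemporal regularization described in the abstract instead encodes physics-based spatial and temporal smoothness in the penalty.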
NASA Astrophysics Data System (ADS)
Galperin, Boris; Mellor, George L.
1990-09-01
The three-dimensional model of Delaware Bay, River and adjacent continental shelf was described in Part 1. Here, Part 2 of this two-part paper demonstrates that the model is capable of realistic simulation of current and salinity distributions, tidal cycle variability, events of strong mixing caused by high winds and rapid salinity changes due to high river runoff. The 25-h average subtidal circulation strongly depends on the wind forcing. Monthly residual currents and salinity distributions demonstrate a classical two-layer estuarine circulation wherein relatively low salinity water flows out at the surface and compensating high salinity water from the shelf flows at the bottom. The salinity intrusion is most vigorous along deep channels in the Bay. Winds can generate salinity fronts inside and outside the Bay and enhance or weaken the two-layer circulation pattern. Since the portion of the continental shelf included in the model is limited, the model shelf circulation is locally wind-driven and excludes such effects as coastally trapped waves and interaction with Gulf Stream rings; nevertheless, a significant portion of the coastal elevation variability is hindcast by the model. Also, inclusion of the shelf improves simulation of salinity inside the Bay compared with simulations where the salinity boundary condition is specified at the mouth of the Bay.
NASA Technical Reports Server (NTRS)
Dorsey, D. R., Jr.
1975-01-01
A mathematical model was developed for the three-dimensional dynamics of a high-altitude scientific research balloon system perturbed from its equilibrium configuration by an arbitrary gust loading. The platform is modelled as a system of four coupled pendula, and the equations of motion were developed in the Lagrangian formalism assuming a small-angle approximation. Three-dimensional pendulation, torsion, and precessional motion due to Coriolis forces are considered. Aerodynamic and viscous damping effects on the pendulatory and torsional motions are included. A general model of the gust field incident upon the balloon system was developed. The digital computer simulation program is described, and a guide to its use is given.
NASA Astrophysics Data System (ADS)
Vorobiev, Dmitry; Ninkov, Zoran
2017-11-01
Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large-area wire grid polarizers or MPAs alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
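The truncation issue discussed above can be seen in a one-dimensional polynomial chaos expansion. This sketch fits Hermite PCE coefficients by least squares for an assumed forward model; it illustrates the PCE representation the PCKF propagates, not the filter itself.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)

# Hypothetical forward model of a single standard-normal input xi.
def model(xi):
    return np.exp(0.5 * xi)

# Build a degree-5 PCE in probabilists' Hermite polynomials by least
# squares on random collocation points.
xi = rng.normal(size=2000)
V = hermevander(xi, 5)                 # columns: He_0(xi) ... He_5(xi)
coeffs, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

# For y = exp(a*xi) the exact coefficients are a**n * exp(a**2 / 2) / n!,
# so the series is dominated by its first few terms -- exactly the
# truncation trade-off an adaptive basis selection must manage.
```

With many random inputs the number of multivariate basis functions explodes, which is why the adaptive ANOVA decomposition builds PCEs only for low-dimensional component functions.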
Sparsity enabled cluster reduced-order models for control
NASA Astrophysics Data System (ADS)
Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.
2018-01-01
Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
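The CROM construction (cluster the snapshots, then estimate a transition matrix as a data-driven discretization of the Perron-Frobenius operator) can be sketched as follows; the limit-cycle data, cluster count, and plain k-means below are assumptions for illustration, not the authors' sensing strategy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical snapshot sequence: a noisy limit cycle standing in for
# high-dimensional flow data.
t = np.linspace(0, 8 * np.pi, 400)
snapshots = (np.column_stack([np.cos(t), np.sin(t)])
             + 0.05 * rng.normal(size=(400, 2)))

# Step 1: cluster the snapshots (plain Lloyd-iteration k-means).
k = 6
centroids = snapshots[rng.choice(len(snapshots), k, replace=False)]
for _ in range(50):
    d2 = ((snapshots[:, None] - centroids[None]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    centroids = np.array([snapshots[labels == j].mean(axis=0)
                          if np.any(labels == j) else centroids[j]
                          for j in range(k)])

# Step 2: discretized Perron-Frobenius operator -- transition
# probabilities between consecutive snapshot clusters.
P = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1.0
P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)
# Each row of P is a probability vector: a linear Markov model of the
# nonlinear dynamics on cluster states.
```

The sparsity-enabled variant would replace the full-state snapshots with a small set of compressive or optimized sensor measurements before the clustering step.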
Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor
Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng
2016-01-01
In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting the improvement of measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established in combination with the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanical decoupling parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. The load calibration and data acquisition experiment system were built, and calibration experiments were then done. According to the calibration experiments, the measurement accuracy is less than 2.86% and the coupling accuracy is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors. PMID:27649194
Creating 3D Physical Models to Probe Student Understanding of Macromolecular Structure
ERIC Educational Resources Information Center
Cooper, A. Kat; Oliver-Hoyo, M. T.
2017-01-01
The high degree of complexity of macromolecular structure is extremely difficult for students to process. Students struggle to translate the simplified two-dimensional representations commonly used in biochemistry instruction to three-dimensional aspects crucial in understanding structure-property relationships. We designed four different physical…
Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...
Flow through three-dimensional arrangements of cylinders with alternating streamwise planar tilt
NASA Astrophysics Data System (ADS)
Sahraoui, M.; Marshall, H.; Kaviany, M.
1993-09-01
In this report, fluid flow through a three-dimensional model of fibrous filters is examined. In this model, the three-dimensional Stokes equation with the appropriate periodic boundary conditions is solved using the finite volume method. In addition to the numerical solution, we attempt to model this flow analytically by using the two-dimensional extended analytic solution in each of the unit cells of the three-dimensional structure. Particle trajectories computed using the superimposed analytic solution of the flow field are close to those computed using the numerical solution of the flow field. The numerical results show that the pressure drop is not affected significantly by the relative angle of rotation of the cylinders for the high porosities used in this study (epsilon = 0.8 and epsilon = 0.95). The numerical solution and the superimposed analytic solution are also compared in terms of the particle capture efficiency. The results show that the efficiency predictions using the two methods are within 10% for St = 0.01 and 5% for St = 100. As the porosity decreases, the three-dimensional effect becomes more significant and a difference of 35% is obtained for epsilon = 0.8.
Prototype design based on NX subdivision modeling application
NASA Astrophysics Data System (ADS)
Zhan, Xianghui; Li, Xiaoda
2018-04-01
Prototype design is an important part of product design: a quick and easy way to draw a three-dimensional product prototype. Informed by actual production, the prototype can be modified several times, resulting in a highly efficient and reasonable design before the formal design begins. Subdivision modeling is a common method for modeling product prototypes; with it, a three-dimensional model of a product prototype can be obtained in a short time with simple operations. This paper discusses the operation method of Subdivision modeling for geometry. Taking a vacuum cleaner as an example, the NX Subdivision modeling functions are applied. Finally, the development of Subdivision modeling is forecast.
Computer aided photographic engineering
NASA Technical Reports Server (NTRS)
Hixson, Jeffrey A.; Rieckhoff, Tom
1988-01-01
High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.
Stripe order in the underdoped region of the two-dimensional Hubbard model
NASA Astrophysics Data System (ADS)
Zheng, Bo-Xiao; Chung, Chia-Min; Corboz, Philippe; Ehlers, Georg; Qin, Ming-Pu; Noack, Reinhard M.; Shi, Hao; White, Steven R.; Zhang, Shiwei; Chan, Garnet Kin-Lic
2017-12-01
Competing inhomogeneous orders are a central feature of correlated electron materials, including the high-temperature superconductors. The two-dimensional Hubbard model serves as the canonical microscopic physical model for such systems. Multiple orders have been proposed in the underdoped part of the phase diagram, which corresponds to a regime of maximum numerical difficulty. By combining the latest numerical methods in exhaustive simulations, we uncover the ordering in the underdoped ground state. We find a stripe order that has a highly compressible wavelength on an energy scale of a few kelvin, with wavelength fluctuations coupled to pairing order. The favored filled stripe order is different from that seen in real materials. Our results demonstrate the power of modern numerical methods to solve microscopic models, even in challenging settings.
Aluja, Anton; Rolland, Jean-Pierre; García, Luis F; Rossier, Jérôme
2007-04-01
We investigated the dimensionality of the French version of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965) using confirmatory factor analysis. We tested models of 1 or 2 factors. Results suggest the RSES is a 1-dimensional scale with 3 highly correlated items. Comparison with the Revised NEO-Personality Inventory (NEO-PI-R; Costa, McCrae, & Rolland, 1998) demonstrated that Neuroticism correlated strongly and Extraversion and Conscientiousness moderately with the RSES. Depression accounted for 47% of the variance of the RSES. Other NEO-PI-R facets were also moderately related with self-esteem.
NASA Astrophysics Data System (ADS)
Brdar, S.; Seifert, A.
2018-01-01
We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
NASA Astrophysics Data System (ADS)
Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.
2018-04-01
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
NASA Astrophysics Data System (ADS)
Pan, Jian-Song; Zhang, Wei; Yi, Wei; Guo, Guang-Can
2016-10-01
In a recent experiment (Z. Wu, L. Zhang, W. Sun, X.-T. Xu, B.-Z. Wang, S.-C. Ji, Y. Deng, S. Chen, X.-J. Liu, and J.-W. Pan, arXiv:1511.08170 [cond-mat.quant-gas]), a Raman-assisted two-dimensional spin-orbit coupling has been realized for a Bose-Einstein condensate in an optical lattice potential. In light of this exciting progress, we study in detail key properties of the system. As the Raman lasers inevitably couple atoms to high-lying bands, the behaviors of the system in both the single- and many-particle sectors are significantly affected. In particular, the high-band effects enhance the plane-wave phase and lead to the emergence of "roton" gaps at low Zeeman fields. Furthermore, we identify high-band-induced topological phase boundaries in both the single-particle and the quasiparticle spectra. We then derive an effective two-band model, which captures the high-band physics in the experimentally relevant regime. Our results not only offer valuable insights into the two-dimensional lattice spin-orbit coupling, but also provide a systematic formalism to model high-band effects in lattice systems with Raman-assisted spin-orbit couplings.
High-order shock-fitted detonation propagation in high explosives
NASA Astrophysics Data System (ADS)
Romick, Christopher M.; Aslam, Tariq D.
2017-03-01
A highly accurate numerical shock and material interface fitting scheme composed of fifth-order spatial and third- or fifth-order temporal discretizations is applied to the two-dimensional reactive Euler equations in both slab and axisymmetric geometries. High rates of convergence are not typically possible with shock-capturing methods as the Taylor series analysis breaks down in the vicinity of discontinuities. Furthermore, for typical high explosive (HE) simulations, the effects of material interfaces at the charge boundary can also cause significant computational errors. Fitting a computational boundary to both the shock front and material interface (i.e. streamline) alleviates the computational errors associated with captured shocks and thus opens up the possibility of high rates of convergence for multi-dimensional shock and detonation flows. Several verification tests, including a Sedov blast wave, a Zel'dovich-von Neumann-Döring (ZND) detonation wave, and Taylor-Maccoll supersonic flow over a cone, are utilized to demonstrate high rates of convergence to nontrivial shock and reaction flows. Comparisons to previously published shock-capturing multi-dimensional detonations in a polytropic fluid with a constant adiabatic exponent (PF-CAE) are made, demonstrating significantly lower computational error for the present shock and material interface fitting method. For an error on the order of 10 m/s, which is similar to that observed in experiments, shock-fitting offers a computational savings on the order of 1000. In addition, the behavior of the detonation phase speed is examined for several slab widths to evaluate the detonation performance of PBX 9501 while utilizing the Wescott-Stewart-Davis (WSD) model, which is commonly used in HE modeling. It is found that the thickness effect curve resulting from this equation of state and reaction model using published values is dramatically steeper than observed in recent experiments.
Utilizing the present fitting strategy in conjunction with a nonlinear optimizer, a new set of reaction rate parameters is obtained that improves the agreement of the model with experimental results. Finally, this new model is tested against two-dimensional slabs as a validation test.
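The convergence rates emphasized above are conventionally verified by computing an observed order of accuracy from errors on successively refined grids. A minimal helper (our illustration, not the authors' code) for the standard formula p = log(e_coarse/e_fine)/log(r):

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two grids:
    p = log(e_coarse / e_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# A fifth-order scheme halving the grid spacing should cut the
# error by about 2**5 = 32, so these errors indicate p close to 5.
p = observed_order(3.2e-6, 1.0e-7)
```

Shock-capturing schemes typically stall near first order at discontinuities regardless of the formal order, which is exactly what the shock-fitting strategy above avoids.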
ERIC Educational Resources Information Center
Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei
2017-01-01
Recent improvements in three-dimensional (3D) virtual modeling software allows anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…
Multiensemble Markov models of molecular thermodynamics and kinetics.
Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank
2016-06-07
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information for all ensembles. The approach combines the benefits of Markov state models (clustering of high-dimensional spaces and modeling of complex many-state systems) with those of the multistate Bennett acceptance ratio (exploiting biased or high-temperature ensembles to accelerate rare-event sampling). TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models, are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.
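To make the "Markov state model approximation" referenced above concrete, here is the simplest possible single-ensemble estimator: count transitions at a fixed lag time in a discretized trajectory and row-normalize. This is only the plain maximum-likelihood MSM without a reversibility constraint; TRAM itself additionally enforces detailed balance and reweights configurations between ensembles.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Maximum-likelihood Markov state model from a discretized
    trajectory: count transitions at the given lag time, then
    row-normalize the count matrix into a transition matrix."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1.0
    T = C / C.sum(axis=1, keepdims=True)
    return T
```

Each row of the returned matrix is a probability distribution over successor states, which is the object TRAM generalizes to multiple thermodynamic ensembles.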
Spectral properties near the Mott transition in the two-dimensional Hubbard model
NASA Astrophysics Data System (ADS)
Kohno, Masanori
2013-03-01
Single-particle excitations near the Mott transition in the two-dimensional (2D) Hubbard model are investigated by using cluster perturbation theory. The Mott transition is characterized by the loss of the spectral weight from the dispersing mode that leads continuously to the spin-wave excitation of the Mott insulator. The origins of the dominant modes of the 2D Hubbard model near the Mott transition can be traced back to those of the one-dimensional Hubbard model. Various anomalous spectral features observed in cuprate high-temperature superconductors, such as the pseudogap, Fermi arc, flat band, doping-induced states, hole pockets, and spinon-like and holon-like branches, as well as giant kink and waterfall in the dispersion relation, are explained in a unified manner as properties near the Mott transition in a 2D system.
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models under consideration is sensitive to the NCAR assumption, and we use this analysis to identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
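The concordance statistic at the center of this abstract has a simple pairwise definition. The sketch below implements the basic Harrell-type estimator for right-censored data: a pair is comparable when the earlier observed time is an event, and concordant when that subject also has the higher predicted risk. The IPCW estimators studied in the paper go further by weighting pairs with the inverse censoring-survival probability; that weighting is omitted here.

```python
def c_statistic(time, event, risk):
    """Pairwise concordance for right-censored data.  `event[i]` is 1
    if subject i had an observed event, 0 if censored; `risk[i]` is
    the model's predicted risk score (higher = event expected sooner)."""
    concordant = comparable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # the pair is comparable only if i's earlier time is an event
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5    # ties get half credit
    return concordant / comparable
```

A perfectly ordered risk score gives c = 1, random scores give about 0.5, and a perfectly reversed score gives 0.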
Numerical modeling of consolidation processes in hydraulically deposited soils
NASA Astrophysics Data System (ADS)
Brink, Nicholas Robert
Hydraulically deposited soils are encountered in many common engineering applications including mine tailings and geotextile tube fills, though the consolidation process for such soils is highly nonlinear and requires the use of advanced numerical techniques to provide accurate predictions. Several commercially available finite element codes possess the ability to model soil consolidation, and it was the goal of this research to assess the ability of two of these codes, ABAQUS and PLAXIS, to model the large-strain, two-dimensional consolidation processes which occur in hydraulically deposited soils. A series of one- and two-dimensionally drained rectangular models were first created to assess the limitations of ABAQUS and PLAXIS when modeling consolidation of highly compressible soils. Then, geotextile tube and tailings storage facility (TSF) models were created to represent actual scenarios which might be encountered in engineering practice. Several limitations were discovered, including the existence of a minimum preconsolidation stress below which numerical solutions become unstable.
NASA Astrophysics Data System (ADS)
Tiguercha, Djlalli; Bennis, Anne-claire; Ezersky, Alexander
2015-04-01
The elliptical motion in surface waves causes an oscillating motion of the sand grains, leading to the formation of ripple patterns on the bottom. Investigating how grains with different properties are distributed inside the ripples is a difficult task because of particle segregation. The work of Fernandez et al. (2003) was extended from the one-dimensional to the two-dimensional case. A new numerical model, based on these non-linear diffusion equations, was developed to simulate the grain distribution inside marine sand ripples. The one- and two-dimensional models are validated on several test cases where segregation appears. Starting from a homogeneous mixture of grains, the two-dimensional simulations demonstrate different segregation patterns: (a) formation of zones with high concentration of light and heavy particles, (b) formation of "cat's eye" patterns, and (c) appearance of the inverse Brazil nut effect. Comparisons of numerical results with the new set of field data and wave flume experiments show that the two-dimensional non-linear diffusion equations allow us to reproduce qualitatively the experimental results on particle segregation.
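Models of this type advance a concentration field with a conservative, concentration-dependent diffusion law. The specific coupled equations of Fernandez et al. are not reproduced here; the sketch below only illustrates the generic explicit finite-volume update for a one-dimensional equation dc/dt = d/dx(D(c) dc/dx) with no-flux walls, which is the numerical skeleton such a model builds on.

```python
import numpy as np

def nonlinear_diffusion_step(c, dx, dt, D):
    """One explicit step of dc/dt = d/dx( D(c) dc/dx ) with no-flux
    boundaries.  Fluxes are evaluated at cell faces so that the total
    amount of material is conserved exactly."""
    Dface = 0.5 * (D(c[:-1]) + D(c[1:]))        # diffusivity at faces
    flux = -Dface * (c[1:] - c[:-1]) / dx       # Fickian face fluxes
    dcdt = np.zeros_like(c)
    dcdt[1:-1] = -(flux[1:] - flux[:-1]) / dx   # interior divergence
    dcdt[0] = -flux[0] / dx                     # no-flux left wall
    dcdt[-1] = flux[-1] / dx                    # no-flux right wall
    return c + dt * dcdt
```

Because the update is written in flux form, the discrete sum of c is conserved to round-off, a property worth checking in any segregation model since the total mass of each grain species is physically fixed.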
Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.
2001-01-01
Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. 
Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2016-12-01
Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, the PC surrogate construction suffers strongly from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate with very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S.
Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
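The connection between PC coefficients and global sensitivity analysis mentioned above is direct: with an orthonormal basis, main-effect Sobol indices are ratios of sums of squared coefficients. The sketch below is the plain, non-adaptive least-squares construction in two inputs that WIBCS improves on (WIBCS instead grows the basis adaptively and imposes sparsity); all names are ours.

```python
import numpy as np
from numpy.polynomial import legendre

def pc_fit_and_sobol(X, y, max_deg=2):
    """Least-squares Polynomial Chaos fit on [-1, 1]^2 with normalized
    Legendre polynomials, plus main-effect Sobol indices read directly
    off the squared coefficients."""
    # multi-indices with total degree <= max_deg
    mi = [(i, j) for i in range(max_deg + 1)
                 for j in range(max_deg + 1) if i + j <= max_deg]

    def phi(d, x):  # normalized so E[phi^2] = 1 under U(-1, 1)
        coefs = np.zeros(d + 1)
        coefs[d] = 1.0
        return legendre.legval(x, coefs) * np.sqrt(2 * d + 1)

    A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in mi])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    var = sum(c**2 for (i, j), c in zip(mi, coef) if (i, j) != (0, 0))
    S1 = sum(c**2 for (i, j), c in zip(mi, coef) if i > 0 and j == 0) / var
    S2 = sum(c**2 for (i, j), c in zip(mi, coef) if j > 0 and i == 0) / var
    return coef, S1, S2
```

The curse of dimensionality is visible already here: the basis size grows combinatorially with dimension and degree, which is exactly why sparsity-promoting construction becomes necessary at 65 inputs.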
Percolation and epidemics in a two-dimensional small world
NASA Astrophysics Data System (ADS)
Newman, M. E.; Jensen, I.; Ziff, R. M.
2002-02-01
Percolation on two-dimensional small-world networks has been proposed as a model for the spread of plant diseases. In this paper we give an analytic solution of this model using a combination of generating function methods and high-order series expansion. Our solution gives accurate predictions for quantities such as the position of the percolation threshold and the typical size of disease outbreaks as a function of the density of ``shortcuts'' in the small-world network. Our results agree with scaling hypotheses and numerical simulations for the same model.
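The numerical simulations the analytic solution is checked against can be reproduced in miniature with a union-find cluster count. The sketch below is our illustration, not the authors' code: bond percolation on a periodic square lattice with a handful of random shortcut bonds (here occupied with the same probability as lattice bonds, a simplifying assumption), returning the largest cluster fraction.

```python
import random

def largest_cluster_fraction(L, p_bond, n_shortcuts, seed=0):
    """Bond percolation on an L x L periodic square lattice with
    randomly placed shortcut bonds, in the spirit of a two-dimensional
    small-world network.  Returns largest cluster size / total sites."""
    rng = random.Random(seed)
    n = L * L
    parent = list(range(n))

    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for x in range(L):                 # nearest-neighbor lattice bonds
        for y in range(L):
            s = x * L + y
            for t in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                if rng.random() < p_bond:
                    union(s, t)
    for _ in range(n_shortcuts):       # long-range shortcut bonds
        if rng.random() < p_bond:
            union(rng.randrange(n), rng.randrange(n))

    sizes = {}
    for s in range(n):
        r = find(s)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n
```

Sweeping `p_bond` and averaging over seeds traces out the percolation transition; adding shortcuts shifts the threshold downward, which is the effect the paper's generating-function solution quantifies.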
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics in phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables increase in three-dimensional simulation system size up to a 5123 grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high resolution parallel simulations are greatly improved over those obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows a good agreement with actual run time from numerical tests.
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from ``snapshots'' of the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
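The POD step described above reduces, in its simplest form, to a truncated SVD of a snapshot matrix. A minimal sketch (ours, with an illustrative energy tolerance; the paper's construction is far richer and also involves DEIM):

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Proper orthogonal decomposition of a snapshot matrix whose
    columns are states: take the SVD and keep the leading modes that
    capture a fraction 1 - tol of the energy (squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

def project(V, x):
    """Galerkin projection onto the POD basis and lift back."""
    return V @ (V.T @ x)
```

States that lie in the span of the snapshots are reproduced exactly by the projection; the payoff is that the forward model can then be solved in the r-dimensional reduced coordinates instead of the full state space.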
Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach
NASA Astrophysics Data System (ADS)
Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.
2016-09-01
The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.
NASA Technical Reports Server (NTRS)
Stordal, Frode; Garcia, Rolando R.
1987-01-01
The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
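The projection onto Legendre polynomials that defines this truncated model is a pair of quadrature integrals. As a rough illustration (our sketch, not Holton's formulation, which couples the coefficients dynamically), a latitudinal profile f(x) with x = sin(latitude) can be decomposed into its even Legendre components via Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_projection(f, degrees=(0, 2)):
    """Project a latitudinal profile f(x), x = sin(latitude), onto the
    chosen Legendre polynomials: a_n equals (2n+1)/2 times the integral
    of f * P_n over [-1, 1], evaluated by Gauss-Legendre quadrature."""
    x, w = legendre.leggauss(16)       # nodes and weights on [-1, 1]
    coeffs = {}
    for n in degrees:
        cvec = np.zeros(n + 1)
        cvec[n] = 1.0
        Pn = legendre.legval(x, cvec)  # P_n at the quadrature nodes
        coeffs[n] = (2 * n + 1) / 2 * np.sum(w * f(x) * Pn)
    return coeffs
```

Truncating at P0 and P2 retains exactly the global mean and the leading equator-to-pole contrast, which is why such a "1-1/2-D" model can mimic the latitudinal behavior of a full two-dimensional model at a fraction of the cost.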
NASA Technical Reports Server (NTRS)
Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard
1991-01-01
Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold compression, or 'coining', fabrication technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.
Low-dimensional manifold of actin polymerization dynamics
NASA Astrophysics Data System (ADS)
Floyd, Carlos; Jarzynski, Christopher; Papoian, Garegin
2017-12-01
Actin filaments are critical components of the eukaryotic cytoskeleton, playing important roles in a number of cellular functions, such as cell migration, organelle transport, and mechanosensation. They are helical polymers with a well-defined polarity, composed of globular subunits that bind nucleotides in one of three hydrolysis states (ATP, ADP-Pi, or ADP). Mean-field models of the dynamics of actin polymerization have succeeded in, among other things, determining the nucleotide profile of an average filament and resolving the mechanisms of accessory proteins. However, these models require numerical solution of a high-dimensional system of nonlinear ordinary differential equations. By truncating a set of recursion equations, the Brooks-Carlsson (BC) model reduces dimensionality to 11, but it still remains nonlinear and does not admit an analytical solution, hence, significantly hindering understanding of its resulting dynamics. In this work, by taking advantage of the fast timescales of the hydrolysis states of the filament tips, we propose two model reduction schemes: the quasi steady-state approximation model is five-dimensional and nonlinear, whereas the constant tip (CT) model is five-dimensional and linear, resulting from the approximation that the tip states are not dynamic variables. We provide an exact solution of the CT model and use it to shed light on the dynamical behaviors of the full BC model, highlighting the relative ordering of the timescales of various collective processes, and explaining some unusual dependence of the steady-state behavior on initial conditions.
Three-dimensional drift kinetic response of high-β plasmas in the DIII-D tokamak
Wang, Zhirui R.; Lanctot, Matthew J.; Liu, Y. Q.; ...
2015-04-07
A quantitative interpretation of the experimentally measured high-pressure plasma response to externally applied three-dimensional (3D) magnetic field perturbations, across the no-wall Troyon limit, is achieved. The key to success is the self-consistent inclusion of the drift kinetic resonance effects in numerical modeling using the MARS-K code. This resolves an outstanding issue of the ideal magneto-hydrodynamic model, which significantly over-predicts the plasma-induced field amplification near the no-wall limit, as compared to experiments. The self-consistent drift kinetic model leads to quantitative agreement not only for the measured 3D field amplitude and toroidal phase, but also for the measured internal 3D displacement of the plasma.
Dynamic Fracture Simulations of Explosively Loaded Cylinders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arthur, Carly W.; Goto, D. M.
2015-11-30
This report documents the modeling results of high explosive experiments investigating dynamic fracture of steel (AerMet® 100 alloy) cylinders. The experiments were conducted at Lawrence Livermore National Laboratory (LLNL) during 2007 to 2008 [10]. A principal objective of this study was to gain an understanding of dynamic material failure through the analysis of hydrodynamic computer code simulations. Two-dimensional and three-dimensional computational cylinder models were analyzed using the ALE3D multi-physics computer code.
Using maximum topology matching to explore differences in species distribution models
Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio
2015-01-01
Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as one-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.
Patra, Sarbani; Keshavamurthy, Srihari
2018-02-14
It has been known for some time now that isomerization reactions, classically, are mediated by phase space structures called reactive islands (RIs). RIs provide one possible route to correct for the nonstatistical effects in the reaction dynamics. In this work, we map out the reactive islands for the two-dimensional Müller-Brown model potential and show that the reactive islands are intimately linked to the issue of rare event sampling. In particular, we establish the sensitivity of the so-called committor probabilities, useful quantities in the transition path sampling technique, to the hierarchical RI structures. Mapping out the RI structure for high-dimensional systems, however, is a challenging task. Here, we show that the technique of Lagrangian descriptors is able to effectively identify the RI hierarchy in the model system. Based on our results, we suggest that Lagrangian descriptors can be useful for detecting RIs in high-dimensional systems.
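A Lagrangian descriptor is, in its simplest arc-length form, the accumulated speed along a trajectory launched from each phase-space point; structures stand out where the descriptor field changes abruptly. The sketch below uses the standard literature parameters for the Müller-Brown potential, but simplifies the dynamics to the overdamped gradient flow ẋ = -∇V integrated with explicit Euler (the paper's analysis concerns the full Hamiltonian dynamics); it only illustrates the construction.

```python
import math

# Mueller-Brown potential parameters (standard literature values)
A = (-200.0, -100.0, -170.0, 15.0)
a = (-1.0, -1.0, -6.5, 0.7)
b = (0.0, 0.0, 11.0, 0.6)
c = (-10.0, -10.0, -6.5, 0.7)
X0 = (1.0, 0.0, -0.5, -1.0)
Y0 = (0.0, 0.5, 1.5, 1.0)

def grad_V(x, y):
    """Analytic gradient of the Mueller-Brown potential."""
    gx = gy = 0.0
    for Ak, ak, bk, ck, xk, yk in zip(A, a, b, c, X0, Y0):
        dx, dy = x - xk, y - yk
        e = Ak * math.exp(ak * dx * dx + bk * dx * dy + ck * dy * dy)
        gx += e * (2 * ak * dx + bk * dy)
        gy += e * (bk * dx + 2 * ck * dy)
    return gx, gy

def lagrangian_descriptor(x, y, steps=500, dt=1e-4):
    """Arc-length Lagrangian descriptor M = integral of the speed along
    the overdamped trajectory xdot = -grad V, forward in time only,
    integrated with explicit Euler."""
    M = 0.0
    for _ in range(steps):
        gx, gy = grad_V(x, y)
        M += math.hypot(gx, gy) * dt
        x -= gx * dt
        y -= gy * dt
    return M
```

Evaluating M on a grid of initial conditions and plotting it reveals sharp ridges along invariant manifolds; points already sitting at a potential minimum accumulate almost no descriptor value, while points on the surrounding slopes do.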
Selected topics in high energy physics: Flavon, neutrino and extra-dimensional models
NASA Astrophysics Data System (ADS)
Dorsner, Ilja
There is already significant evidence, both experimental and theoretical, that the Standard Model of elementary particle physics is just another effective physical theory. Thus, it is crucial (a) to anticipate the experiments in the search for signatures of physics beyond the Standard Model, and (b) to determine whether some theoretically preferred structure can reproduce the low-energy signature of the Standard Model. This work pursues these two directions by investigating various extensions of the Standard Model. One of them is a simple flavon model that accommodates the observed hierarchy of the charged fermion masses and mixings. We show that flavor changing and CP violating signatures of this model are equally near the present experimental limits. We find that, for a significant range of parameters, mu-e conversion can be the most sensitive place to look for such signatures. We then propose two variants of an SO(10) model in a five-dimensional framework. The first variant demonstrates that one can embed a four-dimensional flipped SU(5) model into a five-dimensional SO(10) model. This allows one to maintain the advantages of flipped SU(5) while avoiding its well-known drawbacks. The second variant shows that exact unification of the gauge couplings is possible even in the higher-dimensional setting. This unification yields low-energy values of the gauge couplings that are in perfect agreement with experimental values. We show that the corrections to the usual four-dimensional running, due to the Kaluza-Klein towers of states, can be unambiguously and systematically evaluated. We also consider the various main types of models of neutrino masses and mixings from the point of view of how naturally they give the large mixing angle MSW solution to the solar neutrino problem. Special attention is given to one particular "lopsided" SU(5) model, which is then analyzed in a completely statistical manner.
We suggest that this sort of statistical analysis should be applicable to other models of neutrino mixing.
Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation
NASA Astrophysics Data System (ADS)
Durlofsky, L. J.; He, J.; Jin, L. Z.
2014-12-01
A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
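The two ingredients of the abstract's surrogate, a POD basis extracted from training snapshots and a linearized step taken in the reduced subspace, can be sketched roughly as follows. This is a minimal illustration with synthetic snapshots, a placeholder Jacobian, and arbitrary dimensions, not the compositional simulator or values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline stage: snapshots saved from "training" runs (synthetic here; in practice
# these are full-order simulation states) are compressed with an SVD to get a POD basis.
n_full, n_snap = 200, 30
snapshots = rng.standard_normal((n_full, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                       # reduced dimension
Phi = U[:, :r]              # POD basis: columns span the dominant snapshot subspace

# Online stage (TPWL idea): propagate a new state using the Jacobian saved at a
# nearby training state, projected into the POD subspace.
x_saved = snapshots[:, 0]            # a saved training state
J_saved = -0.1 * np.eye(n_full)      # saved Jacobian (placeholder stable dynamics)
A_r = Phi.T @ J_saved @ Phi          # reduced r x r operator -> cheap time steps

z = Phi.T @ (x_saved + 0.01 * rng.standard_normal(n_full))  # reduced state
dt = 0.1
z_next = z + dt * (A_r @ z)          # one low-dimensional linear step
x_next = Phi @ z_next                # lift back to full space when needed
```

At each time step only the r x r system is advanced, which is where the O(100-1000) speedups quoted in the abstract come from; the overhead is the training runs and the saved states/Jacobians.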
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
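To make the object of study concrete, the following sketch fits a lasso-penalized Cox model by proximal gradient descent on the Breslow negative log partial likelihood (no tie handling). The finite-difference gradient, step size, and simulated data are simplifications for brevity; real solvers use the analytic score and handle ties, and this is not the paper's estimator, only the loss it analyzes.

```python
import numpy as np

def neg_log_partial_likelihood(beta, X, time, event):
    """Breslow negative log partial likelihood (assumes no tied event times)."""
    eta = X @ beta
    order = np.argsort(-time)                  # sort subjects by decreasing time
    eta_o, ev_o = eta[order], event[order]
    log_risk = np.logaddexp.accumulate(eta_o)  # log sum exp over each risk set
    return -np.sum((eta_o - log_risk)[ev_o == 1])

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_cox(X, time, event, lam=0.1, lr=0.01, n_iter=500):
    """Proximal-gradient (ISTA-style) lasso Cox fit; finite-difference gradient
    keeps the sketch short."""
    p = X.shape[1]
    beta = np.zeros(p)
    eps = 1e-6
    for _ in range(n_iter):
        g = np.array([
            (neg_log_partial_likelihood(beta + eps * np.eye(p)[j], X, time, event)
             - neg_log_partial_likelihood(beta - eps * np.eye(p)[j], X, time, event))
            / (2 * eps) for j in range(p)])
        beta = soft_threshold(beta - lr * g, lr * lam)
    return beta

# Simulated censoring-free data where only the first covariate has an effect.
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, 0, 0, 0, 0])
time = rng.exponential(1.0 / np.exp(X @ beta_true))
event = np.ones(n, dtype=int)
beta_hat = lasso_cox(X, time, event)
```

The non-iid structure the abstract highlights is visible in `log_risk`: each subject's term couples to everyone still at risk, so the summands are neither independent nor Lipschitz in beta.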
Dimensionality reduction in epidemic spreading models
NASA Astrophysics Data System (ADS)
Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.
2015-09-01
Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric feature mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
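The ISOMAP pipeline the abstract relies on (neighborhood graph, geodesic distances, classical MDS) is compact enough to sketch with numpy alone. This is a minimal textbook implementation on a toy helix, not the epidemic datasets of the study; the Floyd-Warshall step is O(n^3), so it only suits small examples.

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal ISOMAP: k-NN graph -> geodesic (shortest-path) distances -> classical MDS."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # Euclidean distances
    # k-NN graph: keep distances to the k nearest neighbors, infinity elsewhere.
    G = np.full((n, n), np.inf)
    nn = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]            # skip self at column 0
    rows = np.repeat(np.arange(n), n_neighbors)
    G[rows, nn.ravel()] = D[rows, nn.ravel()]
    G = np.minimum(G, G.T)                                      # symmetrize the graph
    np.fill_diagonal(G, 0.0)
    # Geodesic distances by Floyd-Warshall (fine for small n).
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the squared geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy data: a 1D curve (helix segment) embedded in 3D; the first embedding
# coordinate should recover position along the curve.
t = np.linspace(0.0, 3.0, 40)
X = np.column_stack([np.cos(t), np.sin(t), t])
Y = isomap(X)
```

For production use, `sklearn.manifold.Isomap` replaces the cubic shortest-path step with sparse-graph algorithms.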
NASA Astrophysics Data System (ADS)
Kobayashi, H.; Ryu, Y.; Ustin, S.; Baldocchi, D. D.
2009-12-01
B15: Remote Characterization of Vegetation Structure: Including Research to Inform the Planned NASA DESDynI and ESA BIOMASS Missions Title: Spatial radiation environment in a heterogeneous oak woodland using a three-dimensional radiative transfer model and multiple constraints from observations Hideki Kobayashi, Youngryel Ryu, Susan Ustin, and Dennis Baldocchi Abstract Accurate evaluations of radiation environments of visible, near infrared, and thermal infrared wavebands in forest canopies are important to estimate energy, water, and carbon fluxes. Californian oak woodlands are sparse and highly clumped so that radiation environments are extremely heterogeneous spatially. The heterogeneity of radiation environments also varies with wavebands which depend on scattering and emission properties. So far, most of modeling studies have been performed in one dimensional radiative transfer models with (or without) clumping effect in the forest canopies. While some studies have been performed by using three dimensional radiative transfer models, several issues are still unresolved. For example, some 3D models calculate the radiation field with individual tree basis, and radiation interactions among trees are not considered. This interaction could be important in the highly scattering waveband such as near infrared. The objective of this study is to quantify the radiation field in the oak woodland. We developed a three dimensional radiative transfer model, which includes the thermal waveband. Soil/canopy energy balances and canopy physiology models, CANOAK, are incorporated in the radiative transfer model to simulate the diurnal patterns of thermal radiation fields and canopy physiology. Airborne LiDAR and canopy gap data measured by the several methods (digital photographs and plant canopy analyzer) were used to constrain the forest structures such as tree positions, crown sizes and leaf area density. 
Modeling results were tested by a traversing radiometer system that measured incoming photosynthetically active radiation and net radiation at forest floor and spatial variations in canopy reflectances taken by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). In this study, we show how the model with available measurements can reproduce the spatially heterogeneous radiation environments in the oak woodland.
Multiensemble Markov models of molecular thermodynamics and kinetics
Wu, Hao; Paul, Fabian; Noé, Frank
2016-01-01
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302
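The Markov-state-model building block that TRAM generalizes can be illustrated in a few lines: count transitions at a lag time in a discretized trajectory, symmetrize the counts as a crude way to enforce detailed balance, and extract the stationary distribution. This single-ensemble sketch is an assumption-laden toy, not TRAM itself, which couples counts across many biased ensembles.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Count-matrix Markov state model with a simple symmetrized
    (detailed-balance) transition-matrix estimate."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):   # transition counts at the lag time
        C[a, b] += 1.0
    C_sym = C + C.T                               # crude detailed-balance enforcement
    T = C_sym / C_sym.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    # Stationary distribution: left eigenvector of T for eigenvalue 1.
    w, V = np.linalg.eig(T.T)
    pi = np.real(V[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()                            # normalize (also fixes the sign)
    return T, pi

# Toy discrete trajectory over two states.
dtraj = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 0] * 20)
T, pi = estimate_msm(dtraj, 2)
```

Maximum-likelihood reversible estimators (as used in practice) replace the naive symmetrization, but the detailed-balance constraint the abstract mentions enters in the same place.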
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli
This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and explores new opportunities in the use of surrogate models by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.
GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA
Zheng, Qi; Peng, Limin; He, Xuming
2015-01-01
Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424
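At a single quantile level, the penalized problem the abstract builds on is the check loss plus an L1 penalty; the following proximal-subgradient sketch shows that base case on simulated data. The optimizer, step size, and data are illustrative assumptions; the paper's contribution (adaptive weights and a uniform tuning-parameter selector across quantile levels) is not implemented here.

```python
import numpy as np

def check_loss_grad(beta, X, y, tau):
    """Subgradient of the averaged quantile (check) loss at level tau."""
    r = y - X @ beta
    w = np.where(r > 0, -tau, 1.0 - tau)   # d rho_tau / d prediction, per sample
    return X.T @ w / len(y)

def l1_quantile_reg(X, y, tau=0.5, lam=0.05, lr=0.1, n_iter=2000):
    """Proximal subgradient sketch of L1-penalized quantile regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = beta - lr * check_loss_grad(beta, X, y, tau)
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)  # soft threshold
    return beta

# Sparse toy model: only the first covariate affects the response median.
rng = np.random.default_rng(2)
n = 200
X = rng.standard_normal((n, 3))
y = X @ np.array([2.0, 0.0, 0.0]) + 0.3 * rng.standard_normal(n)
beta = l1_quantile_reg(X, y, tau=0.5)
```

Running the same fit at several `tau` values and comparing the selected supports is exactly the instability across prespecified quantile levels that motivates the paper's uniform selector.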
NASA Astrophysics Data System (ADS)
Deen, David A.; Miller, Ross A.; Osinsky, Andrei V.; Downey, Brian P.; Storm, David F.; Meyer, David J.; Scott Katzer, D.; Nepal, Neeraj
2016-12-01
A dual-channel AlN/GaN/AlN/GaN high electron mobility transistor (HEMT) architecture is proposed, simulated, and demonstrated that suppresses gate lag due to surface-originated trapped charge. Dual two-dimensional electron gas (2DEG) channels are utilized such that the top 2DEG serves as an equipotential that screens potential fluctuations resulting from surface trapped charge. The bottom channel serves as the transistor's modulated channel. Two device modeling approaches have been performed as a means to guide the device design and to elucidate the relationship between the design and performance metrics. The modeling efforts include a self-consistent Poisson-Schrodinger solution for electrostatic simulation as well as hydrodynamic three-dimensional device modeling for three-dimensional electrostatics, steady-state, and transient simulations. Experimental results validated the HEMT design whereby homo-epitaxial growth on free-standing GaN substrates and fabrication of the same-wafer dual-channel and recessed-gate AlN/GaN HEMTs have been demonstrated. Notable pulsed-gate performance has been achieved by the fabricated HEMTs through a gate lag ratio of 0.86 with minimal drain current collapse while maintaining high levels of dc and rf performance.
Ionospheric hot spot at high latitudes
NASA Technical Reports Server (NTRS)
Schunk, R. W.; Sojka, J. J.
1982-01-01
Schunk and Raitt (1980) and Sojka et al. (1981) have developed a model of the convecting high-latitude ionosphere in order to determine the extent to which various chemical and transport processes affect the ion composition and electron density at F-region altitudes. The numerical model produces time-dependent, three-dimensional ion density distributions for the ions NO(+), O2(+), N2(+), O(+), N(+), and He(+). Recently, the high-latitude ionospheric model has been improved by including thermal conduction and diffusion-thermal heat flow terms. Schunk and Sojka (1982) have studied the ion temperature variations in the daytime high-latitude F-region. In the present study, a time-dependent three-dimensional ion temperature distribution is obtained for the high-latitude ionosphere for an asymmetric convection electric field pattern with enhanced flow in the dusk sector of the polar region. It is shown that such a convection pattern produces a hot spot in the ion temperature distribution which coincides with the location of the strong convection cell.
Electron transfer from a carbon nanotube into vacuum under high electric fields
NASA Astrophysics Data System (ADS)
Filip, L. D.; Smith, R. C.; Carey, J. D.; Silva, S. R. P.
2009-05-01
The transfer of an electron from a carbon nanotube (CNT) tip into vacuum under a high electric field is considered beyond the usual one-dimensional semi-classical approach. A model of the potential energy outside the CNT cap is proposed in order to show the importance of the intrinsic CNT parameters such as radius, length and vacuum barrier height. This model also takes into account set-up parameters such as the shape of the anode and the anode-to-cathode distance, which are generically portable to any modelling study of electron emission from a tip emitter. Results obtained within our model compare well to experimental data. Moreover, in contrast to the usual one-dimensional Wentzel-Kramers-Brillouin description, our model retains the ability to explain non-standard features of the process of electron field emission from CNTs that arise as a result of the quantum behaviour of electrons on the surface of the CNT.
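The baseline the abstract contrasts itself with, the one-dimensional WKB picture, reduces to a single barrier integral. As a hedged illustration only: the sketch below evaluates the WKB transmission through a triangular surface barrier V(x) = phi - eFx (the standard Fowler-Nordheim starting point), with an assumed work function; the paper's model of the CNT cap potential goes well beyond this.

```python
import numpy as np

def wkb_transmission(phi_eV, field):
    """WKB tunneling probability through a triangular vacuum barrier
    V(x) = phi - e*F*x, for an electron at the Fermi level."""
    hbar, m, e = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
    phi = phi_eV * e
    x = np.linspace(0.0, phi / (e * field), 4000)   # classical barrier width
    # Decay constant kappa(x) = sqrt(2 m (V(x) - E)) / hbar inside the barrier.
    kappa = np.sqrt(2.0 * m * np.maximum(phi - e * field * x, 0.0)) / hbar
    integral = np.sum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))  # trapezoid rule
    return np.exp(-2.0 * integral)

# Transmission rises steeply with the applied field (assumed 4.5 eV work function).
t_low = wkb_transmission(4.5, 3e9)    # 3 V/nm
t_high = wkb_transmission(4.5, 5e9)   # 5 V/nm
```

The exponential sensitivity to the field and barrier height seen here is why the geometry-dependent corrections the paper introduces (tip radius, anode shape, anode-cathode distance) matter so much for emission currents.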
Data-Driven Modeling of Complex Systems by means of a Dynamical ANN
NASA Astrophysics Data System (ADS)
Seleznev, A.; Mukhin, D.; Gavrilov, A.; Loskutov, E.; Feigin, A.
2017-12-01
The data-driven methods for modeling and prognosis of complex dynamical systems are becoming more and more popular in various fields due to the growth of high-resolution data. We distinguish two basic steps in such an approach: (i) determining the phase subspace of the system, or embedding, from available time series and (ii) constructing an evolution operator acting in this reduced subspace. In this work we suggest a novel approach combining these two steps by means of the construction of an artificial neural network (ANN) with a special topology. The proposed ANN-based model, on the one hand, projects the data onto a low-dimensional manifold and, on the other hand, models a dynamical system on this manifold. It is, in effect, a recurrent multilayer ANN which has internal dynamics and is capable of generating time series. A very important point of the proposed methodology is the optimization of the model, which allows us to avoid overfitting: we use a Bayesian criterion to optimize the ANN structure and estimate both the degree of evolution-operator nonlinearity and the complexity of the nonlinear manifold onto which the data are projected. The proposed modeling technique will be applied to the analysis of high-dimensional dynamical systems: the Lorenz'96 model of atmospheric turbulence, producing high-dimensional space-time chaos, and a quasi-geostrophic three-layer model of the Earth's atmosphere with natural orography, describing the dynamics of synoptic vortexes as well as mesoscale blocking systems. The possibility of applying the proposed methodology to the analysis of real measured data is also discussed. The study was supported by the Russian Science Foundation (grant #16-12-10198).
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Harris, James Austin; Hix, William Raphael
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
Identification of aerodynamic models for maneuvering aircraft
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Hu, C. C.
1992-01-01
A Fourier analysis method was developed to analyze harmonic forced-oscillation data at high angles of attack as functions of the angle of attack and its time rate of change. The resulting aerodynamic responses at different frequencies are used to build up the aerodynamic models involving time integrals of the indicial type. An efficient numerical method was also developed to evaluate these time integrals for arbitrary motions based on a concept of equivalent harmonic motion. The method was verified by first using results from two-dimensional and three-dimensional linear theories. The developed models for C_L, C_D, and C_M based on high-alpha data for a 70 deg delta wing in harmonic motions showed accurate results in reproducing hysteresis. The aerodynamic models are further verified by comparing with test data using ramp-type motions.
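The first step of such a Fourier analysis, splitting a forced-oscillation response into mean, in-phase, and out-of-phase components at the forcing frequency, can be done with an ordinary least-squares harmonic fit. The sketch below uses synthetic data with assumed amplitudes; it illustrates the decomposition only, not the paper's indicial-integral models.

```python
import numpy as np

def harmonic_components(t, signal, omega):
    """Least-squares extraction of the mean, in-phase (sin), and out-of-phase (cos)
    components of a response to forced oscillation at angular frequency omega."""
    M = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
    coef, *_ = np.linalg.lstsq(M, signal, rcond=None)
    return coef    # [mean, in-phase amplitude, out-of-phase amplitude]

# Synthetic response with known components plus small noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 20.0, 500)
omega = 2.0
signal = (0.5 + 0.3 * np.sin(omega * t) - 0.1 * np.cos(omega * t)
          + 0.01 * rng.standard_normal(t.size))
coef = harmonic_components(t, signal, omega)
```

Repeating the fit at several forcing frequencies gives the frequency-dependent aerodynamic responses from which indicial-type models are then assembled.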
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
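The predict/weight/resample loop that the clustered filter builds on is the standard bootstrap particle filter; the sketch below runs it on a scalar linear-Gaussian toy model so the filtering gain over the raw observations is easy to check. The model, noise levels, and particle count are illustrative assumptions; the clustered filter of the paper adds localization and particle adjustment on top of this loop for high-dimensional states.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf(obs, n_particles=500, proc_std=0.5, obs_std=0.5):
    """Basic bootstrap particle filter for x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise."""
    particles = rng.standard_normal(n_particles)
    means = []
    for y in obs:
        particles = 0.9 * particles + proc_std * rng.standard_normal(n_particles)  # predict
        logw = -0.5 * ((y - particles) / obs_std) ** 2                              # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)                        # resample
        particles = particles[idx]
        means.append(particles.mean())
    return np.array(means)

# Simulate a truth trajectory and noisy observations, then filter.
T = 100
x = 0.0
truth, obs = [], []
for _ in range(T):
    x = 0.9 * x + 0.5 * rng.standard_normal()
    truth.append(x)
    obs.append(x + 0.5 * rng.standard_normal())
truth, obs = np.array(truth), np.array(obs)
est = bootstrap_pf(obs)
```

The particle-collapse problem mentioned in the abstract shows up here when `obs_std` is made small: nearly all weight concentrates on one particle, which is what the clustered filter's particle adjustment is designed to prevent.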
Four-dimensional (4D) tracking of high-temperature microparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui, E-mail: zwang@lanl.gov; Liu, Q.; Waganaar, W.
High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
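The 3D reconstruction step behind such 4D tracking, recovering a point's position from its pixel coordinates in two calibrated pinhole cameras, can be sketched with linear (DLT) triangulation. The camera matrices and point below are made-up values for illustration, not the exploding-wire setup's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel coordinates
    in two pinhole cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two synthetic cameras sharing intrinsics K, separated by a 1 m baseline.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

Differencing reconstructed positions across frames then gives the 3D velocities; with the local constant-velocity approximation the abstract mentions, averaging over a few frames suppresses reconstruction noise.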
Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu
2017-01-01
We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553
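Principal component analysis of profile matrices, used in the abstract to separate suppression from enhancement of Ca2+-sensitive phenotypes, amounts to an SVD of the centered data. The sketch below applies it to synthetic profiles with two built-in groups; the group structure and dimensions are illustrative assumptions, not the CalMorph data.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA scores via SVD of the centered profile matrix
    (rows = mutants, columns = morphological traits)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * s[:n_components]

# Synthetic interaction profiles: group A is shifted along the first trait.
rng = np.random.default_rng(4)
base = rng.standard_normal(6)
group_a = base + np.array([5.0, 0, 0, 0, 0, 0]) + 0.5 * rng.standard_normal((10, 6))
group_b = base + 0.5 * rng.standard_normal((10, 6))
profiles = np.vstack([group_a, group_b])
scores = pca_scores(profiles)
```

Note that PCA component signs are arbitrary, so group separation should be judged by the gap between group score distributions, not by the sign of the scores.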
High-density Two-Dimensional Small Polaron Gas in a Delta-Doped Mott Insulator
Ouellette, Daniel G.; Moetakef, Pouya; Cain, Tyler A.; Zhang, Jack Y.; Stemmer, Susanne; Emin, David; Allen, S. James
2013-01-01
Heterointerfaces in complex oxide systems open new arenas in which to test models of strongly correlated materials and to explore the role of dimensionality in metal-insulator transitions (MITs) and small polaron formation. Close to the quantum critical point, Mott MITs depend on band filling controlled by random, disordered substitutional doping. Delta-doped Mott insulators are potentially free of random disorder and introduce a new arena in which to explore the effects of electron correlations and dimensionality. Epitaxial films of the prototypical Mott insulator GdTiO3 are delta-doped by substituting a single (GdO)+1 plane with a monolayer of charge-neutral SrO to produce a two-dimensional system with a high planar doping density. Unlike metallic SrTiO3 quantum wells in GdTiO3, the single SrO delta-doped layer exhibits thermally activated DC and optical conductivity that agree quantitatively with predictions of small polaron transport, but with an extremely high two-dimensional density of polarons, ~7 × 10^14 cm^-2. PMID:24257578
Bearing-Load Modeling and Analysis Study for Mechanically Connected Structures
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.
2006-01-01
Bearing-load response for a pin-loaded hole is studied within the context of two-dimensional finite element analyses. Pin-loaded-hole configurations are representative of mechanically connected structures, such as a stiffener fastened to a rib of an isogrid panel, that are idealized as part of a larger structural component. Within this context, the larger structural component may be idealized as a two-dimensional shell finite element model to identify load paths and high stress regions. Finite element modeling and analysis aspects of a pin-loaded hole are considered in the present paper including the use of linear and nonlinear springs to simulate the pin-bearing contact condition. Simulating pin-connected structures within a two-dimensional finite element analysis model using nonlinear spring or gap elements provides an effective way for accurate prediction of the local effective stress state and peak forces.
Tahir-Kheli, J; Goddard, W A
1993-01-01
The one-dimensional three-band Hubbard Hamiltonian is shown to be equivalent to an effective Hamiltonian that has independent spinon and holon quasiparticle excitations plus a weak coupling of the two. The spinon description includes both copper sites and oxygen hole sites leading to a one-dimensional antiferromagnet incommensurate with the copper lattice. The holons are spinless noninteracting fermions in a simple cosine band. Because the oxygen sites are in the Hamiltonian, the quasiparticles are much simpler than in the exact solution of the t-J model for 2t = +/- J. If a similar description is correct for two dimensions, then the holons will attract in a p-wave potential. PMID:11607436
NASA Technical Reports Server (NTRS)
Kennedy, Ronald; Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element solution strategy is developed to handle traveling load problems in rolling, moving, and rotating structures. The main thrust of this part consists of the development of three-dimensional and shell-type moving elements. In conjunction with this work, a compatible three-dimensional contact strategy is also developed. Based on these modeling capabilities, extensive analytical and experimental benchmarking is presented. Such testing includes traveling loads in rotating structures as well as low- and high-speed rolling contact involving standing-wave-type response behavior. These results point to the excellent modeling capabilities of moving element strategies.
NASA Astrophysics Data System (ADS)
Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres
2006-11-01
Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans, and are used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those that have been proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter, and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.
NASA Astrophysics Data System (ADS)
Aksenova, Olesya; Nikolaeva, Evgenia; Cehlár, Michal
2017-11-01
This work investigates the effectiveness of mathematical and three-dimensional computer modeling tools in the planning of fuel and energy complex processes at the planning and design phase of a thermal power plant (TPP). A solution for the purification of gas emissions at the design phase of waste treatment systems is proposed, employing mathematical and three-dimensional computer modeling: the E-nets apparatus and the development of a 3D model of the future gas emission purification system. This approach makes it possible to visualize the designed result, to select and scientifically justify an economically feasible technology, and to ensure a high environmental and social effect of the developed waste treatment system. The authors present results of a treatment of the planned technological processes and the gas emission purification system in terms of E-nets, using mathematical modeling in the Simulink application, which allowed a model of the device to be created from the library of standard blocks and calculations to be performed. A three-dimensional model of the gas emission purification system has been constructed; it makes it possible to visualize the technological processes, compare them with theoretical calculations at the design phase of a TPP and, if necessary, make adjustments.
Feasibility of High Energy Lasers for Interdiction Activities
2017-12-01
2.3.2 Power in the Bucket Another parameter we will use in this study is the power-in-the-bucket. The “bucket” is defined as the area on the target we...the heat diffusion equation for a one-dimensional case (where the x-direction is into the target) and assuming a semi-infinite slab of material. The... studied and modeled. One of the approaches to describe these interactions is by making a one-dimensional mathematical model assuming [8]: 1. A semi
Mattei, Lorenza; Di Puccio, Francesca; Joyce, Thomas J; Ciulli, Enrico
2016-08-01
Although huge research efforts have been devoted to wear analysis of ultra-high molecular weight polyethylene (UHMWPE) in hip and knee implants, shoulder prostheses have been studied only marginally. Recently, the authors presented a numerical wear model of reverse total shoulder arthroplasties (RTSAs) and its application for estimating the wear coefficient k from experimental data according to different wear laws. In this study, this model and the k expressions are exploited to investigate the sensitivity of UHMWPE wear to implant size and dimensional tolerance. A set of 10 different geometries was analysed, considering nominal diameters in the range 36-42 mm, available on the market, and a cup dimensional tolerance of +0.2/-0.0 mm (resulting in a diametrical clearance ranging between 0.04 and 0.24 mm), estimated from measurements on RTSAs. Since the most reliable wear law and wear coefficient k for UHMWPE are still controversial in the literature, both the Archard law (AR) and the wear law of UHMWPE (PE), as well as four different k expressions, were considered, for a total of 40 simulations. Results showed that the wear volume increases with the implant size and decreases with the dimensional tolerance for both wear laws. Interestingly, different trends were obtained for the maximum wear depth vs. clearance: the best performing implants should have a high conformity according to the AR law but a low conformity according to the PE law. However, according to both laws, wear is highly affected by both implant size and dimensional tolerance, although it is much more sensitive to the latter, with up to a twofold variation of predicted wear. Indeed, dimensional tolerance directly alters the clearance, and therefore the lubrication and contact pressure distribution in the implant. Rather surprisingly, the role of dimensional tolerance has been completely disregarded in the literature, as well as in the standards.
Furthermore, this study notes some important issues for future work, such as the validation of wear laws and predictive wear models and the sensitivity of k to implant geometry. Copyright © 2016 Elsevier Ltd. All rights reserved.
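The Archard relation referenced above can be sketched in a few lines. This is only the classical law V = k · F · s, not the paper's full RTSA wear model; the wear coefficient, load, and sliding distance below are hypothetical placeholder values.

```python
# Minimal sketch of the Archard wear law, V = k * F * s, to illustrate how
# predicted wear volume scales with load and sliding distance. All numerical
# values are hypothetical and are NOT taken from the RTSA study above.

def archard_wear_volume(k, load_n, sliding_m):
    """Wear volume in mm^3 for a wear coefficient k in mm^3/(N*m)."""
    return k * load_n * sliding_m

k = 1.0e-6          # hypothetical wear coefficient, mm^3/(N*m)
load = 400.0        # hypothetical joint load, N
distance = 1000.0   # hypothetical sliding distance per year, m

volume = archard_wear_volume(k, load, distance)
print(f"predicted wear volume: {volume:.2f} mm^3/year")  # 0.40 mm^3/year
```

The PE law mentioned in the abstract replaces the linear dependence on contact pressure with a law fitted specifically to UHMWPE; only the k and load terms change, not the structure of the calculation.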
Computational unsteady aerodynamics for lifting surfaces
NASA Technical Reports Server (NTRS)
Edwards, John W.
1988-01-01
Two-dimensional problems are solved using numerical techniques. The Navier-Stokes equations are studied both in the vorticity-stream function formulation, which appears to be the optimal choice for two-dimensional problems using a storage approach, and in the velocity-pressure formulation, which minimizes the number of unknowns in three-dimensional problems. Analysis shows that compact centered conservative second-order schemes for the vorticity equation are the most robust for high Reynolds number flows. Serious difficulties remain in the choice of turbulence models while keeping reasonable CPU efficiency.
Global environmental effects of impact-generated aerosols: Results from a general circulation model
NASA Technical Reports Server (NTRS)
Covey, Curt; Ghan, Steven J.; Walton, John J.; Weissman, Paul R.
1989-01-01
Interception of sunlight by the high altitude worldwide dust cloud generated by the impact of a large asteroid or comet would lead to substantial land surface cooling, according to a three-dimensional atmospheric general circulation model (GCM). This result is qualitatively similar to conclusions drawn from an earlier study that employed a one-dimensional atmospheric model, but in the GCM simulation the heat capacity of the oceans, not included in the one-dimensional model, substantially mitigates land surface cooling. On the other hand, the low heat capacity of the GCM's land surface allows temperatures to drop more rapidly in the initial stages of cooling than in the one-dimensional model study. GCM-simulated climatic changes in the scenario of asteroid/comet winter are more severe than in nuclear winter because the assumed aerosol amount is large enough to intercept all sunlight falling on Earth. Impacts of smaller objects could also lead to dramatic, though of course less severe, climatic changes, according to the GCM. An asteroid or comet impact would not lead to anything approaching complete global freezing, but it is quite reasonable to assume that impacts would dramatically alter the climate in at least a patchy sense.
Analytical modeling of circuit aerodynamics in the new NASA Lewis wind tunnel
NASA Technical Reports Server (NTRS)
Towne, C. E.; Povinelli, L. A.; Kunik, W. G.; Muramoto, K. K.; Hughes, C. E.; Levy, R.
1985-01-01
Rehabilitation and extension of the capability of the altitude wind tunnel (AWT) were analyzed. The analytical modeling program involves the use of advanced axisymmetric and three-dimensional viscous analyses to compute the flow through the various AWT components. Results for the analytical modeling of the high speed leg aerodynamics are presented; these include: an evaluation of the flow quality at the entrance to the test section, an investigation of the effects of test section bleed for different model blockages, and an examination of three-dimensional effects in the diffuser due to reentry flow and due to the change in cross-sectional shape of the exhaust scoop.
Two-dimensional vocal tracts with three-dimensional behavior in the numerical generation of vowels.
Arnela, Marc; Guasch, Oriol
2014-01-01
Two-dimensional (2D) numerical simulations of vocal tract acoustics may provide a good balance between the high quality of three-dimensional (3D) finite element approaches and the low computational cost of one-dimensional (1D) techniques. However, 2D models are usually generated by considering the 2D vocal tract as a midsagittal cut of a 3D version, i.e., using the same radius function, wall impedance, glottal flow, and radiation losses as in 3D, which leads to strong discrepancies in the resulting vocal tract transfer functions. In this work, a four-step methodology is proposed to match the behavior of 2D simulations with that of 3D vocal tracts with circular cross-sections. First, the 2D vocal tract profile is modified to tune the formant locations. Second, the 2D wall impedance is adjusted to fit the formant bandwidths. Third, the 2D glottal flow is scaled to recover 3D pressure levels. Fourth and last, the 2D radiation model is tuned to match the 3D model following an optimization process. The procedure is tested for vowels /a/, /i/, and /u/ and the obtained results are compared with those of a full 3D simulation, a conventional 2D approach, and a 1D chain matrix model.
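The 1D chain matrix technique used as a baseline above can be illustrated with its simplest case: a uniform lossless tube closed at the glottis and open at the lips, for which the chain-matrix product collapses to the closed-form transfer function 1/cos(kL), with resonances at the odd quarter-wave frequencies (2n-1)c/4L. Tube length and sound speed below are typical textbook values, not the paper's data.

```python
import numpy as np

# Hedged sketch of the 1-D chain-matrix baseline for a uniform lossless tube,
# closed at the glottis and open at the lips. For a uniform tube the product
# of segment chain matrices reduces to the closed form H(f) = 1/|cos(kL)|,
# so the first formant sits at c/4L. Values are assumptions for illustration.

c, L = 350.0, 0.175                     # sound speed (m/s), tract length (m)
freqs = np.linspace(100.0, 1000.0, 9001)
k = 2 * np.pi * freqs / c               # acoustic wavenumber
H = 1.0 / np.abs(np.cos(k * L))         # uniform-tube transfer function
f1 = freqs[np.argmax(H)]                # first formant = first peak of H
print(f"first formant: {f1:.0f} Hz")    # quarter-wave resonance, c/4L = 500 Hz
```

A real chain-matrix model multiplies one 2x2 matrix per tube section of the area function, which is what lets the same machinery handle non-uniform tracts for /a/, /i/, and /u/.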
Ding, Jiarui; Condon, Anne; Shah, Sohrab P
2018-05-21
Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.
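The key property the abstract highlights, a parametric mapping that can place new cells into an existing embedding without refitting, can be illustrated with a deliberately simplified stand-in. scvis itself learns a probabilistic neural-network map; here a linear PCA projection plays that role, and the data are synthetic.

```python
import numpy as np

# Hedged sketch: a *parametric* embedding (here plain PCA, standing in for
# scvis's probabilistic neural mapping) can embed new cells with the already
# learned map, without re-optimizing. Data shapes are hypothetical.

def fit_parametric_embedding(X, n_components=2):
    """Learn a linear map (mean, components) from a cells-by-genes matrix X."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)  # principal axes
    return mean, vt[:n_components]

def embed(X_new, mean, components):
    """Apply the learned map to new cells; no refitting needed."""
    return (X_new - mean) @ components.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))            # hypothetical 100 cells x 50 genes
mean, comps = fit_parametric_embedding(X)
Y = embed(X, mean, comps)                 # 2-D coordinates of training cells
Y_new = embed(rng.normal(size=(5, 50)), mean, comps)  # new cells, same map
print(Y.shape, Y_new.shape)               # (100, 2) (5, 2)
```

A nonparametric method such as standard t-SNE lacks this property: adding a cell means re-running the whole optimization, which is exactly the limitation the parametric design addresses.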
NASA Astrophysics Data System (ADS)
Po, Hoi Chun; Zhou, Qi
2015-08-01
Bosons have a natural instinct to condense at zero temperature. It is a long-standing challenge to create a high-dimensional quantum liquid that does not exhibit long-range order at the ground state, as either extreme experimental parameters or sophisticated designs of microscopic Hamiltonians are required for suppressing the condensation. Here we show that synthetic gauge fields for ultracold atoms, using either the Raman scheme or shaken lattices, provide physicists a simple and practical scheme to produce a two-dimensional algebraic quantum liquid at the ground state. This quantum liquid arises at a critical Lifshitz point, where a two-dimensional quartic dispersion emerges in the momentum space, and many fundamental properties of two-dimensional bosons are changed in its proximity. Such an ideal simulator of the quantum Lifshitz model allows experimentalists to directly visualize and explore the deconfinement transition of topological excitations, an intriguing phenomenon that is difficult to access in other systems.
The importance of spatial ability and mental models in learning anatomy
NASA Astrophysics Data System (ADS)
Chatterjee, Allison K.
As a foundational course in medical education, gross anatomy serves to orient medical and veterinary students to the complex three-dimensional nature of the structures within the body. Understanding such spatial relationships is both fundamental and crucial for achievement in gross anatomy courses, and is essential for success as a practicing professional. Many things contribute to learning spatial relationships; this project focuses on a few key elements: (1) the type of multimedia resources, particularly computer-aided instructional (CAI) resources, medical students used to study and learn; (2) the influence of spatial ability on medical and veterinary students' gross anatomy grades and their mental models; and (3) how medical and veterinary students think about anatomy and describe the features of their mental models to represent what they know about anatomical structures. The use of computer-aided instruction (CAI) by gross anatomy students at Indiana University School of Medicine (IUSM) was assessed through a questionnaire distributed to the regional centers of the IUSM. Students reported using internet browsing, PowerPoint presentation software, and email on a daily basis to study gross anatomy. This study reveals that first-year medical students at the IUSM make limited use of CAI to study gross anatomy. Such studies emphasize the importance of examining students' use of CAI to study gross anatomy prior to development and integration of electronic media into the curriculum and they may be important in future decisions regarding the development of alternative learning resources. In order to determine how students think about anatomical relationships and describe the features of their mental models, personal interviews were conducted with select students based on students' ROT scores.
Five typologies of the characteristics of students' mental models were identified and described: spatial thinking, kinesthetic approach, identification of anatomical structures, problem solving strategies, and study methods. Students with different levels of spatial ability visualize and think about anatomy in qualitatively different ways, which is reflected in the features of their mental models. Low spatial ability students thought about and used two-dimensional images from the textbook. They possessed basic two-dimensional models of anatomical structures; they placed emphasis on diagrams and drawings in their studies; and they re-read anatomical problems many times before answering. High spatial ability students thought fully in three dimensions and imagined rotation and movement of the structures; they made use of many types of images and text as they studied and solved problems. They possessed elaborate three-dimensional models of anatomical structures which they were able to manipulate to solve problems; and they integrated diagrams, drawings, and written text in their studies. Middle spatial ability students were a mix between low and high spatial ability students. They imagined two-dimensional images popping out of the flat paper to become more three-dimensional, but still relied on drawings and diagrams. Additionally, high spatial ability students used a higher proportion of anatomical terminology than low or middle spatial ability students. This provides additional support to the premise that high spatial students' mental models are a complex mixture of imagistic representations and propositional representations that incorporate correct anatomical terminology. Low spatial ability students focused on the function of structures and ways to group information primarily for the purpose of recall.
This supports the theory that low spatial students' mental models will be characterized more by imagistic representations that are general in nature. (Abstract shortened by UMI.)
Efficient parallel implementation of active appearance model fitting algorithm on GPU.
Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
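The pixel-level data parallelism described above works because each element of the AAM texture residual is independent of the others. A hedged CPU sketch of that structure, with NumPy vectorization standing in for the one-thread-per-pixel CUDA mapping and a hypothetical texture size:

```python
import numpy as np

# Hedged sketch of the fine-grained design idea: the per-pixel texture
# residual in AAM fitting is embarrassingly parallel, so each pixel can map
# to one GPU thread. NumPy vectorization plays that role here on the CPU.
# The texture resolution is a hypothetical placeholder.

def texture_error(model_texture, sampled_texture):
    """Per-pixel residual between the model texture and the image sample.
    Every element is independent, so all pixels can be computed in parallel."""
    return sampled_texture - model_texture

n_pixels = 10_000                        # hypothetical texture size
model = np.random.rand(n_pixels)         # synthesized model texture
sampled = np.random.rand(n_pixels)       # texture sampled from the image
residual = texture_error(model, sampled)
sse = float(residual @ residual)         # fitting cost driven by the residual
print(residual.shape, sse >= 0.0)
```

In the GPU version the subsequent reduction of the residual into the parameter update is the only step that requires inter-thread communication, which is why it dominates the kernel design.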
Thermoelastic damping in microrings with circular cross-section
NASA Astrophysics Data System (ADS)
Li, Pu; Fang, Yuming; Zhang, Jianrun
2016-01-01
Predicting thermoelastic damping (TED) is crucial in the design of high Q micro-resonators. Microrings are often critical components in many micro-resonators. Some analytical models for TED in microrings have already been developed in the past. However, the previous works are limited to microrings with rectangular cross-section. The temperature field in the rectangular cross-section is one-dimensional. This paper deals with TED in microrings with circular cross-section. The temperature field in the circular cross-section is two-dimensional. This paper first presents a 2-D analytical model for TED in microrings with circular cross-section. Only the two-dimensional heat conduction in the circular cross-section is considered. The heat conduction along the circumferential direction of the microring is neglected in the 2-D model. Then the 2-D model has been extended to cover the circumferential heat conduction, and a 3-D analytical model for TED has been developed. The analytical results from the present 2-D and 3-D models show good agreement with the numerical results of the FEM model. The limitations of the present 2-D analytical model are assessed.
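For the one-dimensional rectangular case that the paper extends, the classical Zener estimate of TED is a compact formula that can be evaluated directly. This is only that textbook baseline, not the paper's 2-D/3-D circular-cross-section model, and the material values below are typical silicon numbers taken as assumptions.

```python
import math

# Classical Zener estimate of thermoelastic damping for a beam with a
# rectangular cross-section (the 1-D temperature-field case noted above).
# The paper's 2-D/3-D circular-cross-section models go beyond this sketch.
# Material constants are typical-silicon assumptions, not the paper's data.

def zener_Q_inverse(omega, h, E=169e9, alpha=2.6e-6, T0=300.0,
                    rho=2330.0, cp=700.0, k_th=150.0):
    """Q^-1 = Delta_E * (omega*tau) / (1 + (omega*tau)^2)."""
    chi = k_th / (rho * cp)              # thermal diffusivity, m^2/s
    tau = h**2 / (math.pi**2 * chi)      # relaxation time of first thermal mode
    delta_e = E * alpha**2 * T0 / (rho * cp)   # relaxation strength
    x = omega * tau
    return delta_e * x / (1.0 + x**2)

# Hypothetical example: 100 kHz resonator, 10-micron-thick beam.
q_inv = zener_Q_inverse(omega=2 * math.pi * 1.0e5, h=10e-6)
print(f"Q^-1 ~ {q_inv:.2e}")
```

Damping peaks when the mechanical period matches the thermal relaxation time (omega·tau = 1), which is the regime the more detailed 2-D and 3-D models must resolve for circular cross-sections.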
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.
Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin
We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
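The FANS pipeline described above can be sketched end to end on synthetic data: estimate each feature's class-conditional marginal densities, replace the feature by its estimated log density ratio, then fit penalized logistic regression on the augmented features. The kernel estimators, bandwidths, data, and the absence of sample splitting below are all simplifying assumptions relative to the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

# Hedged sketch of the FANS idea on synthetic data. The paper's exact density
# estimators, sample-splitting scheme, and tuning are omitted.

rng = np.random.default_rng(1)
n, p = 400, 5
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n) > 0).astype(int)

def fans_transform(X_train, y_train, X_eval):
    """Replace each feature by its estimated log marginal density ratio."""
    Z = np.empty_like(X_eval)
    eps = 1e-12                           # guard against log(0) in the tails
    for j in range(X_train.shape[1]):
        f0 = gaussian_kde(X_train[y_train == 0, j])   # class-0 marginal
        f1 = gaussian_kde(X_train[y_train == 1, j])   # class-1 marginal
        Z[:, j] = np.log(f1(X_eval[:, j]) + eps) - np.log(f0(X_eval[:, j]) + eps)
    return Z

Z = fans_transform(X, y, X)               # augmented features
clf = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(Z, y)
print("training accuracy:", clf.score(Z, y))
```

The L1 penalty supplies the "global simplicity" (feature selection), while the univariate density ratios supply the "local complexity" that lets the linear model draw a nonlinear boundary in the original feature space.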
Nomura, Tsutomu; Ushio, Munetaka; Kondo, Kenji; Yamasoba, Tatsuya
2015-11-01
The purpose of this research is to determine the cause of nasal perforation symptoms and to predict post-operative function after nasal perforation repair surgery. A realistic three-dimensional (3D) model of the nose with a septal perforation was reconstructed using a computed tomography (CT) scan from a patient with a nasal septal defect. The numerical simulation was carried out using ANSYS CFX V13.0. Pre- and post-operative models were compared by their velocity, pressure gradient (PG), wall shear (WS), shear strain rate (SSR) and turbulence kinetic energy in three planes. In the post-operative state, the crossflows had disappeared, and streamlines bound for the olfactory cleft area had appeared. After surgery, almost all high-shear stress areas had disappeared compared with the pre-operative model. In conclusion, the effects of surgery to correct nasal septal perforation were evaluated using a three-dimensional airflow evaluation. Following the surgery, crossflows disappeared, and WS, PG and SSR decreased. High WS, PG and SSR were suspected as causes of nasal perforation symptoms.
Studies on Manfred Eigen's model for the self-organization of information processing.
Ebeling, W; Feistel, R
2018-05-01
In 1971, Manfred Eigen extended the principles of Darwinian evolution to chemical processes, from catalytic networks to the emergence of information processing at the molecular level, leading to the emergence of life. In this paper, we investigate some very general characteristics of this scenario, such as the valuation process of phenotypic traits in a high-dimensional fitness landscape, the effect of spatial compartmentation on the valuation, and the self-organized transition from structural to symbolic genetic information of replicating chain molecules. In the first part, we perform an analysis of typical dynamical properties of continuous dynamical models of evolutionary processes. In particular, we study the mapping of genotype to continuous phenotype spaces following the ideas of Wright and Conrad. We investigate typical features of a Schrödinger-like dynamics, the consequences of the high dimensionality, the leading role of saddle points, and Conrad's extra-dimensional bypass. In the last part, we discuss in brief the valuation of compartment models and the self-organized emergence of molecular symbols at the beginning of life.
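Eigen's replication-mutation dynamics referenced above have a compact quasispecies formulation: the stationary population is the leading eigenvector of the operator Q·diag(f), where f holds replication rates and Q the mutation probabilities. A hedged sketch on an assumed single-peak landscape over 4-bit genotypes:

```python
import numpy as np
from itertools import product

# Hedged sketch of Eigen's quasispecies model: replication with fitness f_j
# and per-bit mutation rate mu. The stationary population is the dominant
# eigenvector of W = Q @ diag(f), found here by power iteration. The
# single-peak fitness landscape and all parameters are assumptions.

nu, mu = 4, 0.05                              # sequence length, mutation rate
genotypes = np.array(list(product([0, 1], repeat=nu)))
f = np.ones(len(genotypes))
f[0] = 10.0                                   # master sequence 0000 is fittest

ham = (genotypes[:, None, :] != genotypes[None, :, :]).sum(-1)  # Hamming distances
Q = (mu ** ham) * ((1 - mu) ** (nu - ham))    # P(parent j mutates into i)
W = Q * f                                     # replication-mutation operator

x = np.full(len(genotypes), 1.0 / len(genotypes))
for _ in range(500):                          # power iteration -> quasispecies
    x = W @ x
    x /= x.sum()
print(f"master frequency: {x[0]:.3f}")        # a mutant cloud, not a pure species
```

Raising mu toward the error threshold makes the master's frequency collapse into the mutant cloud, the delocalization phenomenon central to Eigen's information-theoretic argument.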
Verification and Validation of a Three-Dimensional Generalized Composite Material Model
NASA Technical Reports Server (NTRS)
Hoffarth, Canio; Harrington, Joseph; Subramaniam, D. Rajan; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther
2014-01-01
A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aly, A.; Avramova, Maria; Ivanov, Kostadin
To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated against calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.
A two-state hysteresis model from high-dimensional friction
Biswas, Saurabh; Chatterjee, Anindya
2015-01-01
In prior work (Biswas & Chatterjee 2014 Proc. R. Soc. A 470, 20130817 (doi:10.1098/rspa.2013.0817)), we developed a six-state hysteresis model from a high-dimensional frictional system. Here, we use a more intuitively appealing frictional system that resembles one studied earlier by Iwan. The basis functions now have simple analytical description. The number of states required decreases further, from six to the theoretical minimum of two. The number of fitted parameters is reduced by an order of magnitude, to just six. An explicit and faster numerical solution method is developed. Parameter fitting to match different specified hysteresis loops is demonstrated. In summary, a new two-state model of hysteresis is presented that is ready for practical implementation. Essential Matlab code is provided. PMID:26587279
Protein Simulation Data in the Relational Model.
Simms, Andrew M; Daggett, Valerie
2012-10-01
High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
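A dimensional ("star schema") design of the kind described above pairs one central fact table of measurements with surrounding dimension tables. The sketch below is a hedged illustration only: table and column names are hypothetical, and SQLite stands in for the SQL Server implementation the paper uses.

```python
import sqlite3

# Hedged illustration of a star-schema layout for simulation data: a fact
# table of per-frame measurements joined to dimension tables. All table and
# column names are hypothetical; SQLite stands in for SQL Server.

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_simulation (
    sim_id INTEGER PRIMARY KEY, protein TEXT, temperature_k REAL);
CREATE TABLE dim_residue (
    residue_id INTEGER PRIMARY KEY, name TEXT, chain TEXT);
CREATE TABLE fact_frame_measure (
    sim_id INTEGER REFERENCES dim_simulation(sim_id),
    residue_id INTEGER REFERENCES dim_residue(residue_id),
    frame INTEGER, rmsd REAL, sasa REAL);
""")
con.execute("INSERT INTO dim_simulation VALUES (1, '1UBQ', 298.0)")
con.execute("INSERT INTO dim_residue VALUES (1, 'MET', 'A')")
con.execute("INSERT INTO fact_frame_measure VALUES (1, 1, 0, 0.0, 120.5)")
con.execute("INSERT INTO fact_frame_measure VALUES (1, 1, 1, 0.8, 118.2)")

# A typical analytic query: average RMSD per simulation.
row = con.execute("""
    SELECT s.protein, AVG(f.rmsd) FROM fact_frame_measure f
    JOIN dim_simulation s USING (sim_id) GROUP BY s.protein
""").fetchone()
print(row)   # ('1UBQ', 0.4)
```

Keeping descriptive attributes in narrow dimension tables while the fact table holds only keys and numeric measures is what makes such warehouses scale to large trajectory collections.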
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra K.; Sze, Nein Dak; Shia, Run-Lie; Rodriguez, Jose M.; Heisey, Curtis
1991-01-01
The AER two-dimensional chemistry-transport model is used to study the effect of supersonic and subsonic aircraft operation in the 2010 atmosphere on stratospheric ozone (O3). The results show that: (1) the calculated O3 response is smaller in the 2010 atmosphere compared to previous calculations performed in the 1980 atmosphere; (2) with the emissions provided, the calculated decrease in O3 column is less than 1 percent; and (3) the effect of model grid resolution on O3 response is small provided that the physics is not modified.
Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
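The integer-variable case can be sketched as a deterministic cellular automaton in the spirit of the models above: a Nagel-Schreckenberg update without the random braking step, on a ring. Parameters below are illustrative assumptions.

```python
import numpy as np

# Minimal deterministic CA traffic model (Nagel-Schreckenberg without random
# braking): integer positions on a ring, update v <- min(v + 1, gap, v_max),
# then move. Parameters are illustrative assumptions.

def step(pos, vel, v_max, L):
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L   # empty cells to the car ahead
    vel = np.minimum(np.minimum(vel + 1, gaps), v_max)
    return (pos + vel) % L, vel

L, n_cars, v_max = 100, 10, 5                 # low density: free flow expected
pos = np.arange(0, L, L // n_cars)            # evenly spaced cars
vel = np.zeros(n_cars, dtype=int)
for _ in range(50):
    pos, vel = step(pos, vel, v_max, L)
print("mean speed:", vel.mean())              # all cars reach v_max: 5.0
```

At densities above 1/(v_max + 1) the gap constraint binds and jams appear, giving the high-density phase; below it every car cruises at v_max, which is the transition the abstract refers to.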
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable non-smooth topographies. For this three-dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.
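The baseline the variable-topography construction generalizes is the flat-bottom case, where the DtN map is simply a Fourier multiplier with symbol |k| tanh(|k|h). A hedged one-dimensional sketch, applying the symbol by FFT and checking it against the exact action on a single cosine mode:

```python
import numpy as np

# Flat-bottom baseline of the DtN operator (the variable-topography case in
# the paper is much harder): over depth h, the DtN map is a Fourier
# multiplier with symbol |k| tanh(|k| h). Grid and mode are assumptions.

def dtn_flat(phi, dx, h):
    n = phi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)     # angular wavenumbers
    symbol = np.abs(k) * np.tanh(np.abs(k) * h)
    return np.fft.ifft(symbol * np.fft.fft(phi)).real

L, n, h = 2 * np.pi, 256, 1.0
x = np.linspace(0, L, n, endpoint=False)
k0 = 3.0
phi = np.cos(k0 * x)                            # surface potential, one mode
exact = k0 * np.tanh(k0 * h) * np.cos(k0 * x)   # known action of the DtN map
err = np.max(np.abs(dtn_flat(phi, L / n, h) - exact))
print("max error:", err)                        # spectral accuracy expected
```

The tanh factor encodes the depth dependence of the wave dispersion; over a non-smooth topography the symbol is no longer diagonal in Fourier space, which is why the paper needs the matrix decomposition and Galerkin treatment.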
NASA Astrophysics Data System (ADS)
Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen
2015-04-01
This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. 
The numerical experience indicates that, although the direct model Boltzmann equation solver in phase space can be computationally expensive, the present GKUAs for kinetic model Boltzmann equations, in conjunction with currently available high-performance parallel computing power, can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.
A Three-Dimensional Kinematic and Kinetic Study of the College-Level Female Softball Swing
Milanovich, Monica; Nesbit, Steven M.
2014-01-01
This paper quantifies and discusses the three-dimensional kinematic and kinetic characteristics of the female softball swing as performed by fourteen female collegiate amateur subjects. The analyses were performed using a three-dimensional computer model. The model was driven kinematically from subject swings data that were recorded with a multi-camera motion analysis system. Each subject used two distinct bats with significantly different inertial properties. Model output included bat trajectories, subject/bat interaction forces and torques, work, and power. These data formed the basis for a detailed analysis and description of fundamental swing kinematic and kinetic quantities. The analyses revealed that the softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. In addition, the potential effects of bat properties on swing mechanics are discussed. The paths of the hands and the centre-of-curvature of the bat relative to the horizontal plane appear to be important trajectory characteristics of the swing. Descriptions of the swing mechanics and practical implications are offered based upon these findings. Key Points The female softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. The paths of the grip point, bat centre-of-curvature, CG, and COP are complex yet reveal consistent patterns among subjects indicating that these patterns are fundamental components of the swing. The most important mechanical quantity relative to generating bat speed is the total work applied to the bat from the batter. Computer modeling of the softball swing is a viable means for study of the fundamental mechanics of the swing motion, the interactions between the batter and the bat, and the energy transfers between the two. PMID:24570623
Modeling change from large-scale high-dimensional spatio-temporal array data
NASA Astrophysics Data System (ADS)
Lu, Meng; Pebesma, Edzer
2014-05-01
The massive data that come from Earth observation satellites and other sensors provide significant information for modeling global change. At the same time, the high dimensionality of the data has brought challenges in data acquisition, management, effective querying and processing. In addition, the output of earth system modeling tends to be data intensive and needs methodologies for storage, validation, analysis and visualization, e.g. as maps. An important proportion of earth system observations and simulated data can be represented as multi-dimensional array data, which have received increasing attention in big data management and spatio-temporal analysis. Case studies will be developed in the natural sciences, such as climate change, hydrological modeling and sediment dynamics, for which addressing big data problems is necessary. Multi-dimensional array-based database management and analytics systems such as Rasdaman, SciDB, and R will be applied to these cases. From these studies we hope to learn the strengths and weaknesses of these systems, how they might work together, and how the semantics of array operations differ, through addressing the problems associated with big data. Research questions include: • How can we reduce dimensions spatially, temporally, or thematically? • How can we extend existing GIS functions to work on multidimensional arrays? • How can we combine data sets of different dimensionality or different resolutions? • Can map algebra be extended to an intelligible array algebra? • What are effective semantics for array programming of dynamic data driven applications? • In which sense are space and time special, as dimensions, compared to other properties? • How can we make the analysis of multi-spectral, multi-temporal and multi-sensor earth observation data easy?
Lie, Octavian V; van Mierlo, Pieter
2017-01-01
The visual interpretation of intracranial EEG (iEEG) is the standard method used in complex epilepsy surgery cases to map the regions of seizure onset targeted for resection. Still, visual iEEG analysis is labor-intensive and biased due to interpreter dependency. Multivariate parametric functional connectivity measures using adaptive autoregressive (AR) modeling of the iEEG signals based on the Kalman filter algorithm have been used successfully to localize the electrographic seizure onsets. Due to their high computational cost, these methods have been applied to a limited number of iEEG time-series (<60). The aim of this study was to test two Kalman filter implementations, a well-known multivariate adaptive AR model (Arnold et al. 1998) and a simplified, computationally efficient derivation of it, for their potential application to connectivity analysis of high-dimensional (up to 192 channels) iEEG data. When used on simulated seizures together with a multivariate connectivity estimator, the partial directed coherence, the two AR models were compared for their ability to reconstitute the designed seizure signal connections from noisy data. Next, focal seizures from iEEG recordings (73-113 channels) in three patients rendered seizure-free after surgery were mapped with the outdegree, a graph-theory index of outward directed connectivity. Simulation results indicated high levels of mapping accuracy for the two models in the presence of low-to-moderate noise cross-correlation. Accordingly, both AR models correctly mapped the real seizure onset to the resection volume. This study supports the possibility of conducting fully data-driven multivariate connectivity estimations on high-dimensional iEEG datasets using the Kalman filter approach.
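The adaptive AR idea, tracking time-varying autoregressive coefficients with a Kalman filter whose state is the coefficient vector, can be illustrated for a single channel. This is a simplified sketch of the general approach, not either of the two implementations tested in the study; the noise variances q and r are assumed tuning parameters:

```python
import numpy as np

def adaptive_ar_kalman(x, p, q=1e-4, r=1.0):
    """Track time-varying AR(p) coefficients of a signal x with a Kalman
    filter, using a random-walk model for the coefficients. q is the
    process-noise variance (adaptation speed), r the observation-noise
    variance. Returns the coefficient trajectory (n, p)."""
    n = len(x)
    a = np.zeros(p)                      # current AR coefficient estimate
    P = np.eye(p)                        # coefficient covariance
    coeffs = np.zeros((n, p))
    for t in range(p, n):
        h = x[t - p:t][::-1]             # regressor: past p samples
        P = P + q * np.eye(p)            # predict (random-walk coefficients)
        s = h @ P @ h + r                # innovation variance
        k = P @ h / s                    # Kalman gain
        a = a + k * (x[t] - h @ a)       # correct with prediction error
        P = P - np.outer(k, h) @ P       # covariance update
        coeffs[t] = a
    return coeffs
```

On a stationary AR(1) signal the estimate settles near the true coefficient; on nonstationary data the random-walk model lets the estimate follow the change at a rate set by q.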
Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals
Schmittfull, Marcel; Vlah, Zvonimir
2016-11-28
Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
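The switch between Fourier and position space that the method exploits rests on the convolution theorem: a convolution that costs O(N^2) evaluated directly becomes an O(N log N) product of transforms. A toy illustration on a one-dimensional periodic grid (not the cosmological loop integrals themselves):

```python
import numpy as np

def convolution_via_fft(f, g, dx):
    """Evaluate the periodic convolution (f*g)(x) = sum_y f(y) g(x-y) dx
    on a uniform grid via the FFT instead of a direct O(N^2) sum.
    By the convolution theorem, convolution in position space is a
    pointwise product in Fourier space."""
    F = np.fft.fft(f)
    G = np.fft.fft(g)
    return np.fft.ifft(F * G).real * dx
```
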
Restoration of dimensional reduction in the random-field Ising model at five dimensions
NASA Astrophysics Data System (ADS)
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D -2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D =5 . We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤D <6 to their values in the pure Ising model at D -2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
A Localized Ensemble Kalman Smoother
NASA Technical Reports Server (NTRS)
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
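The analysis step of an ensemble Kalman method with covariance localization, the standard device for taming sampling noise in high-dimensional settings, can be sketched as follows. This is a generic perturbed-observation filter update given for illustration, not the smoother developed in the paper:

```python
import numpy as np

def enkf_update(E, y, H, R, loc):
    """One analysis step of an ensemble Kalman filter with covariance
    localization: the sample covariance is tapered elementwise by `loc`
    to suppress spurious long-range correlations. E is the ensemble
    (n_state, n_ens), y the observation vector, H the (linear)
    observation operator, R the observation-error covariance."""
    n = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)           # ensemble anomalies
    P = loc * (A @ A.T) / (n - 1)                   # localized sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    rng = np.random.default_rng(0)
    for i in range(n):                              # perturbed observations
        yi = y + rng.multivariate_normal(np.zeros(len(y)), R)
        E[:, i] = E[:, i] + K @ (yi - H @ E[:, i])
    return E
```

The gain pulls each member toward the (perturbed) observation in proportion to the localized prior uncertainty, so the posterior ensemble mean lands between the prior mean and the data.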
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples.
Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies
NASA Astrophysics Data System (ADS)
Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu
2015-09-01
Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from the planet Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
The use of the bi-factor model to test the uni-dimensionality of a battery of reasoning tests.
Primi, Ricardo; Rocha da Silva, Marjorie Cristina; Rodrigues, Priscila; Muniz, Monalisa; Almeida, Leandro S
2013-02-01
The Battery of Reasoning Tests 5 (BPR-5) aims to assess the reasoning ability of individuals, using sub-tests with different formats and contents that require basic processes of inductive and deductive reasoning for their resolution. The BPR has three sequential forms: BPR-5i (for children from first to fifth grade), BPR-5 - Form A (for children from sixth to eighth grade) and BPR-5 - form B (for high school and undergraduate students). The present study analysed 412 questionnaires concerning BPR-5i, 603 questionnaires concerning BPR-5 - Form A and 1748 questionnaires concerning BPR-5 - Form B. The main goal was to test the uni-dimensionality of the battery and its tests in relation to items using the bi-factor model. Results suggest that the g factor loadings (extracted by the uni-dimensional model) do not change when the data is adjusted for a more flexible multi-factor model (bi-factor model). A general reasoning factor underlying different contents items is supported.
Calculation of flow about posts and powerhead model. [space shuttle main engine
NASA Technical Reports Server (NTRS)
Anderson, P. G.; Farmer, R. C.
1985-01-01
A three-dimensional analysis of the non-uniform flow around the liquid oxygen (LOX) posts in the Space Shuttle Main Engine (SSME) powerhead was performed to determine possible factors contributing to the failure of the posts. Also performed was a three-dimensional numerical fluid flow analysis of the high pressure fuel turbopump (HPFTP) exhaust system, consisting of the turnaround duct (TAD), the two-duct hot gas manifold (HGM), and the Version B transfer ducts. The analysis was conducted in the following manner: (1) modeling the flow around a single post and small clusters (2 to 10) of posts; (2) modeling the velocity field in the cross plane; and (3) modeling the entire flow region with a three-dimensional network-type model. Shear stress functions which permit viscous analysis without requiring excessive numbers of computational grid points were developed. These wall functions, laminar and turbulent, have been compared to standard Blasius solutions and are directly applicable to the cylinder-in-cross-flow class of problems to which the LOX post problem belongs.
Structural zeros in high-dimensional data with applications to microbiome studies.
Kaul, Abhishek; Davidov, Ori; Peddada, Shyamal D
2017-07-01
This paper is motivated by the recent interest in the analysis of high-dimensional microbiome data. A key feature of these data is the presence of "structural zeros" which are microbes missing from an observation vector due to an underlying biological process and not due to error in measurement. Typical notions of missingness are unable to model these structural zeros. We define a general framework which allows for structural zeros in the model and propose methods of estimating sparse high-dimensional covariance and precision matrices under this setup. We establish error bounds in the spectral and Frobenius norms for the proposed estimators and empirically verify them with a simulation study. The proposed methodology is illustrated by applying it to the global gut microbiome data of Yatsunenko and others (2012. Human gut microbiome viewed across age and geography. Nature 486, 222-227). Using our methodology we classify subjects according to the geographical location on the basis of their gut microbiome.
NASA Astrophysics Data System (ADS)
Parvin, Salma; Sultana, Aysha
2017-06-01
The influence of High Intensity Focused Ultrasound (HIFU) on an obstacle in a blood vessel is studied numerically. A three-dimensional acoustics-thermal-fluid coupling model is employed to compute the temperature field around the obstacle. The model construction is based on the linear Westervelt and conjugate heat transfer equations for the obstacle in the blood vessel. The system of equations is solved using the Finite Element Method (FEM). We found from this three-dimensional numerical study that the rate of heat transfer from the obstacle increases and that both convective cooling and acoustic streaming can considerably change the temperature field.
NASA Astrophysics Data System (ADS)
Banerjee, Pritha; Kumari, Tripty; Sarkar, Subir Kumar
2018-02-01
This paper presents the 2-D analytical modeling of a front high-K gate stack triple-material gate Schottky Barrier Silicon-On-Nothing MOSFET. Using the two-dimensional Poisson's equation and the popular parabolic potential approximation, expressions for the surface potential as well as the electric field have been derived. In addition, the response of the proposed device to aggressive downscaling, that is, its extent of immunity to the different short-channel effects, has also been investigated in this work. The analytical results obtained have been validated against simulation results from ATLAS, a two-dimensional device simulator from SILVACO.
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
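The Gaussian building block behind such optimal cue-integration models is the precision-weighted average of the two unimodal estimates. This is the generic textbook rule, not the paper's multidimensional word-space model:

```python
def integrate_cues(x_a, var_a, x_v, var_v):
    """Optimal (Bayesian) combination of two independent Gaussian cues:
    the fused estimate is the precision-weighted average of the auditory
    estimate x_a and the visual estimate x_v, and the posterior variance
    is the inverse of the summed precisions."""
    precision = 1.0 / var_a + 1.0 / var_v      # precisions add
    w_a = (1.0 / var_a) / precision            # reliability weight on audition
    mean = w_a * x_a + (1.0 - w_a) * x_v       # fused estimate
    return mean, 1.0 / precision               # posterior mean and variance
```

With equal reliabilities the cues are averaged; as one cue's noise grows (e.g. high auditory noise), its weight shrinks toward zero, which is the regime the inverse-effectiveness argument concerns.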
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high-dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high-dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in the case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high-dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems.
1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data: Theory, Applications and Software (with a CD: data, software, guides). EPFL Press, 2009. 2. Kanevski M. Spatial Predictions of Soil Contamination Using General Regression Neural Networks. Systems Research and Information Systems, Volume 8, Number 4, 1999. 3. Robert S., Foresti L., Kanevski M. Spatial prediction of monthly wind speeds in complex terrain with adaptive general regression neural networks. International Journal of Climatology, 33, pp. 1793-1804, 2013.
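The Nadaraya-Watson estimator with per-feature bandwidths that underlies adaptive GRNN can be written compactly. This is an illustrative implementation of the estimator itself, not code from the cited software:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """General Regression Neural Network prediction: a Nadaraya-Watson
    kernel estimator with an anisotropic Gaussian kernel, one bandwidth
    per input feature (`sigma` is a vector). A very large bandwidth
    effectively switches a feature off, which is what makes the adaptive
    variant usable for feature-relevance selection."""
    preds = []
    for x in X_query:
        d2 = (((X_train - x) / sigma) ** 2).sum(axis=1)  # per-feature scaled distances
        w = np.exp(-0.5 * d2)                            # Gaussian kernel weights
        preds.append(w @ y_train / w.sum())              # weighted average of targets
    return np.array(preds)
```

In the adaptive setting the bandwidth vector is tuned by cross-validation (e.g. leave-one-out, as in the study), and the fitted bandwidths rank the features by relevance.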
Power Enhancement in High Dimensional Cross-Sectional Tests
Fan, Jianqing; Liao, Yuan; Yao, Jiawei
2016-01-01
We propose a novel technique to boost the power of testing a high-dimensional vector H0 : θ = 0 against sparse alternatives where the null hypothesis is violated only by a couple of components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing the factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
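The screening construction can be sketched as follows. The threshold and normalization below follow the general recipe described in the abstract; the specific constants are chosen for illustration and are not taken from the paper:

```python
import numpy as np

def power_enhanced_stat(theta_hat, se, n):
    """Sketch of the power enhancement idea: a screening component J0
    that equals zero with high probability under the null (no
    standardized estimate exceeds a slowly diverging threshold) but
    diverges under sparse alternatives, added to an asymptotically
    pivotal quadratic statistic J1."""
    p = len(theta_hat)
    t = theta_hat / se                                    # standardized estimates
    delta = np.sqrt(np.log(p)) * np.log(np.log(n))        # screening threshold
    J0 = np.sqrt(p) * np.sum(t**2 * (np.abs(t) > delta))  # power enhancement term
    J1 = (np.sum(t**2) - p) / np.sqrt(2.0 * p)            # normalized Wald-type part
    return J1 + J0
```

Under the null, no component survives the screen, so the statistic reduces to the pivotal part; a single large component makes J0, and hence the test statistic, diverge.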
A discrete fracture model for two-phase flow in fractured porous media
NASA Astrophysics Data System (ADS)
Gläser, Dennis; Helmig, Rainer; Flemisch, Bernd; Class, Holger
2017-12-01
A discrete fracture model on the basis of a cell-centered finite volume scheme with multi-point flux approximation (MPFA) is presented. The fractures are included in a d-dimensional computational domain as (d - 1)-dimensional entities living on the element facets, which requires the grid to have the element facets aligned with the fracture geometries. However, the approach overcomes the problem of small cells inside the fractures when compared to equi-dimensional models. The system of equations considered is solved on both the matrix and the fracture domain, where on the former the fractures are treated as interior boundaries and on the latter the exchange term between fracture and matrix appears as an additional source/sink. This exchange term is represented by the matrix-fracture fluxes, computed as functions of the unknowns in both domains by applying adequate modifications to the MPFA scheme. The method is applicable to both low-permeability and highly conductive fractures. The quality of the results obtained by the discrete fracture model is studied by comparison to an equi-dimensional discretization on a simple geometry for both single- and two-phase flow. For the case of two-phase flow in a highly conductive fracture, good agreement in the solution and in the matrix-fracture transfer fluxes could be observed, while for a low-permeability fracture the discrepancies were more pronounced. The method is then applied to two-phase flow through a realistic fracture network in two and three dimensions.
High Resolution Global Topography of Eros from NEAR Imaging and LIDAR Data
NASA Technical Reports Server (NTRS)
Gaskell, Robert W.; Konopliv, A.; Barnouin-Jha, O.; Scheeres, D.
2006-01-01
Principal Data Products: Ensemble of L-maps from SPC, Spacecraft state, Asteroid pole and rotation. Secondary Products: Global topography model, inertia tensor, gravity. Composite high resolution topography. Three dimensional image maps.
Fujisaki, K; Yokota, H; Nakatsuchi, H; Yamagata, Y; Nishikawa, T; Udagawa, T; Makinouchi, A
2010-01-01
A three-dimensional (3D) internal structure observation system based on serial sectioning was developed from an ultrasonic elliptical vibration cutting device and an optical microscope combined with a high-precision positioning device. For bearing steel samples, the cutting device created mirrored surfaces suitable for optical metallography, even for long cutting distances during serial sectioning of these ferrous materials. Serial sectioning progressed automatically by means of numerical control. The system was used to observe inclusions in steel materials on a scale of several tens of micrometers. Three specimens containing inclusions were prepared from bearing steels. These inclusions could be detected as two-dimensional (2D) sectional images with resolution better than 1 μm. A 3D model of each inclusion was reconstructed from the 2D serial images. The microscopic 3D models had sharp edges and complicated surfaces.
NASA Technical Reports Server (NTRS)
Goodwin, T. J.; Coate-Li, L.; Linnehan, R. M.; Hammond, T. G.
2000-01-01
This study established two- and three-dimensional renal proximal tubular cell cultures of the endangered species bowhead whale (Balaena mysticetus), developed SV40-transfected cultures, and cloned the 61-amino acid open reading frame for the metallothionein protein, the primary binding site for heavy metal contamination in mammals. Microgravity research, modulations in mechanical culture conditions (modeled microgravity), and shear stress have spawned innovative approaches to understanding the dynamics of cellular interactions, gene expression, and differentiation in several cellular systems. These investigations have led to the creation of ex vivo tissue models capable of serving as physiological research analogs for three-dimensional cellular interactions. These models are enabling studies in immune function, tissue modeling for basic research, and neoplasia. Three-dimensional cellular models emulate aspects of in vivo cellular architecture and physiology and may facilitate environmental toxicological studies aimed at elucidating biological functions and responses at the cellular level. Marine mammals occupy a significant ecological niche (72% of the Earth's surface is water) in terms of the potential for information on bioaccumulation and transport of terrestrial and marine environmental toxins in high-order vertebrates. Few ex vivo models of marine mammal physiology exist in vitro to accomplish the aforementioned studies. Techniques developed in this investigation, based on previous tissue modeling successes, may serve to facilitate similar research in other marine mammals.
Guo, Qi; Shen, Shu-Ting
2016-04-29
There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities. Therefore, study of the application of reproducing kernels would be advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with existing methods. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozanov, V. B., E-mail: rozanov@sci.lebedev.ru; Vergunova, G. A., E-mail: verg@sci.lebedev.ru
2015-11-15
The possibility of analyzing and interpreting the reported experiments with the megajoule National Ignition Facility (NIF) laser on the compression of capsules in indirect-irradiation targets by means of the one-dimensional RADIAN program in spherical geometry has been studied. The problem of the energy balance in a target and the determination of the laser energy that should be used in the spherical model of the target has been considered. The effects of pulses differing in energy and time profile (“low-foot” and “high-foot” regimes) have been analyzed. The parameters of the compression of targets with a high-density carbon ablator have been obtained. The results of the simulations are in satisfactory agreement with the measurements and correspond to the range of the observed parameters. The set of compared results can be expanded, in particular, for a more detailed determination of the parameters of a target near the maximum compression of the capsule. The physical justification for the one-dimensional description is that the last stage of the compression of the capsule must be close to a one-dimensional process. One-dimensional simulation of the compression of the capsule can be useful in establishing the boundary beyond which two-dimensional and three-dimensional simulation should be used.
Computing a Comprehensible Model for Spam Filtering
NASA Astrophysics Data System (ADS)
Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael
In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than those computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
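The four evaluation measures named above are standard; as a concrete reference, they can be computed from binary predictions as follows (a minimal sketch; labels 1 = spam, 0 = legitimate):

```python
def classification_measures(y_true, y_pred):
    """Precision, recall, F1 and accuracy from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, f1, accuracy
```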
Capsule modeling of high foot implosion experiments on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, D. S.; Kritcher, A. L.; Milovich, J. L.
2017-03-21
This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.
Population Coding of Visual Space: Modeling
Lehky, Sidney R.; Sereno, Anne B.
2011-01-01
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
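The pipeline described above (unlabeled Gaussian RF responses, then multidimensional scaling) can be sketched with classical MDS; the grid size, RF count, diameter and dispersion below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# 25 stimulus locations on a grid, encoded by 200 Gaussian receptive fields.
stimuli = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
rf_centers = rng.uniform(-1.0, 5.0, size=(200, 2))   # broadly dispersed centers
rf_sigma = 3.0                                       # large RF diameter

# Unlabeled population response: one firing-rate vector per stimulus.
d2 = ((stimuli[:, None, :] - rf_centers[None, :, :]) ** 2).sum(axis=-1)
responses = np.exp(-d2 / (2.0 * rf_sigma**2))

# Classical MDS on inter-stimulus response distances: relative positions are
# recovered without any RF labels (intrinsic coding).
sq = ((responses[:, None, :] - responses[None, :, :]) ** 2).sum(axis=-1)
n = len(sq)
J = np.eye(n) - 1.0 / n                  # centering matrix
B = -0.5 * J @ sq @ J
evals, evecs = np.linalg.eigh(B)
embedding = evecs[:, -2:] * np.sqrt(np.maximum(evals[-2:], 0.0))
```

With large, widely dispersed RFs the 2D embedding correlates strongly with the physical grid; shrinking `rf_sigma` or clustering `rf_centers` near one point degrades the recovered geometry, mirroring the effects reported above.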
Mapping morphological shape as a high-dimensional functional curve
Fu, Guifang; Huang, Mian; Bo, Wenhao; Hao, Han; Wu, Rongling
2018-01-01
Detecting how genes regulate biological shape has become a multidisciplinary research interest because of its wide application in many disciplines. Despite its fundamental importance, the challenges of accurately extracting information from an image, statistically modeling the high-dimensional shape and meticulously locating shape quantitative trait loci (QTL) affect the progress of this research. In this article, we propose a novel integrated framework that incorporates shape analysis, statistical curve modeling and genetic mapping to detect significant QTLs regulating variation of biological shape traits. After quantifying morphological shape via a radius centroid contour approach, each shape, as a phenotype, was characterized as a high-dimensional curve, varying as angle θ runs clockwise with the first point starting from angle zero. We then modeled the dynamic trajectories of three mean curves and variation patterns as functions of θ. Our framework led to the detection of a few significant QTLs regulating the variation of leaf shape collected from a natural population of poplar, Populus szechuanica var. tibetica. This population, distributed at altitudes 2000–4500 m above sea level, is an evolutionarily important plant species. This is the first work in the quantitative genetic shape mapping area that emphasizes a sense of ‘function’ instead of decomposing the shape into a few discrete principal components, as the majority of shape studies do. PMID:28062411
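A radius-centroid contour in the spirit described here reduces a closed outline to a curve r(θ); the angular binning below is an illustrative sketch, not the authors' exact extraction procedure:

```python
import math

def radius_centroid_contour(boundary_points, n_angles=360):
    """Sketch of a radius-centroid contour: distance from the shape centroid
    to the boundary as the angle theta runs through one full turn."""
    cx = sum(x for x, _ in boundary_points) / len(boundary_points)
    cy = sum(y for _, y in boundary_points) / len(boundary_points)
    curve = [0.0] * n_angles
    for x, y in boundary_points:
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(theta / (2 * math.pi) * n_angles) % n_angles
        # Keep the outermost boundary point falling in each angular bin.
        curve[k] = max(curve[k], math.hypot(x - cx, y - cy))
    return curve
```

The resulting vector of radii is the high-dimensional functional phenotype; each leaf outline becomes one such curve before the curve modeling and QTL mapping steps.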
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and its comparison with existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
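The criterion's core ingredient is the AUC itself, which for binary outcomes equals the probability that a randomly chosen positive case outscores a randomly chosen negative one (ties counted one half). A minimal sketch; the cross-validation wrapper, which averages this quantity over held-out folds for each candidate tuning parameter, is omitted:

```python
def auc(scores, labels):
    """AUC as the rank-based probability P(score_pos > score_neg),
    with ties contributing 1/2. Assumes both classes are present."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Tuning parameter selection then amounts to choosing the penalty level whose cross-validated mean of this value is largest.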
On the Limiting Markov Process of Energy Exchanges in a Rarely Interacting Ball-Piston Gas
NASA Astrophysics Data System (ADS)
Bálint, Péter; Gilbert, Thomas; Nándori, Péter; Szász, Domokos; Tóth, Imre Péter
2017-02-01
We analyse the process of energy exchanges generated by the elastic collisions between a point-particle, confined to a two-dimensional cell with convex boundaries, and a `piston', i.e. a line-segment, which moves back and forth along a one-dimensional interval partially intersecting the cell. This model can be considered as the elementary building block of a spatially extended high-dimensional billiard modeling heat transport in a class of hybrid materials exhibiting the kinetics of gases and spatial structure of solids. Using heuristic arguments and numerical analysis, we argue that, in a regime of rare interactions, the billiard process converges to a Markov jump process for the energy exchanges and obtain the expression of its generator.
Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Li, Jianyi; Huang, Wenhua
2016-01-01
Hepatic segment anatomy is difficult for medical students to learn. Three-dimensional visualization (3DV) is a useful tool in anatomy teaching, but current models do not capture haptic qualities. However, three-dimensional printing (3DP) can produce highly accurate complex physical models. Therefore, in this study we aimed to develop a novel 3DP hepatic segment model and compare the teaching effectiveness of a 3DV model, a 3DP model, and a traditional anatomical atlas. A healthy candidate (female, 50 years old) was recruited and scanned with computed tomography. After three-dimensional (3D) reconstruction, the computed 3D images of the hepatic structures were obtained. The parenchyma model was divided into 8 hepatic segments to produce the 3DV hepatic segment model. The computed 3DP model was designed by removing the surrounding parenchyma and leaving the segmental partitions. Then, 6 experts evaluated the 3DV and 3DP models using a 5-point Likert scale. A randomized controlled trial was conducted to evaluate the educational effectiveness of these models compared with that of the traditional anatomical atlas. The 3DP model successfully displayed the hepatic segment structures with partitions. All experts agreed or strongly agreed that the 3D models provided good realism for anatomical instruction, with no significant differences between the 3DV and 3DP models on any index (p > 0.05). Additionally, the teaching results show that the 3DV and 3DP models were significantly better than the traditional anatomical atlas in the first and second examinations (p < 0.05). Between the first and second examinations, only the traditional method group had a significant decline (p < 0.05). A novel 3DP hepatic segment model was successfully developed. Both the 3DV and 3DP models could improve anatomy teaching significantly. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Compound activity prediction using models of binding pockets or ligand properties in 3D
Kufareva, Irina; Chen, Yu-Chen; Ilatovskiy, Andrey V.; Abagyan, Ruben
2014-01-01
Transient interactions of endogenous and exogenous small molecules with flexible binding sites in proteins or macromolecular assemblies play a critical role in all biological processes. Current advances in high-resolution protein structure determination, database development, and docking methodology make it possible to design three-dimensional models for prediction of such interactions with increasing accuracy and specificity. Using the data collected in the Pocketome encyclopedia, we here provide an overview of two types of three-dimensional ligand activity models, pocket-based and ligand property-based, for two important classes of proteins, nuclear and G-protein coupled receptors. For half the targets, the pocket models discriminate actives from property-matched decoys with acceptable accuracy (the area under the ROC curve, AUC, exceeding 84%) and for about one fifth of the targets with high accuracy (AUC > 95%). The 3D ligand property field models achieved AUC above 95% in half of the cases. The high-performance models can already become a basis for activity predictions for new chemicals. Family-wide benchmarking of the models highlights strengths of both approaches and helps identify their inherent bottlenecks and challenges. PMID:23116466
Multivariate time series analysis of neuroscience data: some challenges and opportunities.
Pourahmadi, Mohsen; Noorbaloochi, Siamak
2016-04-01
Neuroimaging data may be viewed as high-dimensional multivariate time series, and analyzed using techniques from regression analysis, time series analysis and spatiotemporal analysis. We discuss issues related to data quality, model specification, estimation, interpretation, dimensionality and causality. Some recent research areas addressing aspects of some recurring challenges are introduced. Copyright © 2015 Elsevier Ltd. All rights reserved.
Surrogate-Based Optimization of Biogeochemical Transport Models
NASA Astrophysics Data System (ADS)
Prieß, Malte; Slawig, Thomas
2010-09-01
First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations by using a surrogate model replacing the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete step size in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
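The grid-coarsening idea can be illustrated on a toy problem; the decay equation and step counts below are stand-ins for the NPZD model and its discretization, not the actual scheme:

```python
import math

def simulate_decay(n_steps, rate=1.0, t_final=1.0):
    """Toy 'high-fidelity model': explicit Euler for du/dt = -rate*u, u(0) = 1.
    Coarsening the time grid (fewer steps) stands in for the coarser surrogate;
    explicit stability requires rate * dt < 2, mirroring the step-size
    constraint noted in the abstract."""
    dt = t_final / n_steps
    u = 1.0
    for _ in range(n_steps):
        u += dt * (-rate * u)
    return u

fine = simulate_decay(1000)   # expensive, accurate model run
coarse = simulate_decay(10)   # 100x cheaper surrogate, still stable and close
```

A surrogate-based optimizer such as Aggressive Space Mapping would query `coarse`-style runs most of the time and fall back to `fine` runs only to realign the surrogate with the high-fidelity model.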
Entanglement Entropy of the Six-Dimensional Horowitz-Strominger Black Hole
NASA Astrophysics Data System (ADS)
Li, Huai-Fan; Zhang, Sheng-Li; Wu, Yue-Qin; Ren, Zhao
By using the entanglement entropy method, the statistical entropy of the Bose and Fermi fields in a thin film is calculated and the Bekenstein-Hawking entropy of the six-dimensional Horowitz-Strominger black hole is obtained. Here, the Bose and Fermi fields, which lie outside the horizon, are entangled with the quantum states of the six-dimensional Horowitz-Strominger black hole. The divergence of the brick-wall model is avoided, without any cutoff, by the new equation of state density obtained with the generalized uncertainty principle. The calculation implies that the high-density quantum states near the event horizon are strongly correlated with the quantum states in the black hole. The black hole entropy is a quantum effect: it is an intrinsic characteristic of space-time. The ultraviolet cutoff in the brick-wall model is unreasonable. The generalized uncertainty principle should be considered in the high-energy quantum field near the event horizon. Using the quantum statistical method, we directly calculate the partition function of the Bose and Fermi fields in the background of the six-dimensional black hole. The difficulty in solving the wave equations of various particles is overcome.
The initial value problem in Lagrangian drift kinetic theory
NASA Astrophysics Data System (ADS)
Burby, J. W.
2016-06-01
Existing high-order variational drift kinetic theories contain unphysical rapidly varying modes that are not seen at low orders. These unphysical modes, which may be rapidly oscillating, damped or growing, are ushered in by a failure of conventional high-order drift kinetic theory to preserve the structure of its parent model's initial value problem. In short, the (infinite dimensional) system phase space is unphysically enlarged in conventional high-order variational drift kinetic theory. I present an alternative, `renormalized' variational approach to drift kinetic theory that manifestly respects the parent model's initial value problem. The basic philosophy underlying this alternate approach is that high-order drift kinetic theory ought to be derived by truncating the all-orders system phase-space Lagrangian instead of the usual `field particle' Lagrangian. For the sake of clarity, this story is told first through the lens of a finite-dimensional toy model of high-order variational drift kinetics; the analogous full-on drift kinetic story is discussed subsequently. The renormalized drift kinetic system, while variational and just as formally accurate as conventional formulations, does not support the troublesome rapidly varying modes.
High-frame-rate full-vocal-tract 3D dynamic speech imaging.
Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P
2017-04-01
To achieve high temporal frame rate, high spatial resolution and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm³. Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution and high nominal frame rate to provide dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
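The partial separability model at the heart of the reconstruction treats the space-time (Casorati) matrix as low-rank; a noise-free sketch with synthetic data follows (sizes and rank are illustrative, and the real method additionally fits the factors from sparse (k,t)-space samples under total variation constraints):

```python
import numpy as np

rng = np.random.default_rng(2)
nx, nt, rank = 64, 120, 3   # voxels, time frames, model order

# Partial separability: the Casorati matrix is a product of a few spatial
# maps and temporal functions, C = U @ V.
U = rng.standard_normal((nx, rank))
V = rng.standard_normal((rank, nt))
C = U @ V

# A rank-r truncated SVD recovers the dynamics exactly in this noise-free case.
u, s, vt = np.linalg.svd(C, full_matrices=False)
C_hat = (u[:, :rank] * s[:rank]) @ vt[:rank]
```

Because only `rank` spatial maps and temporal functions must be determined, far fewer samples than a frame-by-frame reconstruction needs can suffice, which is what makes the high nominal frame rate attainable.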
Ultra-high-frequency chaos in a time-delay electronic device with band-limited feedback.
Illing, Lucas; Gauthier, Daniel J
2006-09-01
We report an experimental study of ultra-high-frequency chaotic dynamics generated in a delay-dynamical electronic device. It consists of a transistor-based nonlinearity, commercially available amplifiers, and a transmission line for feedback. The feedback is band-limited, allowing tuning of the characteristic time scales of both the periodic and high-dimensional chaotic oscillations that can be generated with the device. As an example, periodic oscillations ranging from 48 to 913 MHz are demonstrated. We develop a model and use it to compare the experimentally observed Hopf bifurcation of the steady state to existing theory [Illing and Gauthier, Physica D 210, 180 (2005)]. We find good quantitative agreement between the predicted and the measured bifurcation threshold, bifurcation type and oscillation frequency. Numerical integration of the model yields quasiperiodic and high-dimensional chaotic solutions (Lyapunov dimension approximately 13), which match qualitatively the observed device dynamics.
Bioprinted three dimensional human tissues for toxicology and disease modeling.
Nguyen, Deborah G; Pentoney, Stephen L
2017-03-01
The high rate of attrition among clinical-stage therapies, due largely to an inability to predict human toxicity and/or efficacy, underscores the need for in vitro models that better recapitulate in vivo human biology. In much the same way that additive manufacturing has revolutionized the production of solid objects, three-dimensional (3D) bioprinting is enabling the automated production of more architecturally and functionally accurate in vitro tissue culture models. Here, we provide an overview of the most commonly used bioprinting approaches and how they are being used to generate complex in vitro tissues for use in toxicology and disease modeling research. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parsimonious description for predicting high-dimensional dynamics
Hirata, Yoshito; Takeuchi, Tomoya; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki
2015-01-01
When we observe a system, we often cannot observe all its variables and may have only a limited set of measurements. Under such circumstances, delay coordinates, vectors made of successive measurements, are useful for reconstructing the states of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, in practice there is a limitation because the calculation for higher-dimensional delay coordinates becomes more expensive. Here, we propose a parsimonious description of virtually infinite-dimensional delay coordinates by evaluating their distances with exponentially decaying weights. This description enables us to predict the future values of the measurements faster, because we can reuse the calculated distances, and more accurately, because the description naturally reduces the bias of the classical delay coordinates toward the stable directions. We demonstrate the proposed method with toy models of the atmosphere and real datasets related to renewable energy. PMID:26510518
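One way to realize the recursive distance reuse described above is the following sketch (the recursion form and the decay value are illustrative assumptions, not the authors' exact scheme):

```python
def weighted_delay_distances(x, decay=0.9):
    """Squared distances between (virtually) infinite-dimensional delay
    coordinates with exponentially decaying weights, built recursively so each
    entry reuses the previously computed one:
        d2[i][j] = (x[i] - x[j])**2 + decay**2 * d2[i-1][j-1],
    with d2 taken as 0 before the series starts."""
    n = len(x)
    d2 = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            prev = d2[i - 1][j - 1] if i > 0 and j > 0 else 0.0
            d2[i][j] = (x[i] - x[j]) ** 2 + decay**2 * prev
    return d2
```

Because each new time step only adds one term to the recursion, nearest-neighbor prediction with these weighted coordinates avoids recomputing full delay-vector distances from scratch.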
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
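The upward bias of the leading sample eigenvalue is easy to reproduce numerically; the dimensions and spike size below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, spike = 1000, 2000, 10.0   # high-dimensional regime, c = p/n = 2

# Population covariance: identity plus one spiked eigenvalue.
scales = np.ones(p)
scales[0] = np.sqrt(spike)
X = rng.standard_normal((n, p)) * scales

# Leading sample eigenvalue via the n x n Gram matrix (same nonzero spectrum
# as the p x p sample covariance, but cheaper to diagonalize).
gram = X @ X.T / n
top_eig = np.linalg.eigvalsh(gram)[-1]

# Spiked-covariance theory predicts top_eig near spike * (1 + c/(spike - 1)),
# i.e. noticeably above the true spike; shrinkage estimators of the S-POET
# type are designed to correct exactly this kind of bias.
```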
Asymptotics of empirical eigenstructure for high dimensional spiked covariance
Wang, Weichen
2017-01-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies. PMID:28835726
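The eigenvalue bias that S-POET is built to correct is easy to reproduce in a toy simulation: when dimensionality is comparable to or larger than the sample size, the leading sample eigenvalue overshoots the population spike. The dimensions, spike size, and data-generating model below are illustrative assumptions, not those analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, spike = 400, 100, 50.0        # illustrative sizes only

# population covariance: identity plus one spike of size `spike` along u
u = np.zeros(p)
u[0] = 1.0
factor = rng.standard_normal((n, 1))             # common spiked direction
X = rng.standard_normal((n, p)) + np.sqrt(spike) * factor * u

S = X.T @ X / n                                  # sample covariance
top = np.linalg.eigvalsh(S)[-1]                  # leading empirical eigenvalue

# with p/n = 4 the leading sample eigenvalue sits well above the noise bulk
# (whose edge is near (1 + sqrt(p/n))**2 = 9) and is typically inflated
# above the true value spike + 1; this inflation is the bias in question
assert top > (1 + np.sqrt(p / n)) ** 2
```

Comparing `top` with `spike + 1` over repeated draws makes the systematic upward bias, and hence the motivation for shrinkage corrections, visible.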
NASA Astrophysics Data System (ADS)
Kobor, J. S.; O'Connor, M. D.; Sherwood, M. N.
2013-12-01
Effective floodplain management and restoration requires a detailed understanding of floodplain processes not readily achieved using standard one-dimensional hydraulic modeling approaches. The application of more advanced numerical models is, however, often limited by the relatively high costs of acquiring the high-resolution topographic data needed for model development using traditional surveying methods. The increasing availability of LiDAR data has the potential to significantly reduce these costs and thus facilitate application of multi-dimensional hydraulic models where budget constraints would have otherwise prohibited their use. The accuracy and suitability of LiDAR data for supporting model development can vary widely depending on the resolution of channel and floodplain features, the data collection density, and the degree of vegetation canopy interference among other factors. More work is needed to develop guidelines for evaluating LiDAR accuracy and determining when and how best the data can be used to support numerical modeling activities. Here we present two recent case studies where LiDAR datasets were used to support floodplain and sediment transport modeling efforts. One LiDAR dataset was collected with a relatively low point density and used to study a small stream channel in coastal Marin County and a second dataset was collected with a higher point density and applied to a larger stream channel in western Sonoma County. Traditional topographic surveying was performed at both sites which provided a quantitative means of evaluating the LiDAR accuracy. We found that with the lower point density dataset, the accuracy of the LiDAR varied significantly between the active stream channel and floodplain whereas the accuracy across the channel/floodplain interface was more uniform with the higher density dataset. Accuracy also varied widely as a function of the density of the riparian vegetation canopy. 
We found that coupled 1- and 2-dimensional hydraulic models, whereby the active channel is simulated in one dimension and the floodplain in two dimensions, provided the best means of utilizing the LiDAR data to evaluate existing conditions and develop alternative flood hazard mitigation and habitat restoration strategies. Such an approach recognizes the limitations of the LiDAR data within active channel areas with dense riparian cover and is cost-effective in that it allows field survey efforts to focus primarily on characterizing active stream channel areas. The multi-dimensional modeling approach also conforms well to the physical realities of the stream system, whereby in-channel flows can generally be well described as a one-dimensional flow problem and floodplain flows are often characterized by multiple, poorly understood flowpaths. The multi-dimensional modeling approach has the additional advantages of allowing for accurate simulation of the effects of hydraulic structures using well-tested one-dimensional formulae and minimizing the computational burden of the models by not requiring the small spatial resolutions necessary to resolve the geometries of small stream channels in two dimensions.
COSMO-PAFOG: Three-dimensional fog forecasting with the high-resolution COSMO-model
NASA Astrophysics Data System (ADS)
Hacker, Maike; Bott, Andreas
2017-04-01
The presence of fog can have critical impact on shipping, aviation and road traffic increasing the risk of serious accidents. Besides these negative impacts of fog, in arid regions fog is explored as a supplementary source of water for human settlements. Thus the improvement of fog forecasts holds immense operational value. The aim of this study is the development of an efficient three-dimensional numerical fog forecast model based on a mesoscale weather prediction model for the application in the Namib region. The microphysical parametrization of the one-dimensional fog forecast model PAFOG (PArameterized FOG) is implemented in the three-dimensional nonhydrostatic mesoscale weather prediction model COSMO (COnsortium for Small-scale MOdeling) developed and maintained by the German Meteorological Service. Cloud water droplets are introduced in COSMO as prognostic variables, thus allowing a detailed description of droplet sedimentation. Furthermore, a visibility parametrization depending on the liquid water content and the droplet number concentration is implemented. The resulting fog forecast model COSMO-PAFOG is run with kilometer-scale horizontal resolution. In vertical direction, we use logarithmically equidistant layers with 45 of 80 layers in total located below 2000 m. Model results are compared to satellite observations and synoptic observations of the German Meteorological Service for a domain in the west of Germany, before the model is adapted to the geographical and climatological conditions in the Namib desert. COSMO-PAFOG is able to represent the horizontal structure of fog patches reasonably well. Especially small fog patches typical of radiation fog can be simulated in agreement with observations. Ground observations of temperature are also reproduced. Simulations without the PAFOG microphysics yield unrealistically high liquid water contents. This in turn reduces the radiative cooling of the ground, thus inhibiting nocturnal temperature decrease. 
The simulated visibility agrees with observations; however, simulated fog tends to dissolve earlier than observed. For the investigated fog events, it is concluded that the three-dimensional fog forecast model COSMO-PAFOG is able to simulate them in accordance with observations. After this successful application of COSMO-PAFOG to fog events in the west of Germany, model simulations will be performed for coastal desert fog in the Namib region.
Chaos in plasma simulation and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, C.; Newman, D.E.; Sprott, J.C.
1993-09-01
We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents, and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Rood, Richard B.; Jackman, Charles H.; Weaver, Clark J.
1994-01-01
Two-dimensional (zonally averaged) photochemical models are commonly used for calculations of ozone changes due to various perturbations. These include calculating the ozone change expected as a result of change in the lower stratospheric composition due to the exhaust of a fleet of supersonic aircraft flying in the lower stratosphere. However, zonal asymmetries are anticipated to be important to this sort of calculation. The aircraft are expected to be restricted from flying over land at supersonic speed due to sonic booms, thus the pollutant source will not be zonally symmetric. There is loss of pollutant through stratosphere/troposphere exchange, but these processes are spatially and temporally inhomogeneous. Asymmetry in the pollutant distribution contributes to the uncertainty in the ozone changes calculated with two dimensional models. Pollutant distributions for integrations of at least 1 year of continuous pollutant emissions along flight corridors are calculated using a three dimensional chemistry and transport model. These distributions indicate the importance of asymmetry in the pollutant distributions to evaluation of the impact of stratospheric aircraft on ozone. The implications of such pollutant asymmetries to assessment calculations are discussed, considering both homogeneous and heterogeneous reactions.
Luan, Xiaoli; Chen, Qiang; Liu, Fei
2014-09-01
This article presents a new scheme to design a full matrix controller for high dimensional multivariable processes based on equivalent transfer functions (ETFs). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full matrix controller is designed utilizing existing PI tuning rules. The newly proposed ETF model can more accurately represent the original processes. Furthermore, the full matrix centralized controller design method proposed in this paper is applicable to high dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and achieved performance. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
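The link between the open-loop inverse and the "equivalent" gain seen by each loop can be sketched at steady state: when all other loops are closed, loop i sees the gain 1/[G(0)^-1]_ii, which the relative gain array (RGA) ties back to the open-loop gain. This is a standard steady-state version of the relationship, not the article's exact derivation, with an assumed 3x3 gain matrix:

```python
import numpy as np

# steady-state gain matrix of an assumed 3x3 multivariable process
G0 = np.array([[2.0, 0.5, 0.3],
               [0.4, 1.5, 0.2],
               [0.1, 0.6, 1.0]])

Ginv = np.linalg.inv(G0)

rga = G0 * Ginv.T                       # Bristol's RGA, element-wise product
effective_gain = 1.0 / np.diag(Ginv)    # gain of loop i with other loops closed

# consistency check: open-loop gain divided by its relative gain
# recovers the same effective ("equivalent") loop gain
assert np.allclose(np.diag(G0) / np.diag(rga), effective_gain)
```

Tuning a PI controller against `effective_gain` instead of the raw diagonal of `G0` is the steady-state intuition behind ETF-based multivariable designs.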
NASA Astrophysics Data System (ADS)
Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.
2017-12-01
At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in the Weather Research and Forecasting (WRF) model uses a pure algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015-2017. We focus on selected cases when physical phenomena of significance for wind energy applications such as mountain waves, topographic wakes, and gap flows were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in zonal, meridional, and vertical direction, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. 
The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.
Eben N. Broadbent; Angélica M. Almeyda Zambrano; Gregory P. Asner; Christopher B. Field; Brad E. Rosenheim; Ty Kennedy-Bowdoin; David E. Knapp; David Burke; Christian Giardina; Susan Cordell
2014-01-01
We develop and validate a high-resolution three-dimensional model of light and air temperature for a tropical forest interior in Hawaii along an elevation gradient varying greatly in structure but maintaining a consistent species composition. Our microclimate models integrate high-resolution airborne waveform light detection and ranging data (LiDAR) and hyperspectral...
Hot Electrons Regain Coherence in Semiconducting Nanowires
NASA Astrophysics Data System (ADS)
Reiner, Jonathan; Nayak, Abhay Kumar; Avraham, Nurit; Norris, Andrew; Yan, Binghai; Fulga, Ion Cosma; Kang, Jung-Hyun; Karzig, Torsten; Shtrikman, Hadas; Beidenkopf, Haim
2017-04-01
The higher the energy of a particle is above equilibrium, the faster it relaxes because of the growing phase space of available electronic states it can interact with. In the relaxation process, phase coherence is lost, thus limiting high-energy quantum control and manipulation. In one-dimensional systems, high relaxation rates are expected to destabilize electronic quasiparticles. Here, we show that the decoherence induced by relaxation of hot electrons in one-dimensional semiconducting nanowires evolves nonmonotonically with energy such that above a certain threshold hot electrons regain stability with increasing energy. We directly observe this phenomenon by visualizing, for the first time, the interference patterns of the quasi-one-dimensional electrons using scanning tunneling microscopy. We visualize the phase coherence length of the one-dimensional electrons, as well as their phase coherence time, captured by crystallographic Fabry-Pérot resonators. A remarkable agreement with a theoretical model reveals that the nonmonotonic behavior is driven by the unique manner in which one-dimensional hot electrons interact with the cold electrons occupying the Fermi sea. This newly discovered relaxation profile suggests a high-energy regime for operating quantum applications that necessitate extended coherence or long thermalization times, and may stabilize electronic quasiparticles in one dimension.
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
Bayesian Exploratory Factor Analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
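A stripped-down version of such a random-walk synthesis can be sketched with an idealized impermeable cylinder standing in for the confocal-microscopy mesh; all parameters below are assumptions for illustration. It shows the effect the paper's simulations quantify: restriction by a membrane suppresses the apparent diffusion relative to free diffusion.

```python
import numpy as np

rng = np.random.default_rng(1)
n_walk, n_step, step = 2000, 200, 0.05
radius = 1.0                        # impermeable cylinder radius (arbitrary units)

pos = np.zeros((n_walk, 2))         # walkers in the plane transverse to the fiber
for _ in range(n_step):
    trial = pos + rng.normal(scale=step, size=pos.shape)
    inside = np.linalg.norm(trial, axis=1) <= radius
    pos[inside] = trial[inside]     # reject steps that would cross the membrane

msd_restricted = np.mean(np.sum(pos ** 2, axis=1))
msd_free = 2 * n_step * step ** 2   # 2D free-diffusion MSD for the same steps

# the membrane caps the mean squared displacement below the free value
assert msd_restricted < msd_free
```

In a full simulation the cylinder would be replaced by the marching-cubes mesh and the walkers' phases accumulated under a gradient waveform to produce synthetic MRI signals.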
Three-dimensional Monte Carlo model of pulsed-laser treatment of cutaneous vascular lesions
NASA Astrophysics Data System (ADS)
Milanič, Matija; Majaron, Boris
2011-12-01
We present a three-dimensional Monte Carlo model of optical transport in skin with a novel approach to treatment of side boundaries of the volume of interest. This represents an effective way to overcome the inherent limitations of ``escape'' and ``mirror'' boundary conditions and enables high-resolution modeling of skin inclusions with complex geometries and arbitrary irradiation patterns. The optical model correctly reproduces measured values of diffuse reflectance for normal skin. When coupled with a sophisticated model of thermal transport and tissue coagulation kinetics, it also reproduces realistic values of radiant exposure thresholds for epidermal injury and for photocoagulation of port wine stain blood vessels in various skin phototypes, with or without application of cryogen spray cooling.
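The core of a Monte Carlo photon-transport model can be sketched with a heavily simplified semi-infinite medium and isotropic scattering; the optical coefficients below are assumed, and none of the paper's boundary-condition machinery or three-dimensional geometry is reproduced. The sketch estimates diffuse reflectance, the quantity the authors use to validate their optical model against measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_a, mu_s = 0.1, 10.0              # assumed absorption/scattering coefficients [1/mm]
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
n_photon = 20000

reflected = 0
for _ in range(n_photon):
    z, cos_t = 0.0, 1.0             # photon launched downward into the tissue
    while True:
        z += cos_t * rng.exponential(1.0 / mu_t)   # free flight to next event
        if z < 0.0:                 # crossed the surface: counted as reflected
            reflected += 1
            break
        if rng.random() > albedo:   # absorbed
            break
        cos_t = rng.uniform(-1.0, 1.0)             # isotropic scattering

R_d = reflected / n_photon          # crude diffuse-reflectance estimate
assert 0.0 < R_d < 1.0
```

Realistic skin models add anisotropic (Henyey-Greenstein) scattering, layered optical properties, refractive-index mismatch at the surface, and, as in this paper, careful treatment of the lateral boundaries of the simulated volume.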
a Probabilistic Embedding Clustering Method for Urban Structure Detection
NASA Astrophysics Data System (ADS)
Lin, X.; Li, H.; Zhang, Y.; Gao, L.; Zhao, L.; Deng, M.
2017-09-01
Urban structure detection is a basic task in urban geography. Clustering is a core technology for detecting patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information such as human behaviour and social activity suffer from high dimensionality and high noise, and unfortunately state-of-the-art clustering methods do not handle the two problems concurrently. In this paper, a probabilistic embedding clustering method is proposed. First, we introduce a Probabilistic Embedding Model (PEM) to find latent features in high-dimensional urban sensing data by learning a probabilistic model. The latent features capture the essential patterns hidden in the high-dimensional data, while the probabilistic model reduces the uncertainty caused by high noise. Second, by tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, that is, communities with intensive interaction and communities playing the same roles in urban structure. Experiments on real-world data from Shanghai (China) confirmed that our method discovers both kinds of structure.
NASA Technical Reports Server (NTRS)
Povinelli, L. A.
1984-01-01
An assessment of several three dimensional inviscid turbine aerodynamic computer codes and loss models used at the NASA Lewis Research Center is presented. Five flow situations are examined, for which both experimental data and computational results are available. The five flows form a basis for the evaluation of the computational procedures. It was concluded that stator flows may be calculated with a high degree of accuracy, whereas rotor flow fields are less accurately determined. Exploitation of contouring, leaning, bowing, and sweeping will require a three dimensional viscous analysis technique.
Numerical model of spray combustion in a single cylinder diesel engine
NASA Astrophysics Data System (ADS)
Acampora, Luigi; Sequino, Luigi; Nigro, Giancarlo; Continillo, Gaetano; Vaglieco, Bianca Maria
2017-11-01
A numerical model is developed for predicting the pressure cycle from Intake Valve Closing (IVC) to the Exhaust Valve Opening (EVO) events. The model is based on a modified one-dimensional (1D) Musculus and Kattke spray model, coupled with a zero-dimensional (0D) non-adiabatic transient Fed-Batch reactor model. The 1D spray model provides an estimate of the fuel evaporation rate during the injection phenomenon, as a function of time. The 0D Fed-Batch reactor model describes combustion. The main goal of adopting a 0D (perfectly stirred) model is to use highly detailed reaction mechanisms for Diesel fuel combustion in air, while keeping the computational cost as low as possible. The proposed model is validated by comparing its predictions with experimental data of pressure obtained from an optical single cylinder Diesel engine.
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. 
It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
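The history-matching iteration described above is typically driven by an implausibility measure: the standardized distance between an observation and the emulator's prediction, with emulator, observation, and model-discrepancy variances combined in the denominator. A minimal one-dimensional sketch with a stand-in "emulator" and assumed variances (none of these numbers come from the Arabidopsis study):

```python
import numpy as np

def implausibility(z, mu_f, var_emul, var_obs, var_disc):
    """History-matching implausibility: standardized distance between the
    observation z and the emulator mean mu_f, combining emulator, observation
    and model-discrepancy variances (all assumed known here)."""
    return np.abs(z - mu_f) / np.sqrt(var_emul + var_obs + var_disc)

# toy 1D parameter search: keep values whose implausibility is below 3
theta = np.linspace(0, 10, 101)
mu_f = theta ** 2                 # stand-in for a fast emulator mean
I = implausibility(z=25.0, mu_f=mu_f, var_emul=4.0, var_obs=1.0, var_disc=4.0)
non_implausible = theta[I < 3.0]

assert non_implausible.size > 0
assert abs(non_implausible.mean() - 5.0) < 1.0   # survivors cluster near sqrt(25)
```

In the full methodology this cut is applied wave by wave over a 32-dimensional space, with the emulators refitted on each successively smaller non-implausible region.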
Dimensional control of die castings
NASA Astrophysics Data System (ADS)
Karve, Aniruddha Ajit
The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors--the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error i.e., dimensional variability and die allowance were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. 
The results of this study will contribute to enhancement of dimensional quality and lead time compression in the die casting industry, thus making it competitive with other net shape manufacturing processes.
Yang, Jing; Ye, Shu-jun; Wu, Ji-chun
2011-05-01
This paper studied the influence of bioclogging on the permeability of saturated porous media. Laboratory hydraulic tests were conducted in a two-dimensional C190 sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) to investigate growth of the mixed microorganisms (KB-1) and the influence of biofilm on the permeability of saturated porous media under nutrient-rich conditions. Biomass distributions in the water and on the sand in the cell were measured by protein analysis. The biofilm distribution on the sand was observed by confocal laser scanning microscopy. Permeability was measured by hydraulic tests. The biomass levels measured in water and on the sand increased with time, and were highest at the bottom of the cell, where the biofilm on the sand was thicker. The hydraulic tests demonstrated that, owing to biofilm growth, the permeability fell to on average 12% of its initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model predicted reductions in permeability of 2-5 orders of magnitude. Clement's model predicted permeabilities of 3%-98% of the initial value. Seki's model could not be applied in this study. In conclusion, biofilm growth can markedly decrease the permeability of two-dimensional saturated porous media, but the reduction was much less than that estimated under one-dimensional conditions. Additionally, for two-dimensional saturated porous media with rich nutrition, Seki's model could not be applied, Taylor's model predicted larger reductions, and the results of Clement's model were closest to those of the hydraulic tests.
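The Clement model referred to above is commonly stated as a power-law reduction of permeability with the fraction of pore space occupied by biofilm. A sketch of that commonly cited general form follows; the 19/6 exponent is the published general relation, not a value fitted in this study:

```python
def clement_relative_permeability(biofilm_fraction):
    """Clement-type macroscopic model (as commonly stated): relative
    permeability k/k0 falls as (1 - f)**(19/6), where f is the fraction of
    the initial pore space occupied by biofilm.  A sketch of the general
    relation, not the calibration used in the study above."""
    return (1.0 - biofilm_fraction) ** (19.0 / 6.0)

# a pore space one-third filled with biofilm keeps roughly 28% of its
# initial permeability under this relation
k_rel = clement_relative_permeability(1.0 / 3.0)
assert 0.2 < k_rel < 0.3
```

Because the model needs only a biofilm volume fraction, it maps naturally onto the protein-derived biomass distributions measured across the two-dimensional cell.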
Kontis, Angelo L.
1999-01-01
The seaward limit of the fresh ground-water system underlying Kings and Queens Counties on Long Island, N.Y., is at the freshwater-saltwater transition zone. This zone has been conceptualized in transient-state, three-dimensional models of the aquifer system as a sharp interface between freshwater and saltwater, and represented as a stationary, zero lateral-flow boundary. In this study, a pair of two-dimensional, four-layer ground-water flow models representing a generalized vertical section in Kings County and one in adjacent Queens County were developed to evaluate the validity of the boundary condition used in three-dimensional models of the aquifer system. The two-dimensional simulations used a model code that can simulate the movement of a sharp interface in response to transient stress. Sensitivity of interface movement to four factors was analyzed; these were (1) the method of simulating vertical leakage between freshwater and saltwater; (2) recharge at the normal rate, at 50-percent of the normal rate, and at zero for a prolonged (3-year) period; (3) high, medium, and low pumping rates; and (4) pumping from a hypothetical cluster of wells at two locations. Results indicate that the response of the interfaces to the magnitude and duration of pumping and the location of the hypothetical wells is probably sufficiently slow that the interfaces in three-dimensional models can reasonably be approximated as stationary, zero-lateral- flow boundaries.
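The sharp-interface idealization used in these models is classically captured by the Ghyben-Herzberg relation, which balances a column of freshwater against denser saltwater: the interface sits below sea level by about forty times the water-table head. A textbook sketch with standard density assumptions, not the report's calibrated model:

```python
def ghyben_herzberg_depth(head, rho_f=1000.0, rho_s=1025.0):
    """Ghyben-Herzberg estimate of the depth (below sea level) of a sharp
    freshwater-saltwater interface under a water-table head `head` above sea
    level.  With typical densities the ratio rho_f/(rho_s - rho_f) is 40, so
    each metre of head pushes the interface down about forty metres."""
    return rho_f / (rho_s - rho_f) * head

# one metre of freshwater head implies an interface ~40 m below sea level
assert ghyben_herzberg_depth(1.0) == 40.0
```

This 40:1 sensitivity is why pumping-induced head declines can, in principle, move the interface substantially, and why the report's finding that the simulated interfaces respond only slowly is what justifies treating them as stationary, zero-lateral-flow boundaries.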
NASA Technical Reports Server (NTRS)
Sohn, Kiho D.; Ip, Shek-Se P.
1988-01-01
Three-dimensional finite element models were generated and transferred into three-dimensional finite difference models to perform transient thermal analyses for the SSME high pressure fuel turbopump's first stage nozzles and rotor blades. STANCOOL was chosen to calculate the heat transfer characteristics (HTCs) around the airfoils, and endwall effects were included at the intersections of the airfoils and platforms for the steady-state boundary conditions. Free and forced convection due to rotation effects were also considered in hollow cores. Transient HTCs were calculated by taking ratios of the steady-state values based on the flow rates and fluid properties calculated at each time slice. Results are presented for both transient plots and three-dimensional color contour isotherm plots; they were also converted into universal files to be used for FEM stress analyses.
Characterization of Lifshitz transitions in topological nodal line semimetals
NASA Astrophysics Data System (ADS)
Jiang, Hui; Li, Linhu; Gong, Jiangbin; Chen, Shu
2018-04-01
We introduce a two-band model of three-dimensional nodal line semimetals (NLSMs), the Fermi surface of which at half-filling may form various one-dimensional configurations of different topology. We study the symmetries and "drumhead" surface states of the model, and find that the transitions between different configurations, namely, the Lifshitz transitions, can be identified solely by the number of gap-closing points on some high-symmetry planes in the Brillouin zone. A global phase diagram of this model is also obtained accordingly. We then investigate the effect of some extra terms analogous to a two-dimensional Rashba-type spin-orbit coupling. The introduced extra terms open a gap for the NLSMs and can be useful in engineering different topological insulating phases. We demonstrate that the behavior of surface Dirac cones in the resulting insulating system has a clear correspondence with the different configurations of the original nodal lines in the absence of the gap terms.
Khalil, Wael; EzEldeen, Mostafa; Van De Casteele, Elke; Shaheen, Eman; Sun, Yi; Shahbazian, Maryam; Olszewski, Raphael; Politis, Constantinus; Jacobs, Reinhilde
2016-03-01
Our aim was to determine the accuracy of 3-dimensional reconstructed models of teeth compared with the natural teeth by using 4 different 3-dimensional printers. This in vitro study was carried out using 2 intact, dry adult human mandibles, which were scanned with cone beam computed tomography. Premolars were selected for this study. Dimensional differences between natural teeth and the printed models were evaluated directly by using volumetric differences and indirectly through optical scanning. Analysis of variance, Pearson correlation, and Bland-Altman plots were applied for statistical analysis. Volumetric measurements from natural teeth and fabricated models, either by the direct method (the Archimedes principle) or by the indirect method (optical scanning), showed no statistical differences. The mean volume difference ranged between 3.1 mm³ (0.7%) and 4.4 mm³ (1.9%) for the direct measurement, and between -1.3 mm³ (-0.6%) and 11.9 mm³ (+5.9%) for the optical scan. A surface part comparison analysis showed that 90% of the values revealed a distance deviation within the interval 0 to 0.25 mm. Current results showed a high accuracy of all printed models of teeth compared with natural teeth. This outcome opens perspectives for clinical use of cost-effective 3-dimensional printed teeth for surgical procedures, such as tooth autotransplantation. Copyright © 2016 Elsevier Inc. All rights reserved.
A finite element approach for solution of the 3D Euler equations
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.
1986-01-01
Prediction of thermal deformations and stresses has prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended for three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.
NASA Technical Reports Server (NTRS)
Keil, J.
1985-01-01
Wind tunnel tests were conducted on airfoil models in order to study the flow separation phenomena occurring at high angles of attack. Pressure distributions on wings of different geometries were measured. The results show that for three-dimensional airfoils, planform layout and spanwise lift distribution play a role. Separation effects on airfoils of moderate aspect ratio are three-dimensional, and the flow domains separated from the airfoil must be treated three-dimensionally. The rolling-up of separated vortex layers increases in intensity and induction effect with angle of attack and shows strong nonlinearities. Boundary layer material moves perpendicularly to the flow direction due to the pressure gradients at the airfoil; this has a stabilizing effect. Separation starts earlier as profiles become more sharply pointed.
Dagdeviren, Omur E
2018-08-03
The effect of surface disorder, load, and velocity on friction between a single-asperity contact and a model surface is explored with one-dimensional and two-dimensional Prandtl-Tomlinson (PT) models. We show that there are fundamental physical differences between the predictions of the one-dimensional and two-dimensional models. The one-dimensional model predicts a monotonic increase in friction and energy dissipation with load, velocity, and surface disorder. However, the two-dimensional PT model, which is expected to approximate a tip-sample system more realistically, reveals a non-monotonic trend: friction is insensitive to surface disorder and roughness in the wearless friction regime. The two-dimensional model shows that surface disorder starts to dominate friction and energy dissipation only when the tip and the sample interact predominantly deep into the repulsive regime. Our numerical calculations indicate that tracking the minimum energy path and slip-stick motion are two competing effects that determine the load, velocity, and surface disorder dependence of friction. In the two-dimensional model, the single asperity can follow the minimum energy path in the wearless regime; with increasing load and sliding velocity, however, slip-stick movement dominates the dynamics and increases friction by impeding the tracing of the minimum energy path. In contrast, when the one-dimensional PT model is employed, the single asperity cannot escape to the minimum energy path because of its constrained motion, and it reveals only a trivial dependence of friction on load, velocity, and surface disorder. Our computational analyses clarify the physical differences between the predictions of the one-dimensional and two-dimensional models and open new avenues for exploiting disordered surfaces in low-energy-dissipation applications in the wearless friction regime.
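The one-dimensional Prandtl-Tomlinson picture referred to above can be sketched in a few lines. The following is a minimal, illustrative overdamped 1D PT simulation (sinusoidal surface potential, spring-driven tip) in reduced units; the function name and parameter values are invented for illustration and are not taken from the paper:

```python
import numpy as np

def pt_friction_1d(U0=1.0, a=1.0, k=1.0, v=0.05, gamma=5.0,
                   dt=1e-3, steps=100_000):
    """Overdamped 1-D Prandtl-Tomlinson model: a tip pulled by a spring of
    stiffness k at velocity v across a sinusoidal surface potential
    U(x) = U0*cos(2*pi*x/a).  Returns the time-averaged lateral (friction)
    force.  All quantities are in reduced units."""
    x = 0.0
    forces = []
    for i in range(steps):
        support = v * i * dt                               # driving-support position
        f_surface = (2*np.pi*U0/a) * np.sin(2*np.pi*x/a)   # -dU/dx
        f_spring = k * (support - x)                       # lateral spring force
        x += dt * (f_surface + f_spring) / gamma           # overdamped update
        forces.append(f_spring)
    # discard the initial transient before averaging
    return float(np.mean(forces[steps//2:]))

mean_f = pt_friction_1d()
```

With these parameters the corrugation dominates the spring stiffness, so the tip moves in the stick-slip regime and the time-averaged spring force is positive; for weak corrugation or a stiff spring the model crosses into smooth sliding and the average force drops toward zero.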
ERIC Educational Resources Information Center
Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka
2015-01-01
The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…
PyDREAM: high-dimensional parameter inference for biological models in python.
Shockley, Erin M; Vrugt, Jasper A; Lopez, Carlos F; Valencia, Alfonso
2018-02-15
Biological models contain many parameters whose values are difficult to measure directly via experimentation and therefore require calibration against experimental data. Markov chain Monte Carlo (MCMC) methods are suitable to estimate multivariate posterior model parameter distributions, but these methods may exhibit slow or premature convergence in high-dimensional search spaces. Here, we present PyDREAM, a Python implementation of the (Multiple-Try) Differential Evolution Adaptive Metropolis [DREAM(ZS)] algorithm developed by Vrugt and ter Braak (2008) and Laloy and Vrugt (2012). PyDREAM achieves excellent performance for complex, parameter-rich models and takes full advantage of distributed computing resources, facilitating parameter inference and uncertainty estimation of CPU-intensive biological models. PyDREAM is freely available under the GNU GPLv3 license from the Lopez lab GitHub repository at http://github.com/LoLab-VU/PyDREAM. c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
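The differential-evolution proposal at the heart of DREAM-family samplers can be illustrated on a toy target. The sketch below implements plain DE-MC (each chain jumps along the difference of two other chains) on a bivariate Gaussian; it is not the PyDREAM API, and all names and settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # toy target: standard bivariate normal (stand-in for a model posterior)
    return -0.5 * np.sum(theta**2)

def de_mc(n_chains=10, n_iter=3000, d=2):
    """Minimal Differential Evolution MCMC: each chain proposes a jump along
    the difference of two other randomly chosen chains, scaled by the
    standard factor 2.38/sqrt(2d), plus a small jitter."""
    gamma = 2.38 / np.sqrt(2 * d)
    X = rng.normal(size=(n_chains, d))                 # initial population
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i],
                              size=2, replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + rng.normal(scale=1e-6, size=d)
            # Metropolis accept/reject
            if np.log(rng.random()) < log_post(prop) - log_post(X[i]):
                X[i] = prop
        samples.append(X.copy())
    return np.concatenate(samples[n_iter // 2:])       # drop burn-in

draws = de_mc()
```

Because the proposal scale adapts automatically to the spread of the chain population, the same move works without hand-tuning across very different posterior geometries, which is the property DREAM-type samplers exploit in high-dimensional spaces.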
Development and validation of a two-dimensional fast-response flood estimation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judi, David R; Mcpherson, Timothy N; Burian, Steven J
2009-01-01
A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory-controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimates of depth and velocity when compared with the measured data, as well as when compared with a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
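A fast-response scheme of this general kind can be illustrated in one dimension. The sketch below solves a 1D dam-break problem with a first-order Lax-Friedrichs discretization of the shallow water equations; this is a simpler relative of the upwind method described in the abstract, and the geometry and parameter values are invented for illustration:

```python
import numpy as np

def dam_break_1d(nx=200, t_end=0.5, g=9.81):
    """1-D dam-break solved with a first-order Lax-Friedrichs scheme.
    State U = (h, hu); flux F = (hu, hu^2 + g*h^2/2).  Depths of 2 m and
    1 m on either side of a dam at x = 0 are illustrative values."""
    dx = 10.0 / nx
    x = np.linspace(-5.0, 5.0, nx)
    h = np.where(x < 0, 2.0, 1.0)          # initial depths left/right of the dam
    hu = np.zeros(nx)                      # fluid initially at rest
    t = 0.0
    while t < t_end:
        u = hu / h
        c = np.abs(u) + np.sqrt(g * h)     # characteristic speeds
        dt = 0.4 * dx / c.max()            # CFL-limited time step
        F1, F2 = hu, hu * u + 0.5 * g * h**2
        # Lax-Friedrichs update on interior points
        h[1:-1]  = 0.5*(h[2:] + h[:-2])   - 0.5*dt/dx*(F1[2:] - F1[:-2])
        hu[1:-1] = 0.5*(hu[2:] + hu[:-2]) - 0.5*dt/dx*(F2[2:] - F2[:-2])
        h[0], h[-1] = h[1], h[-2]          # transmissive boundaries
        hu[0], hu[-1] = hu[1], hu[-2]
        t += dt
    return x, h

x, h = dam_break_1d()
```

The solution develops the classic dam-break structure: a rarefaction propagating into the deep side and a bore into the shallow side, connected by an intermediate depth plateau. First-order schemes like this trade sharpness of the fronts for robustness and speed, the same trade-off a fast-response flood tool makes.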
Wang, Huaijun; Kaneko, Osamu F; Tian, Lu; Hristov, Dimitre; Willmann, Jürgen K
2015-05-01
We sought to assess the feasibility and reproducibility of 3-dimensional ultrasound molecular imaging (USMI) of vascular endothelial growth factor receptor 2 (VEGFR2) expression in tumor angiogenesis using a clinical matrix array transducer and a clinical grade VEGFR2-targeted contrast agent in a murine model of human colon cancer. Animal studies were approved by the Institutional Administrative Panel on Laboratory Animal Care. Mice with human colon cancer xenografts (n = 33) were imaged with a clinical ultrasound system and transducer (Philips iU22; X6-1) after intravenous injection of either clinical grade VEGFR2-targeted microbubbles or nontargeted control microbubbles. Nineteen mice were scanned twice to assess imaging reproducibility. Fourteen mice were scanned both before and 24 hours after treatment with either bevacizumab (n = 7) or saline only (n = 7). Three-dimensional USMI data sets were retrospectively reconstructed into multiple consecutive 1-mm-thick USMI data sets to simulate 2-dimensional imaging. Vascular VEGFR2 expression was assessed ex vivo using immunofluorescence. Three-dimensional USMI was highly reproducible using both VEGFR2-targeted microbubbles and nontargeted control microbubbles (intraclass correlation coefficient, 0.83). The VEGFR2-targeted USMI signal significantly (P = 0.02) decreased by 57% after antiangiogenic treatment compared with the control group, which correlated well with ex vivo VEGFR2 expression on immunofluorescence (ρ = 0.93, P = 0.003). If only central 1-mm tumor planes were analyzed to assess antiangiogenic treatment response, the USMI signal change was significantly (P = 0.006) overestimated by an average of 27% (range, 2%-73%) compared with 3-dimensional USMI. Three-dimensional USMI is feasible and highly reproducible and allows accurate assessment and monitoring of VEGFR2 expression in tumor angiogenesis in a murine model of human colon cancer.
New tools for aquatic habitat modeling
D. Tonina; J. A. McKean; C. Tang; P. Goodwin
2011-01-01
Modeling of aquatic microhabitat in streams has typically been done over short channel reaches using one-dimensional simulations, partly because of a lack of high-resolution subaqueous topographic data to better define model boundary conditions. The Experimental Advanced Airborne Research Lidar (EAARL) is an airborne aquatic-terrestrial sensor that allows simultaneous...
Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...
Bayesian Estimation of Multivariate Latent Regression Models: Gauss versus Laplace
ERIC Educational Resources Information Center
Culpepper, Steven Andrew; Park, Trevor
2017-01-01
A latent multivariate regression model is developed that employs a generalized asymmetric Laplace (GAL) prior distribution for regression coefficients. The model is designed for high-dimensional applications where an approximate sparsity condition is satisfied, such that many regression coefficients are near zero after accounting for all the model…
Interfacing the NRL 1-D High Vertical Resolution Aerosol Model with COAMPS
2006-09-30
Hoppel, W. A., and J. J. Shi: A one-dimensional sectional aerosol model integrated with mesoscale meteorological data to study marine boundary layer aerosol dynamics, J. Geophys. Res., in press, 2006.
NASA Astrophysics Data System (ADS)
Shen, Binglin; Xu, Xingqi; Xia, Chunsheng; Pan, Bailiang
2017-11-01
Combining the kinetic and fluid-dynamic processes in static and flowing-gas diode-pumped alkali vapor lasers, a comprehensive physical model with three cyclically iterative algorithms is established for simulating the three-dimensional pump and laser intensities as well as the temperature distribution in the vapor cell of side-pumped alkali vapor lasers. Comparisons are made with measurements of a static side-pumped cesium vapor laser with a diffuse-type hollow-cylinder cavity, and with classical and modified models. The influences of flow velocity and pump power on laser power are calculated and analyzed. The results demonstrate that for high-power side-pumped alkali vapor lasers, the three-dimensional distributions of pump energy, laser energy, and temperature in the cell must be taken into account to obtain both the thermal features and the output characteristics simultaneously. The model can therefore deepen the understanding of the complete kinetic and fluid-dynamic mechanisms of a side-pumped alkali vapor laser and help with its further experimental design.
Analysing black phosphorus transistors using an analytic Schottky barrier MOSFET model.
Penumatcha, Ashish V; Salazar, Ramon B; Appenzeller, Joerg
2015-11-13
Owing to the difficulties associated with substitutional doping of low-dimensional nanomaterials, most field-effect transistors built from carbon nanotubes, two-dimensional crystals and other low-dimensional channels are Schottky barrier MOSFETs (metal-oxide-semiconductor field-effect transistors). The transmission through a Schottky barrier MOSFET is dominated by the gate-dependent transmission through the Schottky barriers at the metal-to-channel interfaces. This makes the use of conventional transistor models highly inappropriate and has in the past frequently led researchers to extract incorrect intrinsic properties, for example, mobility, for many novel nanomaterials. Here we propose a simple modelling approach to quantitatively describe the transfer characteristics of Schottky barrier MOSFETs from ultra-thin body materials accurately in the device off-state. In particular, after validating the model through the analysis of a set of ultra-thin silicon field-effect transistor data, we have successfully applied our approach to extract Schottky barrier heights for electrons and holes in black phosphorus devices for a large range of body thicknesses.
Empirical Bayes Approaches to Multivariate Fuzzy Partitions.
ERIC Educational Resources Information Center
Woodbury, Max A.; Manton, Kenneth G.
1991-01-01
An empirical Bayes-maximum likelihood estimation procedure is presented for the application of fuzzy partition models in describing high dimensional discrete response data. The model describes individuals in terms of partial membership in multiple latent categories that represent bounded discrete spaces. (SLD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K.; Petersson, N. A.; Rodgers, A.
Acoustic waveform modeling is a computationally intensive task, and full three-dimensional simulations are often impractical for some geophysical applications such as long-range wave propagation and high-frequency sound simulation. In this study, we develop a two-dimensional high-order accurate finite-difference code for acoustic wave modeling. We solve the linearized Euler equations by discretizing them with sixth-order accurate finite difference stencils away from the boundary and a third-order summation-by-parts (SBP) closure near the boundary. Non-planar topographic boundaries are resolved by formulating the governing equations in curvilinear coordinates following the interface. We verify the implementation of the algorithm by numerical examples and demonstrate the capability of the proposed method for practical acoustic wave propagation problems in the atmosphere.
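The interior operator of such a scheme, a sixth-order accurate central first-derivative stencil, can be written down and verified directly. The sketch below omits the SBP boundary closures by testing on a periodic function; it illustrates the standard stencil only and is not the authors' code:

```python
import numpy as np

# Sixth-order central first-derivative weights for offsets m = -3..3.
w = np.array([-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60])

def d1_periodic(f, dx):
    """Apply the stencil on a periodic grid: f'(x_i) ~ (1/dx) * sum_m w_m f_{i+m}.
    np.roll(f, 3-k) aligns sample f_{i+m} (m = k-3) with position i."""
    return sum(c * np.roll(f, 3 - k) for k, c in enumerate(w)) / dx

n = 64
x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
err = np.max(np.abs(d1_periodic(np.sin(x), x[1]) - np.cos(x)))
```

Halving the grid spacing should reduce the maximum error by roughly 2⁶ = 64, which is a quick way to confirm the claimed sixth-order interior accuracy.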
A two-dimensional, finite-difference model of the high plains aquifer in southern South Dakota
Kolm, K.E.; Case, H. L.
1983-01-01
The High Plains aquifer is the principal source of water for irrigation, industry, municipalities, and domestic use in south-central South Dakota. The aquifer, composed of upper sandstone units of the Arikaree Formation, and the overlying Ogallala and Sand Hills Formations, was simulated using a two-dimensional, finite-difference computer model. The maximum difference between simulated and measured potentiometric heads was less than 60 feet (1- to 4-percent error). Two-thirds of the simulated potentiometric heads were within 26 feet of the measured values (3-percent error). The estimated saturated thickness, computed from simulated potentiometric heads, was within 25-percent error of the known saturated thickness for 95 percent of the study area. (USGS)
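The five-point finite-difference scheme underlying such two-dimensional groundwater models can be sketched for a toy aquifer. The grid size, boundary conditions, and head values below are invented for illustration and are unrelated to the actual High Plains calibration:

```python
import numpy as np

def steady_heads(nx=40, ny=40, n_iter=5000):
    """Toy steady-state two-dimensional finite-difference head solution:
    uniform transmissivity, fixed heads on the west and east boundaries,
    no-flow north/south.  Solved with five-point Jacobi iteration."""
    h = np.zeros((ny, nx))
    h[:, 0], h[:, -1] = 100.0, 40.0     # fixed-head boundaries (illustrative ft)
    for _ in range(n_iter):
        # five-point Jacobi sweep on interior cells
        h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                h[1:-1, :-2] + h[1:-1, 2:])
        h[0, 1:-1] = h[1, 1:-1]         # no-flow: mirror the adjacent row
        h[-1, 1:-1] = h[-2, 1:-1]
    return h

h = steady_heads()
```

With uniform transmissivity and no-flow lateral boundaries the converged heads decline linearly from the west to the east boundary; calibration of a real model consists of adjusting cell-by-cell transmissivity and recharge until simulated heads match measured ones within a stated tolerance.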
Normal forms for reduced stochastic climate models
Majda, Andrew J.; Franzke, Christian; Crommelin, Daan
2009-01-01
The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
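A one-dimensional normal form of the type discussed above, with cubic damping and CAM noise, can be illustrated by a simple Euler-Maruyama integration. The drift and noise coefficients below are schematic stand-ins, not values estimated from any climate data:

```python
import numpy as np

rng = np.random.default_rng(1)

def cam_normal_form(lam=-0.5, b=1.0, sig_a=0.5, sig_m=0.4,
                    dt=1e-3, steps=500_000):
    """Euler-Maruyama integration of a schematic scalar normal form with
    cubic damping and Correlated Additive and Multiplicative (CAM) noise:
        dx = (lam*x - b*x**3) dt + (sig_a + sig_m*x) dW.
    Using a single Wiener increment in both noise terms makes the additive
    and multiplicative parts perfectly correlated.  Illustrative values only."""
    x = 0.0
    xs = np.empty(steps)
    sqdt = np.sqrt(dt)
    for i in range(steps):
        dW = sqdt * rng.standard_normal()
        x += (lam*x - b*x**3) * dt + (sig_a + sig_m*x) * dW
        xs[i] = x
    return xs

traj = cam_normal_form()
```

Because the noise amplitude depends on the state, the stationary distribution of such a process is skewed and heavier-tailed than a Gaussian, which is one of the qualitative signatures CAM-noise normal forms are used to capture in low-frequency climate indices.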
Kuritz, K; Stöhr, D; Pollak, N; Allgöwer, F
2017-02-07
Cyclic processes, in particular the cell cycle, are of great importance in cell biology. Continued improvement in cell population analysis methods like fluorescence microscopy, flow cytometry, CyTOF or single-cell omics made mathematical methods based on ergodic principles a powerful tool in studying these processes. In this paper, we establish the relationship between cell cycle analysis with ergodic principles and age-structured population models. To this end, we describe the progression of a single cell through the cell cycle by a stochastic differential equation on a one-dimensional manifold in the high-dimensional data space of cell cycle markers. Given the assumption that the cell population is in a steady state, we derive transformation rules which transform the number density on the manifold to the steady-state number density of age-structured population models. Our theory facilitates the study of cell cycle dependent processes including local molecular events, cell death and cell division from high-dimensional "snapshot" data. Ergodic analysis can in general be applied to every process that exhibits a steady-state distribution. By combining ergodic analysis with age-structured population models we furthermore provide the theoretical basis for extensions of ergodic principles to distributions that deviate from their steady state. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Three-Dimensional DOSY HMQC Experiment for the High-Resolution Analysis of Complex Mixtures
NASA Astrophysics Data System (ADS)
Barjat, Hervé; Morris, Gareth A.; Swanson, Alistair G.
1998-03-01
A three-dimensional experiment is described in which NMR signals are separated according to their proton chemical shift, 13C chemical shift, and diffusion coefficient. The sequence is built up from a stimulated echo sequence with bipolar field gradient pulses and a conventional decoupled HMQC sequence. Results are presented for a model mixture of quinine, camphene, and geraniol in deuteriomethanol.
Pinel, Nicolas; Bourlier, Christophe; Saillard, Joseph
2005-08-01
Energy conservation of the scattering from one-dimensional strongly rough dielectric surfaces is investigated using the Kirchhoff approximation with single reflection and by taking the shadowing phenomenon into account, both in reflection and transmission. In addition, because no shadowing function in transmission exists in the literature, this function is presented here in detail. The model is reduced to the high-frequency limit (or geometric optics). The energy conservation criterion is investigated versus the incidence angle, the permittivity of the lower medium, and the surface rms slope.
Weather prediction using a genetic memory
NASA Technical Reports Server (NTRS)
Rogers, David
1990-01-01
Kanerva's sparse distributed memory (SDM) is an associative memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. Genetic Memory is a hybrid of these two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.
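A minimal sparse distributed memory, without the genetic reconfiguration layer, can be sketched as follows. The sizes and activation radius are illustrative choices, not Kanerva's or Rogers' actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sdm(n_locations=2000, n_bits=256, radius=112):
    """Minimal sparse distributed memory: fixed random hard locations.
    A pattern is written into every location within the given Hamming
    radius of its address, and read back by summing the counters of all
    active locations and thresholding at zero."""
    addresses = rng.integers(0, 2, size=(n_locations, n_bits))
    counters = np.zeros((n_locations, n_bits))

    def write(addr, data):
        active = np.count_nonzero(addresses != addr, axis=1) <= radius
        counters[active] += 2*data - 1          # +1 for bit 1, -1 for bit 0

    def read(addr):
        active = np.count_nonzero(addresses != addr, axis=1) <= radius
        return (counters[active].sum(axis=0) > 0).astype(int)

    return write, read

write, read = make_sdm()
pattern = rng.integers(0, 2, size=256)
write(pattern, pattern)                          # autoassociative store
noisy = pattern.copy()
flip = rng.choice(256, size=20, replace=False)
noisy[flip] ^= 1                                 # corrupt 20 of 256 bits
recovered = read(noisy)
```

Because the write and read address activate largely overlapping sets of hard locations even when the cue is corrupted, the stored pattern is typically recovered exactly from the noisy address, which is the error-correcting behavior that makes high-dimensional binary spaces attractive for associative memory.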
Costello, John P; Olivieri, Laura J; Krieger, Axel; Thabit, Omar; Marshall, M Blair; Yoo, Shi-Joon; Kim, Peter C; Jonas, Richard A; Nath, Dilip S
2014-07-01
The current educational approach for teaching congenital heart disease (CHD) anatomy to students involves instructional tools and techniques that have significant limitations. This study sought to assess the feasibility of utilizing present-day three-dimensional (3D) printing technology to create high-fidelity synthetic heart models with ventricular septal defect (VSD) lesions and applying these models to a novel, simulation-based educational curriculum for premedical and medical students. Archived, de-identified magnetic resonance images of five common VSD subtypes were obtained. These cardiac images were then segmented and built into 3D computer-aided design models using Mimics Innovation Suite software. An Objet500 Connex 3D printer was subsequently utilized to print a high-fidelity heart model for each VSD subtype. Next, a simulation-based educational curriculum using these heart models was developed and implemented in the instruction of 29 premedical and medical students. Assessment of this curriculum was undertaken with Likert-type questionnaires. High-fidelity VSD models were successfully created utilizing magnetic resonance imaging data and 3D printing. Following instruction with these high-fidelity models, all students reported significant improvement in knowledge acquisition (P < .0001), knowledge reporting (P < .0001), and structural conceptualization (P < .0001) of VSDs. It is feasible to use present-day 3D printing technology to create high-fidelity heart models with complex intracardiac defects. Furthermore, this tool forms the foundation for an innovative, simulation-based educational approach to teach students about CHD and creates a novel opportunity to stimulate their interest in this field. © The Author(s) 2014.
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Kopasakis, George; Carlson, Jan-Renee; Woolwine, Kyle
2015-01-01
This paper covers the development of an integrated nonlinear dynamic model for a variable cycle turbofan engine, supersonic inlet, and convergent-divergent nozzle that can be integrated with an aeroelastic vehicle model to create an overall Aero-Propulso-Servo-Elastic (APSE) modeling tool. The primary focus of this study is to provide a means to capture relevant thrust dynamics of a full supersonic propulsion system by using relatively simple quasi-one dimensional computational fluid dynamics (CFD) methods that will allow for accurate control algorithm development and capture the key aspects of the thrust to feed into an APSE model. Previously, propulsion system component models have been developed and are used for this study of the fully integrated propulsion system. An overview of the methodology is presented for the modeling of each propulsion component, with a focus on its associated coupling for the overall model. To conduct APSE studies the described dynamic propulsion system model is integrated into a high fidelity CFD model of the full vehicle capable of conducting aero-elastic studies. Dynamic thrust analysis for the quasi-one dimensional dynamic propulsion system model is presented along with an initial three dimensional flow field model of the engine integrated into a supersonic commercial transport.
NASA Astrophysics Data System (ADS)
Lazarowitz, Reuven; Naim, Raphael
2013-08-01
The cell topic was taught to 9th-grade students in three modes of instruction: (a) "hands-on", in which students constructed three-dimensional cell organelles and macromolecules during the learning process; (b) teacher demonstration of a three-dimensional model of the cell structures; and (c) teaching the cell topic with the regular learning material in an expository mode (which uses one- or two-dimensional cell structures as presented in charts, textbooks, and microscope slides). The sample included 669 9th-grade students from 25 classes taught by 22 Biology teachers. Students were randomly assigned to the three modes of instruction, and two tests of content knowledge in Biology were used. Data were treated with multiple analyses of variance. The results indicate that entry behavior in Biology was equal across the study groups and types of schools. The "hands-on" group, who built three-dimensional models during the learning process, achieved significantly higher academic achievement and scored significantly higher on both high- and low-level cognitive questions than the other two groups. The study indicates the advantages students may gain from being actively engaged in the learning process through the "hands-on" mode of instruction.
NASA Astrophysics Data System (ADS)
Heizler, Shay I.; Kessler, David A.
2017-06-01
Mode-I fracture exhibits microbranching in the high velocity regime where the simple straight crack is unstable. For velocities below the instability, classic modeling using linear elasticity is valid. However, showing the existence of the instability and calculating the dynamics postinstability within the linear elastic framework is difficult and controversial. The experimental results give several indications that the microbranching phenomenon is basically a three-dimensional (3D) phenomenon. Nevertheless, the theoretical effort has been focused mostly on two-dimensional (2D) modeling. In this paper we study the microbranching instability using three-dimensional atomistic simulations, exploring the difference between the 2D and the 3D models. We find that the basic 3D fracture pattern shares similar behavior with the 2D case. Nevertheless, we exhibit a clear 3D-2D transition as the crack velocity increases: as long as the microbranches are sufficiently small, the behavior is purely three-dimensional, whereas at large driving, as the size of the microbranches increases, more 2D-like behavior is exhibited. In addition, in 3D simulations the quantitative features of the microbranches, separating the regimes of steady-state cracks (mirror) and postinstability (mist-hackle), are reproduced clearly, consistent with the experimental findings.
An empirically derived three-dimensional Laplace resonance in the Gliese 876 planetary system
NASA Astrophysics Data System (ADS)
Nelson, Benjamin E.; Robertson, Paul M.; Payne, Matthew J.; Pritchard, Seth M.; Deck, Katherine M.; Ford, Eric B.; Wright, Jason T.; Isaacson, Howard T.
2016-01-01
We report constraints on the three-dimensional orbital architecture for all four planets known to orbit the nearby M dwarf Gliese 876 based solely on Doppler measurements and demanding long-term orbital stability. Our data set incorporates publicly available radial velocities taken with the ELODIE and CORALIE spectrographs, the High Accuracy Radial velocity Planet Searcher (HARPS), and the Keck High Resolution Echelle Spectrometer (HIRES), as well as previously unpublished HIRES velocities. We first quantitatively assess the validity of the planets thought to orbit GJ 876 by computing the Bayes factors for a variety of different coplanar models using an importance sampling algorithm. We find that a four-planet model is preferred over a three-planet model. Next, we apply a Newtonian Markov chain Monte Carlo algorithm to perform a Bayesian analysis of the planet masses and orbits using an N-body model in three-dimensional space. Based on the radial velocities alone, we find that a 99 per cent credible interval provides upper limits on the mutual inclinations for the three resonant planets (Φcb < 6.20° for the {c} and {b} pair and Φbe < 28.5° for the {b} and {e} pair). Subsequent dynamical integrations of our posterior sample find that the GJ 876 planets must be roughly coplanar (Φcb < 2.60° and Φbe < 7.87°), suggesting that the amount of planet-planet scattering in the system has been low. We investigate the distribution of the respective resonant arguments of each planet pair and find that at least one argument for each planet pair and the Laplace argument librate. The libration amplitudes in our three-dimensional orbital model support the idea of the outer three planets having undergone significant past disc migration.
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections, etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meet the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. These simplified local FEM models are useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature, and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two- and three-dimensional analyses are performed in order to validate these simplified models. Thermal-structural analyses and optimizations are executed using the Ansys FEM code.
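The one-dimensional sizing step can be illustrated with a minimal steady-conduction sketch (far simpler than the generalized-plane-strain FEM models in the study; all material properties, the heat flux, and the temperature limit below are illustrative assumptions): the temperature drop across each layer of a two-layer stack carrying heat flux q is q·t/k, and a grid search returns the lightest pair of thicknesses that keeps the back face below its limit.

```python
# Minimal 1D steady-conduction sizing sketch for a two-layer thermal
# protection stack. All numerical values are illustrative assumptions.
def back_face_temp(t1, t2, k1, k2, q, t_surf):
    """Back-face temperature for heat flux q through two layers in series."""
    return t_surf - q * (t1 / k1 + t2 / k2)

def optimize_stack(k1, k2, rho1, rho2, q, t_surf, t_limit, step=0.001, t_max=0.2):
    """Grid-search the lightest (t1, t2) keeping the back face below t_limit."""
    best = None
    n = int(t_max / step) + 1
    for i in range(n):
        for j in range(n):
            t1, t2 = i * step, j * step
            if back_face_temp(t1, t2, k1, k2, q, t_surf) <= t_limit:
                mass = rho1 * t1 + rho2 * t2  # areal mass, kg/m^2
                if best is None or mass < best[0]:
                    best = (mass, t1, t2)
    return best

# Illustrative properties: a light insulator (low k, low rho) over a dense substrate.
mass, t1, t2 = optimize_stack(k1=0.1, k2=10.0, rho1=300.0, rho2=2700.0,
                              q=5e4, t_surf=1200.0, t_limit=450.0)
```

As expected for a linear-resistance constraint, the search concentrates all of the thermal resistance in the material with the lower density-conductivity product.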
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Determining which input parameters have the greatest impact on the model's prediction is often difficult, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem has been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only the input variables that appear to affect the outcome variable. The purpose of this project is to explore various means of training neural networks and to utilize dimensionality reduction for visualizing and understanding complex datasets.
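As a concrete instance of mapping a problem to two dimensions, the sketch below uses principal component analysis, one standard dimension-reduction technique (the abstract does not commit to a specific method, so this is an illustrative choice); the dataset and its planted two-dimensional structure are likewise made up for the example.

```python
import numpy as np

# Sketch of mapping a high-dimensional dataset to two dimensions with PCA,
# one standard technique for the kind of 2D visualization described above.
def pca_2d(X):
    """Project rows of X (n_samples x n_features) onto the top 2 principal axes."""
    Xc = X - X.mean(axis=0)                      # center each feature
    # SVD of the centered data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # n_samples x 2 embedding

rng = np.random.default_rng(0)
# 200 points in 10-D that actually live near a 2-D plane plus small noise
latent = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 10))
X = latent @ A + 0.01 * rng.normal(size=(200, 10))
Y = pca_2d(X)   # 2-D coordinates ready for a scatter plot
```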
Plurigon: three dimensional visualization and classification of high-dimensionality data
Martin, Bronwen; Chen, Hongyu; Daimon, Caitlin M.; Chadwick, Wayne; Siddiqui, Sana; Maudsley, Stuart
2013-01-01
High-dimensionality data is rapidly becoming the norm for biomedical sciences and many other analytical disciplines. Not only is the collection and processing time for such data becoming problematic, but it has become increasingly difficult to form a comprehensive appreciation of high-dimensionality data. Though data analysis methods for coping with multivariate data are well-documented in technical fields such as computer science, little effort is currently being expended to condense data vectors that exist beyond the realm of physical space into an easily interpretable and aesthetic form. To address this important need, we have developed Plurigon, a data visualization and classification tool for the integration of high-dimensionality visualization algorithms with a user-friendly, interactive graphical interface. Unlike existing data visualization methods, which are focused on an ensemble of data points, Plurigon places a strong emphasis upon the visualization of a single data point and its determining characteristics. Multivariate data vectors are represented in the form of a deformed sphere with a distinct topology of hills, valleys, plateaus, peaks, and crevices. The gestalt structure of the resultant Plurigon object generates an easily-appreciable model. User interaction with the Plurigon is extensive; zoom, rotation, axial and vector display, feature extraction, and anaglyph stereoscopy are currently supported. With Plurigon and its ability to analyze high-complexity data, we hope to see a unification of biomedical and computational sciences as well as practical applications in a wide array of scientific disciplines. Increased accessibility to the analysis of high-dimensionality data may increase the number of new discoveries and breakthroughs, ranging from drug screening to disease diagnosis to medical literature mining. PMID:23885241
Schiek, Richard [Albuquerque, NM
2006-06-20
A method of generating two-dimensional masks from a three-dimensional model comprises providing a three-dimensional model representing a micro-electro-mechanical structure for manufacture and a description of process mask requirements, reducing the three-dimensional model to a topological description of unique cross sections, and selecting candidate masks from the unique cross sections and the cross section topology. The method further can comprise reconciling the candidate masks based on the process mask requirements description to produce two-dimensional process masks.
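A minimal sketch of the cross-section reduction, assuming the three-dimensional model has already been discretized into a bottom-to-top stack of two-dimensional layers (the tiny boolean grids below are illustrative stand-ins for real mask geometry, not the patent's data structures):

```python
# Sketch of the "unique cross sections" reduction: represent the 3-D model as a
# bottom-to-top stack of 2-D layers and keep one candidate mask per run of
# identical consecutive layers, recording where each run starts.
def unique_cross_sections(layers):
    """Collapse consecutive identical layers; return (mask, first_layer_index) pairs."""
    candidates = []
    for z, layer in enumerate(layers):
        if not candidates or candidates[-1][0] != layer:
            candidates.append((layer, z))
    return candidates

# A toy structure: a wide base, then a narrower post repeated over several layers.
base = ((1, 1, 1),
        (1, 1, 1))
post = ((0, 1, 0),
        (0, 1, 0))
stack = [base, base, post, post, post]
masks = unique_cross_sections(stack)   # two candidate masks: base at z=0, post at z=2
```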
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abers, G.A.
1994-03-10
Free-air gravity highs over forearcs represent a large fraction of the power in the Earth's anomalous field, yet their origin remains uncertain. Seismic velocities, as indicators of density, are estimated here as a means to compare the relative importance of upper plate sources for the gravity high with sources in the downgoing plate. P and S arrival times for local earthquakes, recorded by a seismic network in the eastern Aleutians, are inverted for three-dimensional velocity structure between the volcanic arc and the downgoing plate. A three-dimensional ray tracing scheme is used to invert the 7974 P and 6764 S arrivals for seismic velocities and hypocenters of 635 events. One-dimensional inversions show that station P residuals are systematically 0.25-0.5 s positive at stations 0-30 km north of the Aleutian volcanic arc, indicating slow material, while residuals at stations 10-30 km south of the arc are 0.1-0.25 s negative. Both features are explained in three-dimensional inversions by velocity variations at depths less than 25-35 km. Tests using a one-dimensional or a two-dimensional slab starting model show that below 100 km depth, velocities are poorly determined and trade off almost completely with hypocenters for earthquakes at these depths. The locations of forearc velocity highs, in the crust of the upper plate, correspond to the location of the gravity high between the trench and volcanic arc. Free-air anomalies, calculated from the three-dimensional velocity inversion result, match observed gravity for a linear density-velocity relationship between 0.1 and 0.3 (Mg m⁻³)/(km s⁻¹), when a 50-km-thick slab is included with a density of 0.055 ± 0.005 Mg m⁻³. Values outside these ranges do not match the observed gravity. The slab alone contributes one third to one half of the total 75-150 mGal amplitude of the gravity high but predicts a high that is much broader than is observed.
NASA Astrophysics Data System (ADS)
Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.
2011-12-01
High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of highly nonlinear, high-dimensional problems, as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme-scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Array toolkit, with the Beowulf-cluster-inspired parallel nonlinear parameter estimation software BeoPEST in MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step, eliminating the need for post-processing data extraction. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at the Hanford Grass Site; an irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreement between observations and simulations is achieved in all three applications, in both the parameter estimates and the reproduction of water dynamics. Results show that the eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault tolerant, and nodes can be dynamically added and removed.
A major advantage of this approach is the ability to use high-resolution geologic models to preserve the spatial structure in the inverse model, which leads to better parameter estimates and improved predictions when using the inverse-conditioned realizations of parameter fields.
NASA Astrophysics Data System (ADS)
Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan
2012-10-01
The accuracy of the Earth's gravitational field, measured up to degree 250 from the Gravity field and steady-state Ocean Circulation Explorer (GOCE), as influenced by the radial gravity gradient Vzz and by the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), is assessed based on an analytical error model and on numerical simulation, respectively. First, new analytical error models of the cumulative geoid height error induced by the radial gravity gradient Vzz and by the three-dimensional gravity gradient Vij are established. Up to degree 250, the GOCE cumulative geoid height error measured by the radial gravity gradient Vzz is about 2.5 times larger than that measured by the three-dimensional gravity gradient Vij. Second, the Earth's gravitational field is recovered from GOCE completely up to degree 250 by numerical simulation, using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, respectively. The results show that when the measurement error of the gravity gradient is 3 × 10⁻¹² s⁻², the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average compared with that using the radial gravity gradient Vzz up to degree 250. Finally, by mutual verification of the analytical error model and the numerical simulation, the accuracies of the Earth's gravitational field recovery based on the radial and on the three-dimensional gravity gradients show no substantial difference in order of magnitude. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10⁻¹³ s⁻² to 10⁻¹⁵ s⁻² for precisely producing the next-generation GOCE Follow-On Earth gravity field model with high spatial resolution.
Penalized gaussian process regression and classification for high-dimensional nonlinear data.
Yi, G; Shi, J Q; Choi, T
2011-12-01
The model based on a Gaussian process (GP) prior and a kernel covariance function can be used to fit nonlinear data with multidimensional covariates. It has been used as a flexible nonparametric approach for curve fitting, classification, clustering, and other statistical problems, and has been widely applied to complex nonlinear systems in many different areas, particularly in machine learning. However, it is challenging to use the model for large-scale data sets and high-dimensional data, for example, for the meat data discussed in this article, which have 100 highly correlated covariates. For such data, the model suffers from large variance in parameter estimation and high predictive error, and, numerically, from unstable computation. In this article, a penalized likelihood framework will be applied to the model based on GPs. Different penalties will be investigated, and their suitability for the characteristics of GP models will be discussed. The asymptotic properties will also be discussed, with the relevant proofs. Several applications to real biomechanical and bioinformatics data sets will be reported. © 2011, The International Biometric Society. No claim to original US government works.
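A minimal numpy sketch of the penalized-GP idea (not the authors' implementation): an automatic-relevance-determination kernel whose inverse length-scales receive an L1 penalty added to the negative log marginal likelihood, so that irrelevant covariates can be shrunk away. The kernel form, the penalty, and all settings below are illustrative assumptions.

```python
import numpy as np

# Gaussian-process regression with an ARD squared-exponential kernel and an
# L1 penalty on the inverse length-scales (a sketch of penalized likelihood
# for GPs; all hyperparameter values are illustrative, not the paper's).
def ard_kernel(X1, X2, inv_ls, var=1.0):
    d = (X1[:, None, :] - X2[None, :, :]) * inv_ls   # scaled coordinate differences
    return var * np.exp(-0.5 * np.sum(d * d, axis=-1))

def penalized_neg_log_marginal(X, y, inv_ls, noise=0.1, lam=1.0):
    K = ard_kernel(X, X, inv_ls) + noise * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    nll = 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
    return nll + lam * np.sum(np.abs(inv_ls))        # L1 penalty on inverse scales

def gp_predict(Xtr, ytr, Xte, inv_ls, noise=0.1):
    K = ard_kernel(Xtr, Xtr, inv_ls) + noise * np.eye(len(ytr))
    return ard_kernel(Xte, Xtr, inv_ls) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
y = np.sin(X[:, 0])                              # only the first covariate matters
inv_ls = np.array([1.0, 0.0, 0.0, 0.0, 0.0])     # sparse scales: others switched off
pred = gp_predict(X, y, X, inv_ls)
```

In a full treatment the penalized objective would be minimized over `inv_ls`; here the sparse solution is fixed by hand to keep the sketch short.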
Aircraft High-Lift Aerodynamic Analysis Using a Surface-Vorticity Solver
NASA Technical Reports Server (NTRS)
Olson, Erik D.; Albertson, Cindy W.
2016-01-01
This study extends an existing semi-empirical approach to high-lift analysis by examining its effectiveness for use with a three-dimensional aerodynamic analysis method. The aircraft high-lift geometry is modeled in Vehicle Sketch Pad (OpenVSP) using a newly-developed set of techniques for building a three-dimensional model of the high-lift geometry, and for controlling flap deflections using scripted parameter linking. Analysis of the low-speed aerodynamics is performed in FlightStream, a novel surface-vorticity solver that is expected to be substantially more robust and stable compared to pressure-based potential-flow solvers and less sensitive to surface perturbations. The calculated lift curve and drag polar are modified by an empirical lift-effectiveness factor that takes into account the effects of viscosity that are not captured in the potential-flow solution. Analysis results are validated against wind-tunnel data for The Energy-Efficient Transport AR12 low-speed wind-tunnel model, a 12-foot, full-span aircraft configuration with a supercritical wing, full-span slats, and part-span double-slotted flaps.
NASA Astrophysics Data System (ADS)
Matveev, A. D.
2016-11-01
To calculate three-dimensional elastic bodies of heterogeneous structure under static loading, a multigrid finite element method is provided, implemented on the basis of finite element method (FEM) algorithms using homogeneous and composite three-dimensional multigrid finite elements (MFE). The peculiarity distinguishing MFE from currently available finite elements (FE) is that, in developing composite MFE (without increasing their dimensions), an arbitrarily small basic partition of the composite solid consisting of single-grid homogeneous first-order FE can be used, i.e., in effect, a micro approach in finite element form. These small partitions allow one to take into account in MFE, i.e., in the basic discrete models of composite solids, complex heterogeneous and microscopically inhomogeneous structure and shape, the complex nature of the loading and fixation, and to describe arbitrarily closely the stress and strain state by the equations of three-dimensional elasticity theory without any additional simplifying hypotheses. When building an m-grid FE, m nested grids are used. The fine grid is generated by the basic partition of the MFE; the other m − 1 coarse grids are applied to reduce the MFE dimensionality, which becomes smaller as m is increased. Procedures for developing MFE of rectangular parallelepiped, irregular-shape, plate, and beam types are given. MFE generate small-dimensional discrete models and numerical solutions of high accuracy. An example of calculating a laminated plate using three-dimensional 3-grid FE is given, the reference discrete model of which has 2.2 billion FEM nodal unknowns.
Prediction model of sinoatrial node field potential using high order partial least squares.
Feng, Yu; Cao, Hui; Zhang, Yanbin
2015-01-01
High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction models that have tensor input and output. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that, on the premise of predicting two-dimensional variables, HOPLS had the same predictive ability as, and a lower dispersion degree than, partial least squares (PLS).
Quality metrics in high-dimensional data visualization: an overview and systematization.
Bertini, Enrico; Tatu, Andrada; Keim, Daniel
2011-12-01
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE
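The quality-metric pipeline can be sketched in a few lines: score every candidate 2-D projection of a dataset and surface the top-ranked ones to the user. The metric below (absolute Pearson correlation between the two projected axes) is a deliberately simple stand-in for the scagnostics-style measures surveyed in the paper, and the dataset with its planted structure is invented for the example.

```python
import numpy as np

# Sketch of quality-metric-driven exploration: rank all 2-D axis-pair
# projections of a high-dimensional dataset by a simple interestingness score.
def rank_projections(X):
    """Return axis pairs (i, j) sorted by |Pearson correlation|, best first."""
    n_features = X.shape[1]
    scored = []
    for i in range(n_features):
        for j in range(i + 1, n_features):
            r = np.corrcoef(X[:, i], X[:, j])[0, 1]
            scored.append((abs(r), (i, j)))
    scored.sort(reverse=True)          # most "interesting" projections first
    return [pair for _, pair in scored]

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
X[:, 3] = X[:, 1] * 2.0 + 0.05 * rng.normal(size=300)   # planted linear structure
best = rank_projections(X)[0]   # the metric should surface the (1, 3) pair
```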
NASA Technical Reports Server (NTRS)
Kumar, A.
1984-01-01
A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.
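The explicit, two-step update can be illustrated in one dimension. The sketch below applies a MacCormack-type predictor-corrector step, a representative explicit two-step finite-difference scheme in conservation form, to linear advection on a periodic grid; it is offered as an illustration of the scheme family, not as NASCRIN's actual discretization.

```python
import numpy as np

# One-dimensional sketch of an explicit, two-step (predictor-corrector)
# finite-difference update for a conservation law, here linear advection
# u_t + a u_x = 0 with periodic boundaries.
def two_step_update(u, c):
    """One MacCormack-type step with Courant number c = a*dt/dx."""
    # predictor: forward difference
    u_star = u - c * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted state, then average
    return 0.5 * (u + u_star - c * (u_star - np.roll(u_star, 1)))

n, c = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)      # smooth initial pulse
total0 = u.sum()
for _ in range(400):                     # 400 steps of half a cell = one full period
    u = two_step_update(u, c)
```

Because the update is written in conservation form on a periodic grid, the discrete integral of u is preserved to round-off over the whole run.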
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
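The starting point for such estimators can be made concrete. Under the missing-completely-at-random model, each covariance entry can be estimated from only the samples in which both coordinates are observed; the sketch below implements this generic pairwise-complete sample covariance (an assumed illustration, not the authors' exact estimator, which additionally applies banding or thresholding to this kind of matrix).

```python
import numpy as np

# Pairwise-complete sample covariance under MCAR missingness: entry (j, k) is
# averaged only over the samples where both coordinates are observed
# (NaN marks a missing value).
def pairwise_complete_cov(X):
    """Entrywise covariance of X (n x p) using pairwise-complete observations."""
    obs = ~np.isnan(X)
    Xz = np.where(obs, X, 0.0)
    counts = obs.astype(float)
    means = Xz.sum(axis=0) / counts.sum(axis=0)     # available-case column means
    Xc = np.where(obs, X - means, 0.0)              # centered, zeros where missing
    pair_n = counts.T @ counts                      # co-observation counts per pair
    return (Xc.T @ Xc) / pair_n

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4)) @ np.array([[1.0, 0.5, 0.0, 0.0],
                                          [0.0, 1.0, 0.5, 0.0],
                                          [0.0, 0.0, 1.0, 0.5],
                                          [0.0, 0.0, 0.0, 1.0]])
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan          # 20% MCAR missingness
S = pairwise_complete_cov(X_miss)
```

With no missing values the construction reduces exactly to the ordinary (biased) sample covariance, which is a convenient sanity check.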
Three-dimensional stress intensity factor analysis of a surface crack in a high-speed bearing
NASA Technical Reports Server (NTRS)
Ballarini, Roberto; Hsu, Yingchun
1990-01-01
The boundary element method is applied to calculate the stress intensity factors of a surface crack in the rotating inner raceway of a high-speed roller bearing. The three-dimensional model consists of an axially stressed surface cracked plate subjected to a moving Hertzian contact loading. A multidomain formulation and singular crack-tip elements were employed to calculate the stress intensity factors accurately and efficiently for a wide range of configuration parameters. The results can provide the basis for crack growth calculations and fatigue life predictions of high-performance rolling element bearings that are used in aircraft engines.
NASA Astrophysics Data System (ADS)
Shao, Meng; Xiao, Chengsi; Sun, Jinwei; Shao, Zhuxiao; Zheng, Qiuhong
2017-12-01
The paper analyzes the hydrodynamic characteristics and strength of a novel dot-matrix oscillating wave energy converter, in line with current research trends: high power, high efficiency, high reliability, and low cost. Based on three-dimensional potential flow theory, the paper establishes the motion control equations of the wave energy converter unit and calculates wave loads and motions. On this basis, a three-dimensional finite element model of the device is built to check its strength. The analysis confirms that the WEC is feasible, and the results can serve as a reference for the exploration and utilization of wave energy.
A High-Resolution, Three-Dimensional Model of Jupiter's Great Red Spot
NASA Technical Reports Server (NTRS)
Cho, James Y.-K.; delaTorreJuarez, Manuel; Ingersoll, Andrew P.; Dritschel, David G.
2001-01-01
The turbulent flow at the periphery of the Great Red Spot (GRS) contains many fine-scale filamentary structures, while the more quiescent core, bounded by a narrow high-velocity ring, exhibits organized, possibly counterrotating, motion. Past studies have neither been able to capture this complexity nor adequately study the effect of vertical stratification L_R(ζ) on the GRS. We present results from a series of high-resolution, three-dimensional simulations that advect the dynamical tracer, potential vorticity. The detailed flow is successfully captured with a characteristic value of L_R ≈ 2000 km, independent of the precise vertical stratification profile.
Alfvén Turbulence Driven by High-Dimensional Interior Crisis in the Solar Wind
NASA Astrophysics Data System (ADS)
Chian, A. C.-L.; Rempel, E. L.; Macau, E. E. N.; Rosa, R. R.; Christiansen, F.
2003-09-01
Alfvén intermittent turbulence has been observed in the solar wind. It has been previously shown that the interplanetary Alfvén intermittent turbulence can appear due to low-dimensional temporal chaos [1]. In this paper, we study the nonlinear spatiotemporal dynamics of Alfvén waves governed by the Kuramoto-Sivashinsky equation, which describes the phase evolution of a large-amplitude Alfvén wave. We investigate the Alfvén turbulence driven by a high-dimensional interior crisis, which is a global bifurcation caused by the collision of a chaotic attractor with an unstable periodic orbit. This nonlinear phenomenon is analyzed using the numerical solutions of the model equation. The identification of the unstable periodic orbits and their invariant manifolds is fundamental for understanding the instability, chaos and turbulence in complex systems such as the solar wind plasma. The high-dimensional dynamical system approach to space environment turbulence developed in this paper can improve our interpretation of the origin and the nature of Alfvén turbulence observed in the solar wind.
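The model equation can be integrated numerically with a standard pseudospectral scheme. The sketch below solves the Kuramoto-Sivashinsky equation u_t = −u u_x − u_xx − u_xxxx on a periodic domain, treating the stiff linear terms implicitly; the domain size, resolution, time step, and initial condition are illustrative choices, not the settings used in the paper.

```python
import numpy as np

# Pseudospectral sketch of the Kuramoto-Sivashinsky equation
#   u_t = -u u_x - u_xx - u_xxxx
# on a periodic domain: stiff linear terms treated implicitly, the
# nonlinearity explicitly, with 2/3-rule dealiasing.
n, L_dom = 128, 32.0 * np.pi
x = np.linspace(0.0, L_dom, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L_dom / n)   # angular wavenumbers
lin = k**2 - k**4                                  # linear operator in Fourier space
dealias = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()

u = np.cos(x / 16.0) * (1.0 + np.sin(x / 16.0))    # smooth initial condition
dt = 0.01
for _ in range(2000):                              # integrate to t = 20
    u_hat = np.fft.fft(u)
    nonlin = -0.5j * k * np.fft.fft(u * u) * dealias   # -u u_x = -(u^2/2)_x
    u_hat = (u_hat + dt * nonlin) / (1.0 - dt * lin)   # semi-implicit Euler step
    u = np.real(np.fft.ifft(u_hat))
```

The implicit treatment of the k² − k⁴ operator strongly damps the high wavenumbers, which is what makes such a simple first-order splitting usable on this stiff equation.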
Fu, Feng; Qin, Zhe; Xu, Chao; Chen, Xu-yi; Li, Rui-xin; Wang, Li-na; Peng, Ding-wei; Sun, Hong-tao; Tu, Yue; Chen, Chong; Zhang, Sai; Zhao, Ming-liang; Li, Xiao-hong
2017-01-01
Conventional fabrication methods lack the ability to control both macro- and micro-structures of the generated scaffolds. Three-dimensional printing is a solid free-form fabrication method that provides novel ways to create customized scaffolds with high precision and accuracy. In this study, an electrically controlled cortical impactor was used to induce randomized brain tissue defects. The overall shape of the scaffolds was designed using rat-specific anatomical data obtained from magnetic resonance imaging, and the internal structure was created by computer-aided design. Because of limitations arising from the insufficient resolution of the manufacturing process, we magnified the cavity model prototype five-fold to successfully fabricate customized collagen-chitosan scaffolds using three-dimensional printing. Results demonstrated that the scaffolds have three-dimensional porous structures, high porosity, high specific surface area, pore connectivity, and good internal characteristics. Neural stem cells co-cultured with the scaffolds showed good viability, indicating good biocompatibility and biodegradability. This technique may be a promising new strategy for regenerating complex damaged brain tissues, and helps pave the way toward personalized medicine. PMID:28553343
NASA Astrophysics Data System (ADS)
Guinot, Vincent
2017-11-01
The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. Nine different geometries are considered, and 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation mechanism presented in the DIP model, (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for, and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on the high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.
Locating landmarks on high-dimensional free energy surfaces
Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark E.
2015-01-01
Coarse graining of complex systems possessing many degrees of freedom can often be a useful approach for analyzing and understanding key features of these systems in terms of just a few variables. The relevant energy landscape in a coarse-grained description is the free energy surface as a function of the coarse-grained variables, which, despite the dimensional reduction, can still be an object of high dimension. Consequently, navigating and exploring this high-dimensional free energy surface is a nontrivial task. In this paper, we use techniques from multiscale modeling, stochastic optimization, and machine learning to devise a strategy for locating minima and saddle points (termed “landmarks”) on a high-dimensional free energy surface “on the fly” and without requiring prior knowledge of or an explicit form for the surface. In addition, we propose a compact graph representation of the landmarks and connections between them, and we show that the graph nodes can be subsequently analyzed and clustered based on key attributes that elucidate important properties of the system. Finally, we show that knowledge of landmark locations allows for the efficient determination of their relative free energies via enhanced sampling techniques. PMID:25737545
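The landmark-location idea can be caricatured on a two-dimensional stand-in for a free energy surface: run local descent from many starting points and merge the converged endpoints into distinct minima. The double-well surface, the plain gradient descent, and the tolerances below are illustrative assumptions, far simpler than the multiscale, on-the-fly machinery of the paper.

```python
import numpy as np

# Toy landmark search: gradient descent from many starts on the double-well
# surface F(x, y) = (x^2 - 1)^2 + y^2, which has two minima at (+/-1, 0).
def grad(p):
    x, y = p
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

def descend(p, step=0.01, iters=2000):
    for _ in range(iters):
        p = p - step * grad(p)
    return p

starts = [np.array([x0, y0]) for x0 in (-2.0, -0.5, 0.5, 2.0) for y0 in (-1.0, 1.0)]
ends = [descend(p) for p in starts]

landmarks = []          # cluster endpoints: merge points closer than a tolerance
for e in ends:
    if not any(np.linalg.norm(e - m) < 1e-3 for m in landmarks):
        landmarks.append(e)
```

Locating saddle points would require additional machinery (e.g. following the softest Hessian eigenvector), which is omitted here.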
Modeling job sites in real time to improve safety during equipment operation
NASA Astrophysics Data System (ADS)
Caldas, Carlos H.; Haas, Carl T.; Liapi, Katherine A.; Teizer, Jochen
2006-03-01
Real-time three-dimensional (3D) modeling of work zones has received increasing interest as a means of performing equipment operation faster, more safely, and more precisely. In addition, hazardous job site environments such as those found on construction sites call for new devices that can rapidly and actively model static and dynamic objects. Flash LADAR (Laser Detection and Ranging) cameras are one of the recent technology developments that allow rapid spatial data acquisition of scenes. Algorithms that can process and interpret the output of such enabling technologies into three-dimensional models have the potential to significantly improve work processes. One particularly important application is modeling the location and path of objects in the trajectory of heavy construction equipment navigation. Detecting and mapping people, materials and equipment into a three-dimensional computer model allows analysis of their location and path, and can limit or restrict access to hazardous areas. This paper presents experiments and results of a real-time three-dimensional modeling technique to detect static and moving objects within the field of view of a laser range scanning device with a high frame update rate. Applications related to heavy equipment operations on transportation and construction job sites are specified.
Analysis and topology optimization design of high-speed driving spindle
NASA Astrophysics Data System (ADS)
Wang, Zhilin; Yang, Hai
2018-04-01
The three-dimensional model of the high-speed driving spindle is established using SOLIDWORKS and imported through the ABAQUS interface. A finite element analysis model of the high-speed driving spindle was established using spring elements to simulate the bearing boundary conditions. A static analysis of the spindle yielded stress, strain, and displacement nephograms; on the basis of these results, topology optimization was performed on the spindle, completing its lightweight design. The design scheme provides guidance for the design of axial parts with similar structures.
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we proposed a novel low-rank approximation based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
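The reduce-then-cluster strategy can be sketched with plain linear algebra. The code below is a hypothetical stand-in, not the LRAcluster implementation: a truncated SVD plays the role of the low-rank projection on a synthetic two-group "omics" matrix, and a minimal k-means runs in the reduced subspace; all data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multi-omics" matrix: 60 samples from two latent subtypes, 200 features.
centers = rng.normal(size=(2, 200))
labels_true = np.repeat([0, 1], 30)
X = centers[labels_true] + 0.5 * rng.normal(size=(60, 200))

# Low-rank step: project centered samples onto the top-k singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = U[:, :k] * s[:k]                     # samples in the reduced subspace

def kmeans2(Z, n_iter=50):
    """Minimal two-cluster Lloyd's algorithm; the second centre is seeded at the
    sample farthest from the first (a cheap k-means++-style initialization)."""
    centroids = np.stack([Z[0], Z[np.argmax(np.linalg.norm(Z - Z[0], axis=1))]])
    assign = np.zeros(len(Z), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in (0, 1):
            if np.any(assign == j):
                centroids[j] = Z[assign == j].mean(axis=0)
    return assign

subtype = kmeans2(Z)                     # candidate molecular subtypes
```

The actual method replaces the SVD with a convex low-rank regularized likelihood fit that can mix data types (continuous, count, binary), which a plain SVD cannot.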
Chiral spin liquids at finite temperature in a three-dimensional Kitaev model
NASA Astrophysics Data System (ADS)
Kato, Yasuyuki; Kamiya, Yoshitomo; Nasu, Joji; Motome, Yukitoshi
2017-11-01
Chiral spin liquids (CSLs) in three dimensions and thermal phase transitions to paramagnet are studied by unbiased Monte Carlo simulations. For an extension of the Kitaev model to a three-dimensional tricoordinate network dubbed the hypernonagon lattice, we derive low-energy effective models in two different anisotropic limits. We show that the effective interactions between the emergent Z2 degrees of freedom called fluxes are unfrustrated in one limit, while highly frustrated in the other. In both cases, we find a first-order phase transition to the CSL, where both time-reversal and parity symmetries are spontaneously broken. In the frustrated case, however, the CSL state is highly exotic—the flux configuration is subextensively degenerate while showing a directional order with broken C3 rotational symmetry. Our results provide two contrasting archetypes of CSLs in three dimensions, both of which allow approximation-free simulation for investigating the thermodynamics.
Modeling and visual simulation of Microalgae photobioreactor
NASA Astrophysics Data System (ADS)
Zhao, Ming; Hou, Dapeng; Hu, Dawei
Microalgae are nutritious autotrophic plants with high photosynthetic efficiency, widely distributed on land and in the sea. They are used extensively in medicine, food, aerospace, biotechnology, environmental protection and other fields. Photobioreactors are the principal equipment used to cultivate microalgae in large quantities at high density. In this paper, based on a mathematical model of microalgae grown under different light intensities, a three-dimensional visualization model was built and implemented in 3ds Max, Virtools and other three-dimensional software. As photosynthetic organisms, microalgae efficiently produce oxygen and absorb carbon dioxide; the goal of the visual simulation is to display these changes and their impact on oxygen and carbon dioxide intuitively. Different temperatures and light intensities were selected to control the photobioreactor, and the dynamic changes of microalgal biomass, oxygen and carbon dioxide were observed, with the aim of providing visualization support for microalgae and photobioreactor research.
Three-dimensional drift kinetic response of high-β plasmas in the DIII-D tokamak.
Wang, Z R; Lanctot, M J; Liu, Y Q; Park, J-K; Menard, J E
2015-04-10
A quantitative interpretation of the experimentally measured high-pressure plasma response to externally applied three-dimensional (3D) magnetic field perturbations, across the no-wall Troyon β limit, is achieved. The self-consistent inclusion of the drift kinetic effects in magnetohydrodynamic (MHD) modeling [Y. Q. Liu et al., Phys. Plasmas 15, 112503 (2008)] successfully resolves an outstanding issue of the ideal MHD model, which significantly overpredicts the plasma-induced field amplification near the no-wall limit, as compared to experiments. The model leads to quantitative agreement not only for the measured field amplitude and toroidal phase but also for the measured internal 3D displacement of the plasma. The results can be important to the prediction of the reliable plasma behavior in advanced fusion devices, such as ITER [K. Ikeda, Nucl. Fusion 47, S1 (2007)].
Modeling the curing process of thick-section autoclave cured composites
NASA Technical Reports Server (NTRS)
Loos, A. C.; Dara, P. H.
1985-01-01
Temperature gradients are significant during cure of large-area, thick-section composites. Such temperature gradients result in nonuniformly cured parts with high void contents, poor ply compaction, and variations in the fiber/resin distribution. A model was developed to determine the temperature distribution in thick-section autoclave-cured composites. Using the model, along with temperature measurements obtained from the thick-section composites, the effects of various processing parameters on the thermal response of the composites were examined. A one-dimensional heat transfer model was constructed for the composite-tool assembly. The governing differential equations and associated boundary conditions describing one-dimensional unsteady heat conduction in the composite, tool plate, and pressure plate are given. Solution of the thermal model was obtained using an implicit finite difference technique.
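An implicit (backward-Euler) finite difference scheme of the kind mentioned can be sketched as follows. This is a generic illustration, not the paper's composite-tool model: a single 1D slab with fixed boundary temperatures, and the material constants and grid are arbitrary assumptions:

```python
import numpy as np

def heat_implicit(T0, alpha, dx, dt, n_steps, T_left, T_right):
    """Backward-Euler (implicit) finite differences for dT/dt = alpha * d2T/dx2
    with fixed (Dirichlet) temperatures at the two boundaries."""
    n = len(T0)
    r = alpha * dt / dx**2
    A = np.eye(n)
    for i in range(1, n - 1):              # interior rows of the tridiagonal system
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
    T = np.asarray(T0, dtype=float).copy()
    for _ in range(n_steps):
        b = T.copy()
        b[0], b[-1] = T_left, T_right      # boundary rows enforce T = T_boundary
        T = np.linalg.solve(A, b)
    return T

# Illustrative numbers only: a 0.2 m slab starting at 20 C in a 180 C autoclave.
T = heat_implicit(np.full(21, 20.0), alpha=1e-6, dx=0.01, dt=60.0,
                  n_steps=2000, T_left=180.0, T_right=180.0)
```

With these invented constants the slab simply equilibrates to the autoclave temperature; measured conductivity, density, and heat capacity (alpha = k / (rho * c)), plus the exothermic cure term, would be needed to reproduce the gradients the paper studies.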
Numerical study of low-frequency discharge oscillations in a 5 kW Hall thruster
NASA Astrophysics Data System (ADS)
Le, YANG; Tianping, ZHANG; Juanjuan, CHEN; Yanhui, JIA
2018-07-01
A two-dimensional particle-in-cell plasma model is built in the R–Z plane to investigate low-frequency plasma oscillations in the discharge channel of a 5 kW LHT-140 Hall thruster. In addition to the elastic, excitation, and ionization collisions between neutral atoms and electrons, Coulomb collisions between electrons and electrons and between electrons and ions are analyzed. The sheath characteristic distortion is also corrected. Simulation results indicate that the model reproduces the low-frequency oscillation with high accuracy. The oscillations of the discharge current and ion density produced by the model are consistent with existing conclusions, and the model predicts a frequency consistent with that calculated by the zero-dimensional theoretical model.
NASA Astrophysics Data System (ADS)
Akinpelu, Oluwatosin Caleb
The growing need for better definition of flow units and depositional heterogeneities in petroleum reservoirs and aquifers has stimulated a renewed interest in outcrop studies as reservoir analogues in the last two decades. Despite this surge in interest, outcrop studies remain largely two-dimensional, a major limitation to the direct application of outcrop knowledge to the three-dimensional heterogeneous world of subsurface reservoirs. Behind-outcrop Ground Penetrating Radar (GPR) imaging provides high-resolution geophysical data, which, when combined with two-dimensional architectural outcrop observation, becomes a powerful interpretation tool. Due to the high-resolution, non-destructive and non-invasive nature of the GPR signal, as well as its reflection-amplitude sensitivity to shaly lithologies, three-dimensional outcrop studies combining two-dimensional architectural element data and behind-outcrop GPR imaging hold significant promise, with the potential to revolutionize outcrop studies the way seismic imaging changed basin analysis. Earlier attempts at GPR imaging on ancient clastic deposits were fraught with difficulties resulting from inappropriate field techniques and subsequent poorly informed data processing steps. This project documents advances in GPR field methodology, recommends appropriate data collection and processing procedures, and validates the value of integrating outcrop-based architectural-element mapping with GPR imaging to obtain three-dimensional architectural data from outcrops. Case studies from a variety of clastic deposits: Whirlpool Formation (Niagara Escarpment), Navajo Sandstone (Moab, Utah), Dunvegan Formation (Pink Mountain, British Columbia), Chinle Formation (Southern Utah) and St.
Mary River Formation (Alberta) demonstrate the usefulness of this approach for better interpretation of outcrop-scale ancient depositional processes and ultimately as a tool for refining existing facies models, as well as a predictive tool for subsurface reservoir modelling. While this approach is quite promising for detailed three-dimensional outcrop studies, it is not a panacea; thick overburden, poor antenna-ground coupling in the rough terrain typical of outcrops, low penetration, and rapid signal attenuation in mudstone and diagenetic clay-rich deposits often limit the prospects of this novel technique.
NASA Astrophysics Data System (ADS)
Korsholm, Ulrik; Petersen, Claus; Hansen Sass, Bent; Woetman, Niels; Getreuer Jensen, David; Olsen, Bjarke Tobias; Gill, Rasphal; Vedel, Henrik
2014-05-01
The DMI nowcasting system has been running in a pre-operational state for the past year. The system consists of hourly simulations with the High Resolution Limited Area weather model combined with surface and three-dimensional variational assimilation at each restart and nudging of satellite cloud products and radar precipitation. Nudging of a two-dimensional radar reflectivity CAPPI product is achieved using a new method where low level horizontal divergence is nudged towards pseudo observations. Pseudo observations are calculated based on an assumed relation between divergence and precipitation rate and the strength of the nudging is proportional to the offset between observed and modelled precipitation leading to increased moisture convergence below cloud base if there is an under-production of precipitation relative to the CAPPI product. If the model over-predicts precipitation, the low level moisture source is reduced, and in-cloud moisture is nudged towards environmental values. In this talk results will be discussed based on calculation of the fractions skill score in cases with heavy precipitation over Denmark. Furthermore, results from simulations combining reflectivity nudging and extrapolation of reflectivity will be shown. Results indicate that the new method leads to fast adjustment of the dynamical state of the model to facilitate precipitation release when the model precipitation intensity is too low. Removal of precipitation is also shown to be of importance and strong improvements were found in the position of the precipitation systems. Bias is reduced for low and extreme precipitation rates.
NASA Astrophysics Data System (ADS)
Fast, J. D.; Osteen, B. L.
An important aspect of the U.S. Department of Energy's Atmospheric Studies in Complex Terrain (ASCOT) program is the development and evaluation of numerical models that predict transport and diffusion of pollutants in complex terrain. Operational mesoscale modeling of the transport of pollutants in complex terrain will become increasingly practical as computational costs decrease and additional data from high-resolution remote sensing instrumentation networks become available during the 1990s. Four-dimensional data assimilation (4DDA) techniques have recently received a great deal of attention, not only to improve the initial conditions of mesoscale forecast models, but also to create high-quality four-dimensional mesoscale analysis fields that can be used as input to air-quality models. In this study, a four-dimensional data assimilation technique based on Newtonian relaxation is incorporated into the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) and evaluated using data taken from one experiment of the 1991 ASCOT field study along the front range of the Rockies in Colorado. The main objective of this study is to compare the observed surface concentrations with those predicted by a Lagrangian particle dispersion model and to demonstrate the effect of data assimilation on the simulated plume. In contrast to previous studies in which the smallest horizontal grid spacing was 10 km (Stauffer and Seaman, 1991) and 8 km (Yamada and Hermi, 1991), data assimilation is applied in this study to domains with a horizontal grid spacing as small as 1 km.
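Newtonian relaxation itself is compact: an extra term G·(obs − x) is added to the model tendency, continuously pulling the state toward observations. A minimal scalar sketch (toy dynamics, invented numbers, not RAMS) is:

```python
import numpy as np

def integrate(x0, tendency, obs, G, dt, n_steps):
    """Forward-Euler integration of dx/dt = f(x) + G*(obs - x).
    G = 0 is the free-running model; G > 0 relaxes the state toward obs."""
    x = x0
    for _ in range(n_steps):
        x = x + dt * (tendency(x) + G * (obs - x))
    return x

tendency = lambda x: -0.5 * x    # toy model whose attractor (x = 0) is biased
obs = 1.0                        # the "observation" says x should be near 1
free = integrate(2.0, tendency, obs, G=0.0, dt=0.01, n_steps=2000)
nudged = integrate(2.0, tendency, obs, G=1.0, dt=0.01, n_steps=2000)
```

The nudged run settles between the model attractor and the observation (here at 2/3); the relaxation coefficient G sets the trade-off between trusting the model dynamics and trusting the data.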
The semantic representation of prejudice and stereotypes.
Bhatia, Sudeep
2017-07-01
We use a theory of semantic representation to study prejudice and stereotyping. Particularly, we consider large datasets of newspaper articles published in the United States, and apply latent semantic analysis (LSA), a prominent model of human semantic memory, to these datasets to learn representations for common male and female, White, African American, and Latino names. LSA performs a singular value decomposition on word distribution statistics in order to recover word vector representations, and we find that our recovered representations display the types of biases observed in human participants using tasks such as the implicit association test. Importantly, these biases are strongest for vector representations with moderate dimensionality, and weaken or disappear for representations with very high or very low dimensionality. Moderate dimensional LSA models are also the best at learning race, ethnicity, and gender-based categories, suggesting that social category knowledge, acquired through dimensionality reduction on word distribution statistics, can facilitate prejudiced and stereotyped associations. Copyright © 2017 Elsevier B.V. All rights reserved.
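The LSA pipeline described (SVD of word distribution statistics, then similarity in the reduced space) can be sketched on a toy term-document matrix. The counts, words, and dimensionality below are invented purely for illustration and carry no empirical claim about the paper's newspaper corpora:

```python
import numpy as np

# Invented term-document counts (rows = words, columns = documents).
words = ["doctor", "nurse", "he", "she", "engine"]
counts = np.array([
    [4, 0, 2, 1],   # doctor
    [0, 3, 1, 2],   # nurse
    [5, 1, 2, 0],   # he
    [1, 4, 0, 3],   # she
    [0, 0, 3, 0],   # engine
], dtype=float)

# LSA: truncated SVD; each word's vector is its row of U_k scaled by s_k.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                                # retained dimensionality
vecs = U[:, :k] * s[:k]
idx = {w: i for i, w in enumerate(words)}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# IAT-style pairing strengths read off as cosines in the reduced space.
doctor_he = cosine(vecs[idx["doctor"]], vecs[idx["he"]])
doctor_she = cosine(vecs[idx["doctor"]], vecs[idx["she"]])
```

Varying k in such a pipeline is exactly the dimensionality manipulation the paper investigates: too few or too many retained dimensions wash out or fragment the co-occurrence structure that carries the biased associations.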
NASA Astrophysics Data System (ADS)
Rosner, Helge
2011-03-01
A microscopic understanding of the structure-property relation in crystalline materials is a main goal of modern solid state chemistry and physics. Due to their peculiar magnetism, low-dimensional spin-1/2 systems are often highly sensitive to structural details. Seemingly unimportant structural details can be crucial for the magnetic ground state of a compound, especially in the case of competing interactions, frustration and near-degeneracy. Here, we show for selected complex Cu2+ systems that a first-principles-based approach can reliably provide the correct magnetic model, especially in cases where the interpretation of experimental data meets serious difficulties or fails. We demonstrate that the magnetism of low-dimensional insulators crucially depends on the magnetically active orbitals, which are determined by details of the ligand field of the magnetic cation. Our theoretical results are in very good agreement with thermodynamic and spectroscopic data and provide deep microscopic insight into topical low-dimensional magnets.
Estimating Independent Locally Shifted Random Utility Models for Ranking Data
ERIC Educational Resources Information Center
Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans
2011-01-01
We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…
NASA Technical Reports Server (NTRS)
Ivanov, B. A.
1986-01-01
Main concepts and theoretical models used for studying the mechanics of cratering are discussed. Numerical two-dimensional calculations are made of explosions near a surface and of high-speed impacts. Models are given for the motion of a medium during cratering. Data from laboratory modeling are given. The effect of gravitational force and the scales of cratering phenomena are analyzed.
CLICK: The new USGS center for LIDAR information coordination and knowledge
Stoker, Jason M.; Greenlee, Susan K.; Gesch, Dean B.; Menig, Jordan C.
2006-01-01
Elevation data is rapidly becoming an important tool for the visualization and analysis of geographic information. The creation and display of three-dimensional models representing bare earth, vegetation, and structures have become major requirements for geographic research in the past few years. Light Detection and Ranging (lidar) has been increasingly accepted as an effective and accurate technology for acquiring high-resolution elevation data for bare earth, vegetation, and structures. Lidar is an active remote sensing system that records the distance, or range, of a laser fired from an airborne or space borne platform such as an airplane, helicopter or satellite to objects or features on the Earth’s surface. By converting lidar data into bare ground topography and vegetation or structural morphologic information, extremely accurate, high-resolution elevation models can be derived to visualize and quantitatively represent scenes in three dimensions. In addition to high-resolution digital elevation models (Evans et al., 2001), other lidar-derived products include quantitative estimates of vegetative features such as canopy height, canopy closure, and biomass (Lefsky et al., 2002), and models of urban areas such as building footprints and three-dimensional city models (Maas, 2001).
NASA Astrophysics Data System (ADS)
Tjiputra, Jerry F.; Polzin, Dierk; Winguth, Arne M. E.
2007-03-01
An adjoint method is applied to a three-dimensional global ocean biogeochemical cycle model to optimize the ecosystem parameters on the basis of SeaWiFS surface chlorophyll observation. We showed with identical twin experiments that the model simulated chlorophyll concentration is sensitive to perturbation of phytoplankton and zooplankton exudation, herbivore egestion as fecal pellets, zooplankton grazing, and the assimilation efficiency parameters. The assimilation of SeaWiFS chlorophyll data significantly improved the prediction of chlorophyll concentration, especially in the high-latitude regions. Experiments that considered regional variations of parameters yielded a high seasonal variance of ecosystem parameters in the high latitudes, but a low variance in the tropical regions. These experiments indicate that the adjoint model is, despite the many uncertainties, generally capable of optimizing sensitive parameters and carbon fluxes in the euphotic zone. The best-fit regional parameters predict a global net primary production of 36 Pg C yr-1, which lies within the range suggested by Antoine et al. (1996). Additional constraints from nutrient data in the World Ocean Atlas further reduced the model-data misfit, showing that assimilation with extensive data sets is necessary.
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION
Allen, Genevera I.; Tibshirani, Robert
2015-01-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823
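As a much-simplified illustration of exploiting both row and column structure for imputation, the sketch below iteratively fits only the additive row and column means, omitting the covariance matrices and penalties that define the actual transposable regularized covariance models; the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 20
row_mu = rng.normal(size=(n, 1))          # row ("gene") effects
col_mu = rng.normal(size=(1, p))          # column ("array") effects
X = row_mu + col_mu + 0.1 * rng.normal(size=(n, p))
mask = rng.random((n, p)) < 0.2           # True = entry is missing
X_obs = np.where(mask, np.nan, X)

def impute_additive(X_obs, n_iter=100):
    """Alternately re-estimate the grand, row, and column means from the
    current completed matrix, then refill the missing entries."""
    X_hat = np.where(np.isnan(X_obs), np.nanmean(X_obs), X_obs)
    for _ in range(n_iter):
        mu = X_hat.mean()
        r = X_hat.mean(axis=1, keepdims=True) - mu
        c = X_hat.mean(axis=0, keepdims=True) - mu
        X_hat = np.where(np.isnan(X_obs), mu + r + c, X_obs)
    return X_hat

X_hat = impute_additive(X_obs)
rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
naive = np.sqrt(np.mean((np.nanmean(X_obs) - X[mask]) ** 2))
```

Using both margins already beats filling with the grand mean here; the paper's EM algorithms go further by also borrowing strength through regularized row and column covariances.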
Signal decomposition for surrogate modeling of a constrained ultrasonic design space
NASA Astrophysics Data System (ADS)
Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.
2018-04-01
The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
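A one-atom version of the chirplet idea can be sketched directly: build Gaussian-windowed linear-chirp atoms over a small parameter grid and keep the best-matching one, so the signal is summarized by a few chirplet parameters instead of raw samples. The signal, grid, and fixed window below are illustrative assumptions, not the authors' ultrasonic data:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)
true_f0, true_rate = 40.0, 25.0          # invented chirp parameters
signal = np.exp(-((t - 0.5) / 0.15) ** 2) * np.cos(
    2.0 * np.pi * (true_f0 * t + 0.5 * true_rate * t**2))

def atom(f0, rate):
    """Unit-norm Gaussian-windowed linear chirp (window centre/width fixed here)."""
    a = np.exp(-((t - 0.5) / 0.15) ** 2) * np.cos(
        2.0 * np.pi * (f0 * t + 0.5 * rate * t**2))
    return a / np.linalg.norm(a)

# One matching-pursuit-style step over a small grid of chirplet parameters.
grid_f0 = np.arange(30.0, 51.0, 5.0)     # 30, 35, ..., 50
grid_rate = np.arange(0.0, 38.0, 12.5)   # 0, 12.5, 25, 37.5
scores = {(f0, rate): abs(signal @ atom(f0, rate))
          for f0 in grid_f0 for rate in grid_rate}
best_f0, best_rate = max(scores, key=scores.get)
```

The waveform is then described by (best_f0, best_rate) and a coefficient rather than 500 samples, which is what makes interpolation in the chirplet parameter space feasible for a surrogate model.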
A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)
NASA Astrophysics Data System (ADS)
Zhang, H.; Tian, X.
2017-12-01
The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (Β) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is used widely when sequentially correcting errors from large to small scales. However, introduction of the multigrid technique into four-dimensional variational data assimilation is not easy, due to its strong dependence on the adjoint model, which has extremely high computational costs in data coding, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, which is an advanced four-dimensional ensemble-variational method that can be applied without invoking the adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, with doubling of this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar, but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.
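The coarse-to-fine principle behind the multigrid scheme can be illustrated in one dimension: correct the analysis first with a few coarse cells (large scales), then with progressively finer cells. This toy least-squares sketch is not MG-NLS-4DVar itself (no ensemble, no background covariance, no time dimension), just the sequential scale correction on invented data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = np.linspace(0.0, 1.0, n)
truth = np.sin(2 * np.pi * x) + 0.3 * np.sin(8 * np.pi * x)
obs = truth + 0.05 * rng.normal(size=n)   # noisy observations of the field

def coarse_correction(residual, n_cells):
    """Least-squares piecewise-constant correction: on each block the optimal
    constant is simply the block mean of the current residual."""
    correction = np.zeros_like(residual)
    for block in np.array_split(np.arange(len(residual)), n_cells):
        correction[block] = residual[block].mean()
    return correction

analysis = np.zeros(n)
errors = []
for n_cells in (4, 16, 64):               # coarse grid first, then finer grids
    analysis += coarse_correction(obs - analysis, n_cells)
    errors.append(np.sqrt(np.mean((analysis - truth) ** 2)))
```

Each refinement removes a finer band of residual error, mirroring the scheme's idea of sequentially correcting errors from large to small scales by doubling the number of grid points per level.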
Mathew, Boby; Léon, Jens; Sannemann, Wiebke; Sillanpää, Mikko J.
2018-01-01
Gene-by-gene interactions, also known as epistasis, regulate many complex traits in different species. With the availability of low-cost genotyping it is now possible to study epistasis on a genome-wide scale. However, identifying genome-wide epistasis is a high-dimensional multiple regression problem and needs the application of dimensionality reduction techniques. Flowering Time (FT) in crops is a complex trait that is known to be influenced by many interacting genes and pathways in various crops. In this study, we successfully apply Sure Independence Screening (SIS) for dimensionality reduction to identify two-way and three-way epistasis for the FT trait in a Multiparent Advanced Generation Inter-Cross (MAGIC) barley population using the Bayesian multilocus model. The MAGIC barley population was generated from intercrossing among eight parental lines and thus, offered greater genetic diversity to detect higher-order epistatic interactions. Our results suggest that SIS is an efficient dimensionality reduction approach to detect high-order interactions in a Bayesian multilocus model. We also observe that many of our findings (genomic regions with main or higher-order epistatic effects) overlap with known candidate genes that have been already reported in barley and closely related species for the FT trait. PMID:29254994
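Marginal screening of the SIS type is simple to sketch: rank each marker by its absolute marginal correlation with the trait and retain only the top d before any multilocus modeling. The synthetic genotype-like matrix and effect sizes below are assumptions for illustration, not the MAGIC barley data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 5000                          # far more markers than samples
X = rng.normal(size=(n, p))               # synthetic marker matrix
beta = np.zeros(p)
beta[[10, 200, 3000]] = [2.0, -1.5, 1.0]  # three truly associated markers
y = X @ beta + rng.normal(size=n)         # synthetic trait

def sis(X, y, d):
    """Sure Independence Screening: keep the d markers with the largest
    absolute marginal correlation with the response."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = y - y.mean()
    score = np.abs(Xc.T @ yc)
    return np.sort(np.argsort(score)[-d:])

kept = sis(X, y, d=50)                    # candidate set for the multilocus model
```

Screening for two-way or three-way epistasis works the same way on product features of marker pairs or triples, which is why dimensionality reduction of this kind is essential before fitting the Bayesian multilocus model.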
Comparisons between thermodynamic and one-dimensional combustion models of spark-ignition engines
NASA Technical Reports Server (NTRS)
Ramos, J. I.
1986-01-01
Results from a one-dimensional combustion model employing a constant eddy diffusivity and a one-step chemical reaction are compared with those of one-zone and two-zone thermodynamic models to study the flame propagation in a spark-ignition engine. One-dimensional model predictions are found to be very sensitive to the eddy diffusivity and reaction rate data. The average mixing temperature found using the one-zone thermodynamic model is higher than those of the two-zone and one-dimensional models during the compression stroke, and that of the one-dimensional model is higher than those predicted by both thermodynamic models during the expansion stroke. The one-dimensional model is shown to predict an accelerating flame even when the front approaches the cold cylinder wall.
On the explicit construction of Parisi landscapes in finite dimensional Euclidean spaces
NASA Astrophysics Data System (ADS)
Fyodorov, Y. V.; Bouchaud, J.-P.
2007-12-01
An N-dimensional Gaussian landscape with multiscale translation-invariant logarithmic correlations has been constructed, and the statistical mechanics of a single particle in this environment has been investigated. In the high-dimensional limit N → ∞, the free energy of the system in the thermodynamic limit coincides with the most general version of Derrida’s generalized random energy model. The low-temperature behavior depends essentially on the spectrum of length scales involved in the construction of the landscape. The construction is argued to be valid in any finite spatial dimension N ≥ 1.
Advantages and Challenges of 10-Gbps Transmission on High-Density Interconnect Boards
NASA Astrophysics Data System (ADS)
Yee, Chang Fei; Jambek, Asral Bahari; Al-Hadi, Azremi Abdullah
2016-06-01
This paper provides a brief introduction to high-density interconnect (HDI) technology and its implementation on printed circuit boards (PCBs). The advantages and challenges of implementing 10-Gbps signal transmission on high-density interconnect boards are discussed in detail. The advantages (e.g., smaller via dimension and via stub removal) and challenges (e.g., crosstalk due to smaller interpair separation) of HDI are studied by analyzing the S-parameter, time-domain reflectometry (TDR), and transmission-line eye diagrams obtained by three-dimensional electromagnetic modeling (3DEM) and two-dimensional electromagnetic modeling (2DEM) using Mentor Graphics HyperLynx and Keysight Advanced Design System (ADS) electronic computer-aided design (ECAD) software. HDI outperforms conventional PCB technology in terms of signal integrity, but proper routing topology should be applied to overcome the challenge posed by crosstalk due to the tight spacing between traces.
Polarization-Resolved Study of High Harmonics from Bulk Semiconductors
NASA Astrophysics Data System (ADS)
Kaneshima, Keisuke; Shinohara, Yasushi; Takeuchi, Kengo; Ishii, Nobuhisa; Imasaka, Kotaro; Kaji, Tomohiro; Ashihara, Satoshi; Ishikawa, Kenichi L.; Itatani, Jiro
2018-06-01
The polarization property of high harmonics from gallium selenide is investigated using linearly polarized midinfrared laser pulses. At high electric field, a perpendicular polarization component of the odd harmonics emerges that is absent at low electric field and cannot be explained by perturbative nonlinear optics. A two-dimensional single-band model is developed to show that the anisotropic curvature of an energy band of the solid, which is pronounced in the outer part of the Brillouin zone, induces the generation of the perpendicular odd harmonics. This model is validated by three-dimensional quantum mechanical simulations, which reproduce the orientation dependence of the odd-order harmonics. The quantum mechanical simulations also reveal that the odd- and even-order harmonics are produced predominantly by the intraband current and the interband polarization, respectively. These experimental and theoretical demonstrations clearly show a strong link between the band structure of a solid and the polarization property of the odd-order harmonics.
Numerical analysis of real gas MHD flow on two-dimensional self-field MPD thrusters
NASA Astrophysics Data System (ADS)
Xisto, Carlos M.; Páscoa, José C.; Oliveira, Paulo J.
2015-07-01
A self-field magnetoplasmadynamic (MPD) thruster is a low-thrust electric propulsion space system that uses magnetohydrodynamic (MHD) principles to accelerate a plasma flow to high exhaust velocities. It can produce a high specific impulse, making it suitable for long-duration interplanetary space missions. In this paper we present numerical results for a two-dimensional self-field MPD thruster, obtained with a new code being developed at C-MAST (Centre for Mechanical and Aerospace Technologies). The numerical model is based on the macroscopic MHD equations for compressible and electrically resistive flow and is able to predict the two most important thrust mechanisms associated with this kind of propulsion system, namely the thermal thrust and the electromagnetic thrust. Moreover, because of the very high temperatures that can occur during the operation of the MPD thruster, it also includes a real-gas model for argon.
A two-dimensional modeling of the warm-up phase of a high-pressure mercury discharge lamp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Araoud, Z.; Ben Ahmed, R.; Ben Hamida, M. B.
2010-06-15
The main objective of this work is to provide a better understanding of the warm-up phase of high-intensity discharge lamps. As an example of application, we chose the high-pressure mercury lamp. Based on a two-dimensional fluid model, parameters such as the electric current and the length and diameter of the burner are modified, and the effect of convective transport is studied. This allows us to obtain a thorough understanding of the physics of these lamps in their transitory phase. The simulation of the warm-up phase is a must for proper prediction of lamp behavior and can be conducted by solving the energy balance, momentum, and Laplace's equations for the plasma, within the frame of local thermodynamic equilibrium coupled with the energy balance of the wall.
Superconductivity from strong repulsive interactions in the two-dimensional Hubbard model
NASA Astrophysics Data System (ADS)
Sarasua, L. G.
2011-10-01
In this work, we study superconductivity in the strong coupling limit of the two-dimensional Hubbard model using a generalization of the Hubbard-I approximation. The results are compared with those obtained by Beenen and Edwards with the two-pole method of Roth, revealing a qualitative agreement between the two approaches. The effect of the hopping parameter t' between next-nearest neighbour sites on the critical temperature is considered. It is shown that the present approach reproduces the relation between t' and the maximum Tc in high temperature superconductors reported by Pavarini et al (2001 Phys. Rev. Lett. 87 047003).
Study of guided modes in three-dimensional composites
NASA Astrophysics Data System (ADS)
Baste, S.; Gerard, A.
The propagation of elastic waves in a three-dimensional carbon-carbon composite is modeled with a mixed variational method, using Bloch (Floquet) theory and the Hellinger-Reissner functional for two independent fields. The model of the equivalent homogeneous material exists only below a cut-off frequency of about 600 kHz. The existence below the cut-off frequency of two guided waves can account for the presence of a slow guided wave on either side of the cut-off frequency. Optical modes are generated at low frequencies and can attain high velocities (rapid guided modes of 15,000 m/sec).
Estimating average growth trajectories in shape-space using kernel smoothing.
Hutton, Tim J; Buxton, Bernard F; Hammond, Peter; Potts, Henry W W
2003-06-01
In this paper, we show how a dense surface point distribution model of the human face can be computed and demonstrate the usefulness of the high-dimensional shape-space for expressing the shape changes associated with growth and aging. We show how average growth trajectories for the human face can be computed in the absence of longitudinal data by using kernel smoothing across a population. A training set of three-dimensional surface scans of 199 male and 201 female subjects of between 0 and 50 years of age is used to build the model.
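The kernel-smoothing idea in this abstract, estimating an average growth trajectory from purely cross-sectional data by weighting subjects according to their distance in age, can be sketched as a Nadaraya-Watson estimator. The function name, bandwidth, and the toy one-dimensional "shape" below are illustrative assumptions, not the paper's dense surface model:

```python
import numpy as np

def kernel_average_trajectory(ages, shapes, query_ages, bandwidth=2.0):
    """Nadaraya-Watson kernel smoothing: estimate an average shape
    vector at each query age from a cross-sectional sample.
    ages: (n,) subject ages; shapes: (n, d) shape-space coordinates."""
    out = np.empty((len(query_ages), shapes.shape[1]))
    for k, t in enumerate(query_ages):
        w = np.exp(-0.5 * ((ages - t) / bandwidth) ** 2)  # Gaussian kernel
        out[k] = w @ shapes / w.sum()                     # weighted mean shape
    return out

# Toy data: a 1-D "shape" coordinate that drifts linearly with age plus noise
rng = np.random.default_rng(0)
ages = rng.uniform(0, 50, 400)
shapes = (0.1 * ages + rng.normal(0, 0.2, 400))[:, None]
traj = kernel_average_trajectory(ages, shapes, np.array([10.0, 25.0, 40.0]))
```

Each query age receives a weighted mean of all subjects' shape vectors, so no longitudinal follow-up of individual subjects is required, which is the point made in the abstract.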
Phase Transitions in a Model of Y-Molecules
NASA Astrophysics Data System (ADS)
Holz, Danielle; Ruth, Donovan; Toral, Raul; Gunton, James
Immunoglobulin is a Y-shaped molecule that functions as an antibody to neutralize pathogens. In special cases where there is a high concentration of immunoglobulin molecules, self-aggregation can occur and the molecules undergo phase transitions, preventing the molecules from carrying out their function. We used a simplified model of two-dimensional Y-molecules with three identical arms on a triangular lattice, simulated in the two-dimensional grand canonical ensemble. The molecules were permitted to be placed, removed, rotated, or moved on the lattice. Once phase coexistence was found, we used histogram reweighting and multicanonical sampling to calculate our phase diagram.
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities, which are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and to predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and on an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence, owing to the integrating nature of the system reduction.
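A minimal illustration of the community-based reduction described above (not the authors' code) is to build an interaction graph among point vortices, split it into communities, and collapse each community to a circulation-weighted centroid. Here a two-community spectral split via the Fiedler vector stands in for general community detection; the weighting and test configuration are assumptions:

```python
import numpy as np

def community_reduce(pos, gamma):
    """Group point vortices into two interaction communities via the
    Fiedler vector of the interaction graph, then collapse each
    community to a circulation-weighted centroid (toy two-community
    sketch of a network-based reduced-order model)."""
    # Edge weight ~ |mutually induced velocity| ~ |G_i G_j| / distance
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    A = np.abs(np.outer(gamma, gamma)) / np.where(d > 0, d, np.inf)
    L = np.diag(A.sum(1)) - A                 # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    labels = (vecs[:, 1] > 0).astype(int)     # sign of Fiedler vector
    cents = np.array([np.average(pos[labels == c], axis=0,
                                 weights=np.abs(gamma[labels == c]))
                      for c in (0, 1)])
    return labels, cents

# Two well-separated clusters of vortices with opposite-signed circulations
pos = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]], float)
gamma = np.array([1.0, 1.2, 0.8, -1.0, -1.1, -0.9])
labels, cents = community_reduce(pos, gamma)
```

The two centroid "super-vortices" in `cents` play the role of the community centroids into which the abstract says the interaction physics is distilled.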
A two-dimensional model of odd nitrogen in the thermosphere and mesosphere
NASA Technical Reports Server (NTRS)
Gerard, J. C.; Roble, R. G.; Rusch, D. W.
1980-01-01
Satellite measurements of the global nitric oxide distribution demonstrating the need for a two-dimensional model of odd nitrogen photochemistry and transport in the thermosphere and mesosphere are reviewed. The main characteristics of a new code solving the transport equation for N(4S), N(2D), and NO are given. This model extends from pole to pole between 75 and 275 km and responds to magnetic activity, the ultraviolet solar flux, and the neutral wind field. The effects of ionization and subsequent odd nitrogen production by high-latitude particle precipitation are also included. Preliminary results are illustrated for a magnetically quiet solar-minimum period with no neutral wind.
Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models
NASA Technical Reports Server (NTRS)
Marquette, Michele L.; Sognier, Marguerite A.
2013-01-01
An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.
NASA Astrophysics Data System (ADS)
Vichi, M.; Oddo, P.; Zavatarelli, M.; Coluccelli, A.; Coppini, G.; Celio, M.; Fonda Umani, S.; Pinardi, N.
2003-01-01
In this paper we show results from numerical simulations carried out with a complex biogeochemical fluxes model coupled with a one-dimensional high-resolution hydrodynamical model and implemented at three locations of the northern Adriatic shelf. One location is directly affected by the Po River influence, one has more open-sea characteristics, and one, in the Gulf of Trieste, shows intermediate behavior; emphasis is put on the comparison with observations and on the functioning of the northern Adriatic ecosystem in the three areas. The work has been performed in a climatological context and is to be considered preliminary to the development of three-dimensional numerical simulations. Biogeochemical model parameterizations have been improved with a detailed description of bacterial substrate utilization associated with the quality of the dissolved organic matter (DOM), in order to improve the model's capability to capture the observed DOM dynamics in the basin. The coupled model has been calibrated and validated at the three locations by means of climatological data sets. Results show satisfactory model behavior in simulating local seasonal dynamics within the limits of the available boundary conditions and the one-dimensional implementation. Comparisons with available measurements of primary and bacterial production and bacterial abundances have been performed at all locations. Model-simulated rates and bacterial dynamics are of the same order of magnitude as the observations and show a qualitatively correct time evolution. The importance of temperature as a factor controlling bacterial efficiency is investigated with sensitivity experiments on the model parameterizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Christopher A.
In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed-field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos in the data. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents, and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
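The correlation-dimension tool mentioned above can be sketched with the Grassberger-Procaccia correlation sum. This toy version (an illustrative stand-in, not the dissertation's analysis code) recovers a scaling exponent near 1 for points lying on a circle, a one-dimensional set embedded in two dimensions:

```python
import numpy as np

def correlation_sum(x, r):
    """Grassberger-Procaccia correlation sum C(r): fraction of point
    pairs of a (delay-embedded) trajectory closer than radius r."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)        # distinct pairs only
    return np.mean(d[iu] < r)

def correlation_dimension(x, r1, r2):
    """Finite-sample estimate of the correlation dimension: the slope
    of log C(r) versus log r between two radii in the scaling range."""
    c1, c2 = correlation_sum(x, r1), correlation_sum(x, r2)
    return np.log(c2 / c1) / np.log(r2 / r1)

# Points uniformly distributed on a circle (a 1-D attractor surrogate)
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x = np.c_[np.cos(t), np.sin(t)]
nu = correlation_dimension(x, 0.05, 0.2)     # scaling exponent near 1
```

For a noise-like, very high-dimensional signal (the experimental finding in the dissertation) the estimated exponent would instead keep growing with the embedding dimension rather than saturating at a small value.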
A model for near-wall dynamics in turbulent Rayleigh–Bénard convection
NASA Astrophysics Data System (ADS)
Theerthan, S. Ananda; Arakeri, Jaywant H.
1998-10-01
Experiments indicate that turbulent free convection over a horizontal surface (e.g. Rayleigh–Bénard convection) consists of essentially line plumes near the walls, at least for moderately high Rayleigh numbers. Based on this evidence, we propose here a two-dimensional model for near-wall dynamics in Rayleigh–Bénard convection and, in general, for convection over heated horizontal surfaces. The model proposes a periodic array of steady laminar two-dimensional plumes. A plume is fed on either side by boundary layers on the wall. The results from the model are obtained in two ways. One of the methods uses the similarity solution of Rotem & Classen (1969) for the boundary layer and the similarity solution of Fuji (1963) for the plume. We have derived expressions for the mean temperature and the temperature and velocity fluctuations near the wall. In the second approach, we compute the flow field in a two-dimensional rectangular open cavity. The number of plumes in the cavity depends on the length of the cavity. The plume spacing is determined from the critical length at which the number of plumes increases by one. The results for average plume spacing and the distribution of r.m.s. temperature and velocity fluctuations are shown to be in acceptable agreement with experimental results.
NASA Astrophysics Data System (ADS)
Falvo, Cyril
2018-02-01
The theory of the linear and non-linear infrared response of vibrational Holstein polarons in one-dimensional lattices is presented in order to identify the spectral signatures of self-trapping phenomena. Using a canonical transformation, the optical response is computed from the small-polaron point of view, which is valid in the anti-adiabatic limit. Two types of phonon baths are considered, optical phonons and acoustical phonons, and simple expressions are derived for the infrared response. It is shown that for the case of optical phonons, the linear response can directly probe the polaron density of states. The model is used to interpret the experimental spectrum of crystalline acetanilide in the C=O range. For the case of acoustical phonons, it is shown that two bound states can be observed in the two-dimensional infrared spectrum at low temperature. At high temperature, analysis of the time dependence of the two-dimensional infrared spectrum indicates that bath-mediated correlations slow down spectral diffusion. The model is used to interpret the experimental linear spectroscopy of model α-helix and β-sheet polypeptides. This work shows that the Davydov Hamiltonian cannot explain the observations in the NH stretching range.
A non-linear dimension reduction methodology for generating data-driven stochastic input models
NASA Astrophysics Data System (ADS)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
2008-06-01
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently.
We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
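The graph-based construction of the isometric mapping F described above is in the spirit of Isomap. The following minimal sketch (an illustrative stand-in, not the authors' implementation) combines k-nearest-neighbor geodesics with classical multidimensional scaling to unroll a one-dimensional manifold embedded in R^3; the curve and parameter choices are assumptions:

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal Isomap sketch: k-NN graph geodesics (Floyd-Warshall)
    followed by classical MDS, mapping samples of a manifold M in R^n
    to a low-dimensional set A."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    g = np.full((n, n), np.inf)
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]   # skip self
    for i in range(n):
        g[i, idx[i]] = d[i, idx[i]]
    g = np.minimum(g, g.T)                  # symmetrize the k-NN graph
    np.fill_diagonal(g, 0.0)
    for k in range(n):                      # Floyd-Warshall geodesics
        g = np.minimum(g, g[:, k:k + 1] + g[k:k + 1, :])
    J = np.eye(n) - 1.0 / n                 # centering matrix
    B = -0.5 * J @ (g ** 2) @ J             # classical MDS Gram matrix
    w, v = np.linalg.eigh(B)
    return v[:, -n_components:] * np.sqrt(np.maximum(w[-n_components:], 0))

# A 1-D spiral embedded in R^3: Isomap should "unroll" it to a line
t = np.linspace(0, 3 * np.pi, 120)
X = np.c_[t * np.cos(t), t * np.sin(t), 0.1 * t]
Y = isomap(X, n_neighbors=4, n_components=1)
```

The recovered coordinate varies monotonically along the curve parameter, which is the qualitative behavior one expects of an isometric low-dimensional representation.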
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Cheng-Hsien; Department of Water Resources and Environmental Engineering, Tamkang University, New Taipei City 25137, Taiwan; Low, Ying Min, E-mail: ceelowym@nus.edu.sg
2016-05-15
Sediment transport is fundamentally a two-phase phenomenon involving fluid and sediments; however, many existing numerical models are one-phase approaches, which are unable to capture the complex fluid-particle and inter-particle interactions. In the last decade, two-phase models have gained traction; however, there are still many limitations in these models. For example, several existing two-phase models are confined to one-dimensional problems; in addition, the existing two-dimensional models simulate only the region outside the sand bed. This paper develops a new three-dimensional two-phase model for simulating sediment transport in the sheet flow condition, incorporating recently published rheological characteristics of sediments. The enduring-contact, inertial, and fluid viscosity effects are considered in determining sediment pressure and stresses, enabling the model to be applicable to a wide range of particle Reynolds numbers. A k−ε turbulence model is adopted to compute the Reynolds stresses. In addition, a novel numerical scheme is proposed, thus avoiding numerical instability caused by high sediment concentration and allowing the sediment dynamics to be computed both within and outside the sand bed. The present model is applied to two classical problems, namely, sheet flow and scour under a pipeline, with favorable results. For sheet flow, the computed velocity is consistent with measured data reported in the literature. For pipeline scour, the computed scour rate beneath the pipeline agrees with previous experimental observations. However, the present model is unable to capture vortex shedding; consequently, the sediment deposition behind the pipeline is overestimated. Sensitivity analyses reveal that model parameters associated with turbulence have a strong influence on the computed results.
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (large eddy simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation, and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine-grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014), finding good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational cost is expected, relative to traditional existing solvers.
Continuum modeling of three-dimensional truss-like space structures
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Hefzy, M. S.
1978-01-01
A mathematical and computational analysis capability has been developed for calculating the effective mechanical properties of three-dimensional periodic truss-like structures. Two models are studied in detail. The first, called the octetruss model, is a three-dimensional extension of a two-dimensional model, and the second is a cubic model. Symmetry considerations are employed as a first step to show that the specific octetruss model has four independent constants and that the cubic model has two. The actual values of these constants are determined by averaging the contributions of each rod element to the overall structure stiffness. The individual rod member contribution to the overall stiffness is obtained by a three-dimensional coordinate transformation. The analysis shows that the effective three-dimensional elastic properties of both models are relatively close to each other.
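The rod-by-rod assembly via a three-dimensional coordinate transformation described above can be sketched for a single axial member: the scalar stiffness EA/L is rotated into global coordinates through the rod's direction cosines. This is the generic truss-element formula, not the paper's specific octetruss or cubic-model algebra:

```python
import numpy as np

def rod_stiffness(p1, p2, EA=1.0):
    """Global 6x6 stiffness of an axial rod element between nodes p1
    and p2, obtained by transforming the scalar axial stiffness EA/L
    through the rod's direction cosines (3-D coordinate transformation)."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    L = np.linalg.norm(v)
    c = v / L                                   # direction cosines
    kcc = (EA / L) * np.outer(c, c)             # 3x3 block c c^T
    return np.block([[kcc, -kcc], [-kcc, kcc]])

# A rod along the (1, 1, 1) body diagonal with EA = 3
k = rod_stiffness([0, 0, 0], [1, 1, 1], EA=3.0)
```

Summing such 6x6 contributions over all rod orientations in the repeating cell, then averaging, is what yields the four (octetruss) or two (cubic) independent effective constants mentioned in the abstract.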
Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko
2013-01-01
The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that decomposes the problem into several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate the parameters of Vohradský's model more effectively within sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
Unsupervised machine learning account of magnetic transitions in the Hubbard model
NASA Astrophysics Data System (ADS)
Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan
2018-01-01
We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
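As a toy stand-in for the dimensionality-reduction workflow above (linear PCA instead of an autoencoder or t-SNE, and synthetic rather than Monte Carlo spin configurations), the leading principal component of raw configurations already acts as a learned order parameter that tracks the magnetization; all parameter values here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_samples = 64, 200

# Toy "low-T" configurations: mostly aligned spins (ordered phase),
# each sample flipped into a random symmetry sector
low_T = np.where(rng.random((n_samples, n_sites)) < 0.95, 1, -1)
low_T *= rng.choice([-1, 1], size=(n_samples, 1))
# Toy "high-T" configurations: independent random spins (disordered)
high_T = rng.choice([-1, 1], size=(n_samples, n_sites))

X = np.vstack([low_T, high_T]).astype(float)
Xc = X - X.mean(0)
# PCA via SVD: the leading component serves as a learned order parameter
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]

m = X.mean(1)                     # magnetization of each configuration
r = np.corrcoef(score, m)[0, 1]   # score tracks magnetization
```

For the Ising case this linear reduction suffices; the abstract's point is that quantum fluctuations in the Hubbard model break such simple correspondences, motivating nonlinear methods like t-SNE.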
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiengarten, T.; Fichtner, H.; Kleimann, J.
2016-12-10
We extend a two-component model for the evolution of fluctuations in the solar wind plasma so that it is fully three-dimensional (3D) and also coupled self-consistently to the large-scale magnetohydrodynamic equations describing the background solar wind. The two classes of fluctuations considered are a high-frequency parallel-propagating wave-like piece and a low-frequency quasi-two-dimensional component. For both components, the nonlinear dynamics is dominated by quasi-perpendicular spectral cascades of energy. Driving of the fluctuations by, for example, velocity shear and pickup ions is included. Numerical solutions to the new model are obtained using the Cronos framework, and validated against previous simpler models. Comparing results from the new model with spacecraft measurements, we find improved agreement relative to earlier models that employ prescribed background solar wind fields. Finally, the new results for the wave-like and quasi-two-dimensional fluctuations are used to calculate ab initio diffusion mean free paths and drift length scales for the transport of cosmic rays in the turbulent solar wind.
3D printing of preclinical X-ray computed tomographic data sets.
Doney, Evan; Krumdick, Lauren A; Diener, Justin M; Wathen, Connor A; Chapman, Sarah E; Stamile, Brian; Scott, Jeremiah E; Ravosa, Matthew J; Van Avermaete, Tony; Leevy, W Matthew
2013-03-22
Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional, mold-injection methods to create models or parts have several limitations, the most important of which is a difficulty in making highly complex products in a timely, cost-effective manner.(1) However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models.(2) These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers who can utilize the technology to improve visualization proficiency. (3, 4) The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of pre-clinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students, or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages.
Huang, Zheng; Chen, Zhi
2013-10-01
This study describes the details of how to construct a three-dimensional (3D) finite element model of a maxillary first premolar tooth based on a micro-CT data acquisition technique and the MIMICS and ANSYS software packages. The tooth was scanned by micro-CT, from which 1295 slices were obtained; 648 slices were then selected for modeling. The 3D surface mesh models of enamel and dentin were created in MIMICS (STL file). The solid mesh model was constructed in ANSYS. After the material properties and boundary conditions were set, a loading analysis was performed to demonstrate the applicability of the resulting model. The first and third principal stresses were then evaluated. The results showed that the numbers of nodes and elements of the finite element model were 56,618 and 311,801, respectively. The geometric form of the model was highly consistent with that of the true tooth, with a deviation of -0.28% between them. The loading analysis revealed the typical stress patterns in the contour map. The maximum compressive stress existed at the contact points and the maximum tensile stress existed in the deep fissure between the two cusps. It is concluded that, by using micro-CT and highly integrated software, construction of a high-quality 3D finite element model will not be difficult for clinical researchers.
Morrissey, Daniel J.
1989-01-01
The highly permeable, unconfined, glacial-drift aquifers that occupy most New England river valleys constitute the principal source of drinking water for many of the communities that obtain part or all of their public water supply from ground water. Recent events have shown that these aquifers are highly susceptible to contamination that results from a number of sources, such as seepage from wastewater lagoons, leaking petroleum-product storage tanks, and road salting. To protect the quality of water pumped from supply wells in these aquifers, it is necessary to ensure that potentially harmful contaminants do not enter the ground in the area that contributes water to the well. A high degree of protection can be achieved through the application of appropriate land-use controls within the contributing area. However, the contributing areas for most supply wells are not known. This report describes the factors that affect the size and shape of contributing areas to public supply wells and evaluates several methods that may be used to delineate contributing areas of wells in glacial-drift, river-valley aquifers. Analytical, two-dimensional numerical, and three-dimensional numerical models were used to delineate contributing areas. These methods of analysis were compared by applying them to a hypothetical aquifer having the dimensions and geometry of a typical glacial-drift, river-valley aquifer. In the model analyses, factors that control the size and shape of a contributing area were varied over ranges of values common to glacial-drift aquifers in New England. 
The controlling factors include the rate of well discharge, rate of recharge to the aquifer from precipitation and from adjacent till and bedrock uplands, distance of a pumping well from a stream or other potential source of induced recharge, degree of hydraulic connection of the aquifer with a stream, horizontal hydraulic conductivity of the aquifer, ratio of horizontal to vertical hydraulic conductivity, and degree of well penetration. Analytical methods proved easiest to apply but gave results that are considered to be less accurate than those obtainable by means of numerical-model analysis. Numerical models have the capability to more closely reflect the variable geohydrologic conditions typical of glacial-drift valley aquifers. For average conditions in the hypothetical aquifer, the analytical method predicts a contributing area limited to the well side of the river because a constant-head boundary simulated by image wells is used in the analytical model. For typical glacial-drift, river-valley aquifers, this simulation is unrealistic because drawdowns, caused by a pumping well, and the contributing area of the well can extend beneath and beyond a river or stream. A wide range of hydrologic conditions was simulated by using the two-dimensional numerical model. The resulting contributing area for a well pumped at 1.0 million gallons per day--a common pumping rate--ranged from about 0.9 to 1.8 square miles. Model analyses also show that the contributing area of pumped wells may be expected to extend to the opposite side of the river and to include significant areas of till uplands adjacent to the aquifer on both sides of the valley. Simulations done with the three-dimensional model allow a full three-dimensional delineation of the zone of contribution for a pumped well. 
For the relatively thin (100 feet or less) unconfined aquifers considered in this analysis, the three-dimensional model showed that the zone of contribution extended throughout the entire saturated thickness of the aquifer; therefore, the two-dimensional simulations were considered adequate for delineating contributing areas in this particular hydrologic setting. For thicker aquifers, especially those having partially penetrating wells, three-dimensional models are preferable. Values for several of the factors that affect the size and shape of contributing recharge areas cannot be determined.
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method simultaneously projects the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).
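The cross-entropy idea above can be sketched in a stripped-down, SNE-style form: fixed Gaussian neighborhood probabilities are computed in the input space, and 2-D coordinates are adjusted by plain gradient descent on the cross-entropy. This is a toy illustration under simplifying assumptions (no NG codebooks, gradient descent instead of Newton-Raphson), not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian clusters in 10 dimensions
X = np.vstack([rng.normal(0.0, 1.0, (20, 10)),
               rng.normal(4.0, 1.0, (20, 10))])
n = len(X)

def neighbor_probs(Y, sigma=1.0):
    """Row-normalized Gaussian neighborhood probabilities between points."""
    d2 = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    P = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum(axis=1, keepdims=True)

P = neighbor_probs(X, sigma=2.0)    # fixed input-space probabilities
Y = rng.normal(0.0, 1e-2, (n, 2))   # low-dimensional embedding, small random start

for _ in range(300):
    Q = neighbor_probs(Y)           # output-space probabilities
    # Gradient of the cross-entropy -sum_ij p_ij log q_ij w.r.t. y_i is
    # proportional to sum_j (p_ij - q_ij + p_ji - q_ji) * (y_i - y_j)
    M = (P - Q) + (P - Q).T
    grad = (M[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
    Y -= 0.5 * grad                 # plain gradient descent step

# The two clusters should stay apart in the 2-D embedding
gap = np.linalg.norm(Y[:20].mean(0) - Y[20:].mean(0))
spread = max(Y[:20].std(), Y[20:].std())
```

Matching input- and output-space probabilities in this way is what makes the embedding preserve neighborhood structure rather than raw distances.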
NASA Astrophysics Data System (ADS)
Seleznev, R. K.
2017-02-01
In this paper, two-dimensional and quasi-one-dimensional models of a scramjet combustion chamber are described. A comparison of the results computed with the two-dimensional and quasi-one-dimensional codes, using the VAG experiment as an example, is presented.
NASA Technical Reports Server (NTRS)
Bade, W. L.; Yos, J. M.
1975-01-01
A computer program for calculating quasi-one-dimensional gas flow in axisymmetric and two-dimensional nozzles and rectangular channels is presented. Flow is assumed to start from a state of thermochemical equilibrium at a high temperature in an upstream reservoir. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. Electronic nonequilibrium effects can be included using a two-temperature model. An approximate laminar boundary layer calculation is given for the shear and heat flux on the nozzle wall. Boundary layer displacement effects on the inviscid flow are considered also. Chemical equilibrium and transport property calculations are provided by subroutines. The code contains precoded thermochemical, chemical kinetic, and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It provides calculations of the stagnation conditions on axisymmetric or two-dimensional models, and of the conditions on the flat surface of a blunt wedge. The primary purpose of the code is to describe the flow conditions and test conditions in electric arc heated wind tunnels.
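In the frozen-chemistry, calorically perfect gas limit, the quasi-one-dimensional nozzle flow that such a code solves reduces to the classical isentropic area-Mach relation A/A* = (1/M)[(2/(γ+1))(1 + (γ-1)/2 · M²)]^((γ+1)/(2(γ-1))). The sketch below inverts it numerically on either branch; it illustrates the underlying relation only and is not the NASA program itself:

```python
def area_ratio(M, gamma=1.4):
    """Isentropic quasi-1-D area-Mach relation A/A* for a calorically perfect gas."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def mach_from_area(ar, gamma=1.4, supersonic=True):
    """Invert A/A* for the Mach number by bisection on the chosen branch."""
    lo, hi = (1.0 + 1e-12, 50.0) if supersonic else (1e-6, 1.0 - 1e-12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # A/A* grows with M on the supersonic branch and shrinks on the subsonic one
        if (area_ratio(mid, gamma) < ar) == supersonic:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For an area ratio of 2, standard tables give M near 0.31 (subsonic)
# and M near 2.2 (supersonic)
M_sup = mach_from_area(2.0, supersonic=True)
M_sub = mach_from_area(2.0, supersonic=False)
```

The two roots for a given area ratio correspond to the subsonic converging section and the supersonic diverging section of a choked nozzle.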
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results, suitable interactive graphics are also an essential tool.
NASA Astrophysics Data System (ADS)
Feucht, D. W.; Sheehan, A. F.; Bedrosian, P.
2015-12-01
A recent magnetotelluric (MT) survey in central Colorado, USA, when interpreted alongside existing seismic tomography, reveals potential mechanisms of support for high topography both regionally and locally. Broadband and long period magnetotelluric data were collected at twenty-three sites along a 330 km E-W profile across the Southern Rocky Mountains and High Plains of central North America as part of the Deep RIFT Electrical Resistivity (DRIFTER) experiment. Remote-reference data processing yielded high quality MT data over a period range of 100 Hz to 10,000 seconds. A prominent feature of the regional geo-electric structure is the Denver Basin, which contains a thick package of highly conductive shales and porous sandstone aquifers. One-dimensional forward modeling was performed on stations within the Denver Basin to estimate depth to the base of this shallow conductor. Those estimates were then used to place a horizontal penalty cut in the model mesh of a regularized two-dimensional inversion. Two-dimensional modeling of the resistivity structure reveals two major anomalous regions in the lithosphere: 1) a high conductivity region in the crust under the tallest peaks of the Rocky Mountains and 2) a lateral step increase in lithospheric resistivity beneath the plains. The Rocky Mountain crustal anomaly coincides with low seismic wave speeds and enhanced heat flow and is thus interpreted as evidence of partial melt and/or high temperature fluids emplaced in the crust by tectonic activity along the Rio Grande Rift. The lateral variation in the mantle lithosphere, while co-located with a pronounced step increase in seismic velocity, appears to be a gradational boundary in resistivity across eastern Colorado and could indicate a small degree of compositional modification at the edge of the North American craton. 
These inferred conductivity mechanisms, namely crustal melt and modification of mantle lithosphere, likely contribute to high topography locally in the Rocky Mountains and regionally in the High Plains.
Zhang, Lingling; Huang, Xinyu; Qin, Changyong; Brinkman, Kyle; Gong, Yunhui; Wang, Siwei; Huang, Kevin
2013-08-21
Identification of the existence of pyrocarbonate ion C2O5(2-) in molten carbonates exposed to a CO2 atmosphere provides key support for a newly established bi-ionic transport model that explains the mechanisms of high CO2 permeation flux observed in mixed oxide-ion and carbonate-ion conducting (MOCC) membranes containing highly interconnected three dimensional ionic channels. Here we report the first Raman spectroscopic evidence of C2O5(2-) as an active species involved in the CO2-transport process of MOCC membranes exposed to a CO2 atmosphere. The two new broad peaks centered at 1317 cm(-1) and 1582 cm(-1) are identified as the characteristic frequencies of the C2O5(2-) species. The measured characteristic Raman frequencies of C2O5(2-) are in excellent agreement with the DFT-model consisting of six overlapping individual theoretical bands calculated from Li2C2O5 and Na2C2O5.
Modeling dam-break flows using finite volume method on unstructured grid
USDA-ARS?s Scientific Manuscript database
Two-dimensional shallow water models based on unstructured finite volume method and approximate Riemann solvers for computing the intercell fluxes have drawn growing attention because of their robustness, high adaptivity to complicated geometry and ability to simulate flows with mixed regimes and di...
Development and application of theoretical models for Rotating Detonation Engine flowfields
NASA Astrophysics Data System (ADS)
Fievisohn, Robert
As turbine and rocket engine technology matures, performance increases between successive generations of engine development are becoming smaller. One means of accomplishing significant gains in thermodynamic performance and power density is to use detonation-based heat release instead of deflagration. This work is focused on developing and applying theoretical models to aid in the design and understanding of Rotating Detonation Engines (RDEs). In an RDE, a detonation wave travels circumferentially along the bottom of an annular chamber where continuous injection of fresh reactants sustains the detonation wave. RDEs are currently being designed, tested, and studied as a viable option for developing a new generation of turbine and rocket engines that make use of detonation heat release. One of the main challenges in the development of RDEs is to understand the complex flowfield inside the annular chamber. While simplified models are desirable for obtaining timely performance estimates for design analysis, one-dimensional models may not be adequate as they do not provide flow structure information. In this work, a two-dimensional physics-based model is developed, which is capable of modeling the curved oblique shock wave, exit swirl, counter-flow, detonation inclination, and varying pressure along the inflow boundary. This is accomplished by using a combination of shock-expansion theory, Chapman-Jouguet detonation theory, the Method of Characteristics (MOC), and other compressible flow equations to create a shock-fitted numerical algorithm and generate an RDE flowfield. This novel approach provides a numerically efficient model that can provide performance estimates as well as details of the large-scale flow structures in seconds on a personal computer. Results from this model are validated against high-fidelity numerical simulations that may require a high-performance computing framework to provide similar performance estimates. 
This work provides a designer a new tool to conduct large-scale parametric studies to optimize a design space before conducting computationally-intensive, high-fidelity simulations that may be used to examine additional effects. The work presented in this thesis not only bridges the gap between simple one-dimensional models and high-fidelity full numerical simulations, but it also provides an effective tool for understanding and exploring RDE flow processes.
NASA Astrophysics Data System (ADS)
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has been a very active research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more challenging, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. There are many kinds of textural information that can be derived from VHR satellite images depending on the algorithm used. However, extraction and evaluation of textural information is generally a time-consuming process, especially for large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Both spectral and textural information were used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification.
The method, called HDMR, was recently proposed as an efficient tool to capture the input-output relationships of high-dimensional systems in many problems in science and engineering, and it is designed to improve the efficiency of deducing high-dimensional behavior. It is formed by a particular organization of low-dimensional component functions, in which each function represents the contribution of one or more input variables to the output.
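The first-order HDMR component functions coincide with the ANOVA (Sobol) decomposition, so the per-feature sensitivity idea can be sketched with a Monte Carlo first-order sensitivity estimate. The test function `f`, sample sizes, and estimator choice below are illustrative assumptions, not the authors' feature-selection code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical test function: the output depends strongly on x0, weakly on x1,
# and not at all on x2
def f(x):
    return 4.0 * x[:, 0] + 0.5 * np.sin(2.0 * np.pi * x[:, 1])

n, d = 20000, 3
A = rng.uniform(0.0, 1.0, (n, d))
B = rng.uniform(0.0, 1.0, (n, d))
fA, fB = f(A), f(B)
var_total = fA.var()

# First-order sensitivity S_i = Var[E(f | x_i)] / Var(f), estimated by the
# classic pick-and-freeze trick: correlate f(A) with f evaluated on samples
# that share only the i-th coordinate with A.
S = []
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]
    S.append(float((np.mean(fA * f(Ci)) - fA.mean() * fB.mean()) / var_total))
```

Ranking the inputs by S_i is exactly the kind of sensitivity ordering the abstract describes for screening useful versus redundant features.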
Experimental witness of genuine high-dimensional entanglement
NASA Astrophysics Data System (ADS)
Guo, Yu; Hu, Xiao-Min; Liu, Bi-Heng; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can
2018-06-01
Growing interest has been devoted to exploring high-dimensional quantum systems, owing to their promising perspectives in certain quantum tasks. How to characterize a high-dimensional entanglement structure is one of the basic questions in taking full advantage of it. However, it is not easy to capture the key feature of high-dimensional entanglement, because the correlations derived from high-dimensional entangled states can possibly be simulated with copies of lower-dimensional systems. Here, we follow the work of Kraft et al. [Phys. Rev. Lett. 120, 060502 (2018), 10.1103/PhysRevLett.120.060502] and present the experimental creation and detection, by a normalized witness operation, of genuine high-dimensional entanglement, which cannot be decomposed into lower-dimensional Hilbert spaces and thus forms entanglement structures that exist only in high-dimensional systems. Our experiment leads to further exploration of high-dimensional quantum systems.
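The dimensionality notion at stake, that some states cannot be decomposed into lower-dimensional Hilbert spaces, can be illustrated numerically through the Schmidt rank of a bipartite pure state. This is a toy check of the concept, not the experimental witness procedure:

```python
import numpy as np

def schmidt_rank(C, tol=1e-10):
    """Schmidt rank of a bipartite pure state |psi> = sum_ij C[i,j] |i>|j>,
    i.e. the number of nonzero singular values of its coefficient matrix."""
    return int(np.sum(np.linalg.svd(C, compute_uv=False) > tol))

d = 3
# Maximally entangled two-qutrit state (|00> + |11> + |22>) / sqrt(3)
ghz_like = np.eye(d) / np.sqrt(d)
# Product state |0>|1> for comparison
product = np.outer([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])

# Schmidt rank 3 certifies that local dimension 2 (qubit pairs) cannot carry the state
r_ent, r_prod = schmidt_rank(ghz_like), schmidt_rank(product)
```

The Schmidt rank lower-bounds the local dimension needed to prepare the state, which is the single-pair analogue of the genuine-dimensionality criterion discussed above.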
A comparative study of two prediction models for brain tumor progression
NASA Astrophysics Data System (ADS)
Zhou, Deqi; Tran, Loc; Wang, Jihong; Li, Jiang
2015-03-01
MR diffusion tensor imaging (DTI) technique together with traditional T1 or T2 weighted MRI scans supplies rich information sources for brain cancer diagnoses. These images form large-scale, high-dimensional data sets. Because significant correlations exist among these images, we assume low-dimensional geometry data structures (manifolds) are embedded in the high-dimensional space. Those manifolds might be hidden from radiologists because it is challenging for human experts to interpret high-dimensional data. Identification of the manifold is a critical step for successfully analyzing multimodal MR images. We have developed various manifold learning algorithms (Tran et al. 2011; Tran et al. 2013) for medical image analysis. This paper presents a comparative study of an incremental manifold learning scheme (Tran et al. 2013) versus the deep learning model (Hinton et al. 2006) in the application of brain tumor progression prediction. The incremental manifold learning is a variant of manifold learning algorithm to handle large-scale datasets in which a representative subset of original data is sampled first to construct a manifold skeleton and remaining data points are then inserted into the skeleton by following their local geometry. The incremental manifold learning algorithm aims at mitigating the computational burden associated with traditional manifold learning methods for large-scale datasets. Deep learning is a recently developed multilayer perceptron model that has achieved state-of-the-art performance in many applications. A recent technique named "Dropout" can further boost the deep model by preventing weight coadaptation to avoid over-fitting (Hinton et al. 2012). We applied the two models on multiple MRI scans from four brain tumor patients to predict tumor progression and compared the performances of the two models in terms of average prediction accuracy, sensitivity, specificity and precision. 
The quantitative performance metrics were calculated as averages over the four patients. Experimental results show that both the manifold learning and deep neural network models produced better results compared to using raw data and principal component analysis (PCA), and the deep learning model is a better method than manifold learning on this data set. The averaged sensitivity and specificity of deep learning are comparable with those of the manifold learning approach, while its precision is considerably higher. This means that the abnormal points predicted by deep learning are more likely to correspond to the actual progression region.
Hilton, David J
2012-12-31
We develop a new characteristic matrix-based method to analyze cyclotron resonance experiments in high mobility two-dimensional electron gas samples where direct interference between primary and satellite reflections has previously limited the frequency resolution. This model is used to simulate experimental data taken using terahertz time-domain spectroscopy that show multiple pulses from the substrate with a separation of 15 ps that directly interfere in the time-domain. We determine a cyclotron dephasing lifetime of 15.1 ± 0.5 ps at 1.5 K and 5.0 ± 0.5 ps at 75 K.
Garashchuk, Sophya; Rassolov, Vitaly A
2008-07-14
Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.
Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred
2013-01-01
Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
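The core Granger idea, that a series x causes y if x's past improves the prediction of y beyond y's own past, can be sketched as a bivariate F-test via ordinary least squares. This is a plain textbook illustration with simulated data; the paper's factor-model machinery for high-dimensional, co-moving series is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated pair of series: x drives y with one lag, y does not drive x
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.normal()

def rss(target, regressors):
    """Residual sum of squares of an OLS fit with an intercept."""
    Z = np.column_stack([np.ones(len(target))] + regressors)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    resid = target - Z @ beta
    return float(resid @ resid)

def granger_F(cause, effect, lag=1):
    """F-statistic for 'cause Granger-causes effect' with a single lag."""
    e, e_past, c_past = effect[lag:], effect[:-lag], cause[:-lag]
    rss_restricted = rss(e, [e_past])       # effect's own past only
    rss_full = rss(e, [e_past, c_past])     # plus the candidate cause
    dof = len(e) - 3                        # intercept + two slopes
    return (rss_restricted - rss_full) / (rss_full / dof)

F_xy = granger_F(x, y)   # should be very large: x genuinely helps predict y
F_yx = granger_F(y, x)   # should be small: no causation in this direction
```

In high dimensions with strong co-movement, the full regressions above become ill-conditioned, which is exactly the shortcoming the factor-model approach in the abstract is designed to address.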
A hierarchy for modeling high speed propulsion systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Deabreu, Alex
1991-01-01
General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery such as inlets, ramjets, and scramjets. The discussion is separated into four sections: (1) computational fluid dynamics model for the entire nonlinear system or high order nonlinear models; (2) high order linearized model derived from fundamental physics; (3) low order linear models obtained from other high order models; and (4) low order nonlinear models. Included are special considerations on any relevant control system designs. The methods discussed are for the quasi-one dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, moving normal shocks, hammershocks, subsonic combustion via heat addition, temperature dependent gases, detonation, and thermal choking.
Decorrelation of the true and estimated classifier errors in high-dimensional settings.
Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R
2007-01-01
The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. 
We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
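The variance decomposition underlying this argument, Var(est − true) = Var(est) + Var(true) − 2ρ·σ_est·σ_true, can be checked numerically. The nearest-centroid classifier, Gaussian class model, and resubstitution estimator below are illustrative stand-ins, not the paper's experimental setup:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

d, n, reps = 10, 20, 300
mu0 = np.zeros(d)
mu1 = np.full(d, 1.0 / np.sqrt(d))   # class means one unit apart overall

true_err, est_err = [], []
for _ in range(reps):
    X0 = rng.normal(mu0, 1.0, (n, d))
    X1 = rng.normal(mu1, 1.0, (n, d))
    m0, m1 = X0.mean(0), X1.mean(0)
    w, c = m1 - m0, 0.5 * (m0 + m1)
    # Exact true error of the designed nearest-centroid rule on Gaussian classes
    e0 = 1.0 - phi(float(w @ (c - mu0)) / np.linalg.norm(w))
    e1 = phi(float(w @ (c - mu1)) / np.linalg.norm(w))
    true_err.append(0.5 * (e0 + e1))
    # Resubstitution estimate on the same training data
    X = np.vstack([X0, X1])
    labels = np.r_[np.zeros(n), np.ones(n)]
    pred = (X @ w > c @ w).astype(float)
    est_err.append(float(np.mean(pred != labels)))

t, e = np.array(true_err), np.array(est_err)
rho = np.corrcoef(t, e)[0, 1]
# Variance of the deviation decomposes through the correlation term
lhs = (e - t).var()
rhs = e.var() + t.var() - 2.0 * rho * e.std() * t.std()
```

The decomposition makes the paper's point concrete: for fixed marginal variances, a smaller ρ directly inflates the variance of the deviation, i.e. degrades error-estimation precision.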
Narin, B; Ozyörük, Y; Ulas, A
2014-05-30
This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid explosive ingredients. The two-dimensional model is constructed as a fully two-phase model, based on a highly coupled system of partial differential equations involving basic flow conservation equations and constitutive relations borrowed from one-dimensional studies in the open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment central differencing and prevent excessive dispersion. The source terms of the equations describing particle-gas interactions in terms of momentum and energy transfer make the equation system quite stiff, and hence its explicit integration difficult. To ease these difficulties, a time-split approach is used, allowing larger time steps. In the paper, the physical model for the source terms of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects, including pore collapse, sublimation, and pyrolysis, are not taken into account for ignition and growth, and a basic temperature switch is applied in the calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code, and the results are in good agreement with those of commercial software.
Three-Dimensional High Fidelity Progressive Failure Damage Modeling of NCF Composites
NASA Technical Reports Server (NTRS)
Aitharaju, Venkat; Aashat, Satvir; Kia, Hamid G.; Satyanarayana, Arunkumar; Bogert, Philip B.
2017-01-01
Performance prediction of off-axis laminates is of significant interest in designing composite structures for energy absorption. Phenomenological models available in most of the commercial programs, where the fiber and resin properties are smeared, are very efficient for large scale structural analysis, but lack the ability to model the complex nonlinear behavior of the resin and fail to capture the complex load transfer mechanisms between the fiber and the resin matrix. On the other hand, high fidelity mesoscale models, where the fiber tows and matrix regions are explicitly modeled, have the ability to account for the complex behavior in each of the constituents of the composite. However, creating a finite element model of a larger scale composite component could be very time consuming and computationally very expensive. In the present study, a three-dimensional mesoscale model of non-crimp composite laminates was developed for various laminate schemes. The resin material was modeled as an elastic-plastic material with nonlinear hardening. The fiber tows were modeled with an orthotropic material model with brittle failure. In parallel, new stress based failure criteria combined with several damage evolution laws for matrix stresses were proposed for a phenomenological model. The results from both the mesoscale and phenomenological models were compared with the experiments for a variety of off-axis laminates.
NASA Astrophysics Data System (ADS)
Wang, L.; Jiang, T. L.; Dai, H. L.; Ni, Q.
2018-05-01
The present study develops a new three-dimensional nonlinear model for investigating vortex-induced vibrations (VIV) of flexible pipes conveying internal fluid flow. The unsteady hydrodynamic forces associated with the wake dynamics are modeled by two distributed van der Pol wake oscillators. In particular, the nonlinear partial differential equations of motion of the pipe and the wake are derived, taking into account the coupling between the structure and the fluid. The nonlinear equations of motion for the coupled system are then discretized by means of the Galerkin technique, resulting in a high-dimensional reduced-order model of the system. It is shown that the natural frequencies for in-plane and out-of-plane motions of the pipe may be different at high internal flow velocities beyond the threshold of buckling instability. The orientation angle of the postbuckling configuration is time-varying due to the disturbance of hydrodynamic forces, thus yielding sometimes unexpected results. For a buckled pipe with relatively low cross-flow velocity, interestingly, examining the nonlinear dynamics of the pipe indicates that the combined effects of the cross-flow-induced resonance of the in-plane first mode and the internal-flow-induced buckling on the in-line (IL) and cross-flow (CF) oscillation amplitudes may be significant. For higher cross-flow velocities, however, the effect of internal fluid flow on the nonlinear VIV responses of the pipe is not pronounced.
Two-way reflector based on two-dimensional sub-wavelength high-index contrast grating on SOI
NASA Astrophysics Data System (ADS)
Kaur, Harpinder; Kumar, Mukesh
2016-05-01
A two-dimensional (2D) high-index contrast grating (HCG) is proposed as a two-way reflector on silicon-on-insulator (SOI). The proposed reflector provides high reflectivity over two practically important sets of angles of incidence: normal (θ = 0°) and oblique/grazing (θ = 80°-85°/90°). An analytical model of the 2D HCG is presented using an improved Fourier modal method. The vertical incidence is useful for application in VCSELs, while oblique/grazing incidence can be utilized in high-confinement hollow waveguides (based on HCG mirrors) and Bragg reflectors. The proposed two-way reflector also exhibits a large reflection bandwidth (around the telecom wavelength), which is an advantage for broadband photonic devices.
Maruo, Shoji; Hasegawa, Takuya; Yoshimura, Naoki
2009-11-09
In high-precision two-photon microfabrication of three-dimensional (3-D) polymeric microstructures, supercritical CO2 drying was employed to reduce surface tension, which tends to cause the collapse of micro/nano structures. Use of supercritical drying allowed high-aspect-ratio microstructures, such as micropillars and cantilevers, to be fabricated. We also propose a single-anchor supporting method to eliminate non-uniform shrinkage of polymeric structures otherwise caused by attachment to the substrate. Use of this method permitted frame models such as lattices to be produced without harmful distortion. The combination of supercritical CO2 drying and the single-anchor supporting method offers reliable high-precision microfabrication of sophisticated, fragile 3-D micro/nano structures.
Robust hashing with local models for approximate similarity search.
Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang
2014-07-01
Similarity search plays an important role in many applications involving high-dimensional data. Due to the well-known curse of dimensionality, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
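For contrast with the learned, data-dependent models of RHLM, the randomized baseline it improves on (sign-of-projection LSH for cosine similarity) can be sketched in a few lines; the dimensions, bit counts, and test vectors here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

d, n_bits = 64, 256
H = rng.normal(size=(d, n_bits))   # one random hyperplane per hash bit

def hash_code(x):
    """Sign-of-projection hash: bit b is 1 iff x lies on the positive side of plane b."""
    return (x @ H > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

x = rng.normal(size=d)
near = x + 0.05 * rng.normal(size=d)   # slight perturbation of x
far = rng.normal(size=d)               # unrelated point

# For this scheme E[hamming]/n_bits equals angle(x, y)/pi, so near-duplicates
# agree on most bits while unrelated vectors disagree on roughly half
h_near = hamming(hash_code(x), hash_code(near))
h_far = hamming(hash_code(x), hash_code(far))
```

Because the hyperplanes are drawn without looking at the data, the codes ignore local structure entirely, which is exactly the limitation the abstract's learned local models are meant to overcome.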
NASA Astrophysics Data System (ADS)
Kühn, Christine; Brasse, Heinrich; Schwarz, Gerhard
2017-12-01
Magnetotelluric investigations were carried out in the late 1980s across all morphological units of the South American subduction zone with the aim of observing lithosphere structures and subduction-induced processes in northern Chile, southwestern Bolivia, and northwestern Argentina at 22°S. Earlier two-dimensional forward modeling yielded a complex picture of the lower crust and upper mantle, with strong variations between the individual morphological units as well as between forearc and backarc. The principal result was a highly conductive zone beneath the volcanic arc of the Western Cordillera starting at 25 km depth. The goal of this work is to extend the existing 2-D results using three-dimensional modeling techniques, at least for the volcanic arc and forearc region between 22°S and 23°S in northern Chile. Dimensionality analysis indicates strong 3-D effects along the volcanic arc at the transition zone to the Altiplano, in the Preandean Depression and around the Precordillera Fault System at 22°S. In general, the new 3-D models corroborate previous findings, but also enable a clearer image of lateral resistivity variations. The magmatic arc conductor emerges now as a trench-parallel, N-S elongated structure slightly shifted to the east of the volcanic front. The forearc appears highly resistive except for some conductive structures associated with younger sedimentary infill or a young magmatic record beneath the Precordillera and Preandean Depression. The most prominent conductor in the whole Central Andes, beneath the Altiplano and Puna, is also modeled here; it is, however, outside the station array and thus poorly resolved in this study.
Schätzlein, Martina Palomino; Becker, Johanna; Schulze-Sünninghausen, David; Pineda-Lucena, Antonio; Herance, José Raul; Luy, Burkhard
2018-04-01
Isotope labeling enables the use of 13C-based metabolomics techniques with strongly improved resolution for better identification of relevant metabolites and tracing of metabolic fluxes in cell and animal models, as required in fluxomics studies. However, even at high NMR-active isotope abundance, the acquisition of one-dimensional 13C and classical two-dimensional 1H,13C-HSQC experiments remains time consuming. With the aim of providing a shorter, more efficient alternative, herein we explored the ALSOFAST-HSQC experiment with its rapid acquisition scheme for the analysis of 13C-labeled metabolites in complex biological mixtures. As an initial step, the parameters of the pulse sequence were optimized to take into account the specific characteristics of the complex samples. We then applied the fast two-dimensional experiment to study the effect of different kinds of antioxidant gold nanoparticles on a HeLa cancer cell model grown on 13C glucose-enriched medium. As a result, 1H,13C 2D correlations could be obtained in a few seconds to a few minutes, allowing simple and reliable identification of various 13C-enriched metabolites and the determination of specific variations between the different sample groups. Thus, it was possible to monitor glucose metabolism in the cell model and study the antioxidant effect of the coated gold nanoparticles in detail. Finally, with an experiment time of only half an hour, highly resolved 1H,13C-HSQC spectra using the ALSOFAST-HSQC pulse sequence were acquired, revealing the isotope-position patterns of the corresponding 13C nuclei from carbon multiplets. Graphical abstract: Fast NMR applied to metabolomics and fluxomics studies with gold nanoparticles.
NASA Astrophysics Data System (ADS)
Emmons, D. J.; Weeks, D. E.; Eshel, B.; Perram, G. P.
2018-01-01
Simulations of an α-mode radio frequency dielectric barrier discharge are performed for varying mixtures of argon and helium at pressures ranging from 200 to 500 Torr using both zero- and one-dimensional models. Metastable densities are analyzed as a function of argon-helium mixture and pressure to determine the conditions that maximize metastable density for use in an optically pumped rare gas laser. Argon fractions corresponding to the peak metastable densities are found to be pressure dependent, shifting from approximately 15% Ar in He at 200 Torr to 10% at 500 Torr. A decrease in metastable density is observed as pressure is increased due to a diminution in the reduced electric field and a quadratic increase in metastable loss rates through Ar2* formation. A zero-dimensional effective direct current model of the dielectric barrier discharge is implemented, showing agreement with the trends predicted by the one-dimensional fluid model in the bulk plasma.
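The zero-dimensional picture described above balances metastable production against a linear (diffusion-like) loss and a quadratic loss through Ar2* formation. A toy steady-state solve of that balance, with purely illustrative dimensionless rates (not taken from the paper), shows why strengthening the quadratic channel pulls the steady-state density down:

```python
import math

def steady_state_density(r_prod, k_lin, k_quad):
    """Solve r_prod = k_lin*n + k_quad*n**2 for the metastable density n:
    a zero-dimensional balance of production against linear and quadratic
    losses. Units and rate coefficients are illustrative only."""
    return (-k_lin + math.sqrt(k_lin**2 + 4.0 * k_quad * r_prod)) / (2.0 * k_quad)

# as the quadratic (dimer-formation) loss coefficient grows, e.g. with
# increasing pressure, the steady-state metastable density drops
n_low = steady_state_density(r_prod=100.0, k_lin=1.0, k_quad=0.01)
n_high = steady_state_density(r_prod=100.0, k_lin=1.0, k_quad=0.025)
print(n_low, n_high)
```

The closed form is just the positive root of the quadratic balance equation; a real model would add the reduced-field dependence of the production term.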
NASA Astrophysics Data System (ADS)
Makhijani, Vinod B.; Przekwas, Andrzej J.
2002-10-01
This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for design of biomicrofluidics devices and integrated systems. The developed tool would provide high fidelity 3-D multiphysics modeling capability, 1-D fluidic circuits modeling, and SPICE interface for system level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Bin; Department of Chemical Physics, University of Science and Technology of China, Hefei 230026; Guo, Hua, E-mail: hguo@unm.edu
Recently, we reported the first highly accurate nine-dimensional global potential energy surface (PES) for water interacting with a rigid Ni(111) surface, built on a large number of density functional theory points [B. Jiang and H. Guo, Phys. Rev. Lett. 114, 166101 (2015)]. Here, we investigate site-specific reaction probabilities on this PES using a quasi-seven-dimensional quantum dynamical model. It is shown that the site-specific reactivity is largely controlled by the topography of the PES instead of the barrier height alone, underscoring the importance of multidimensional dynamics. In addition, the full-dimensional dissociation probability is estimated by averaging fixed-site reaction probabilities with appropriate weights. To validate this model and gain insights into the dynamics, additional quasi-classical trajectory calculations in both full and reduced dimensions have also been performed and important dynamical factors such as the steering effect are discussed.
Probing density and spin correlations in two-dimensional Hubbard model with ultracold fermions
NASA Astrophysics Data System (ADS)
Chan, Chun Fai; Drewes, Jan Henning; Gall, Marcell; Wurz, Nicola; Cocchi, Eugenio; Miller, Luke; Pertot, Daniel; Brennecke, Ferdinand; Koehl, Michael
2017-04-01
Quantum gases of interacting fermionic atoms in optical lattices are a promising candidate for studying strongly correlated quantum phases of the Hubbard model such as the Mott insulator, spin-ordered phases, or, in particular, d-wave superconductivity. We experimentally realise the two-dimensional Hubbard model by loading a quantum degenerate Fermi gas of 40K atoms into a three-dimensional optical lattice geometry. High-resolution absorption imaging in combination with radiofrequency spectroscopy is applied to spatially resolve the atomic distribution in a single 2D layer. We perform local measurements of spatial correlations in both the density and spin sectors as a function of filling, temperature and interaction strength. In the density sector, we compare the local density fluctuations with the global thermodynamic quantities, and in the spin sector, we observe the onset of non-local spin correlations, signalling the emergence of the anti-ferromagnetic phase. We also report our recent experimental endeavours to reach lower temperatures in the spin sector.
Physical Model of the Genotype-to-Phenotype Map of Proteins
NASA Astrophysics Data System (ADS)
Tlusty, Tsvi; Libchaber, Albert; Eckmann, Jean-Pierre
2017-04-01
How DNA is mapped to functional proteins is a basic question of living matter. We introduce and study a physical model of protein evolution which suggests a mechanical basis for this map. Many proteins rely on large-scale motion to function. We therefore treat protein as learning amorphous matter that evolves towards such a mechanical function: Genes are binary sequences that encode the connectivity of the amino acid network that makes a protein. The gene is evolved until the network forms a shear band across the protein, which allows for long-range, soft modes required for protein function. The evolution reduces the high-dimensional sequence space to a low-dimensional space of mechanical modes, in accord with the observed dimensional reduction between genotype and phenotype of proteins. Spectral analysis of the space of 10^6 solutions shows a strong correspondence between localization around the shear band of both mechanical modes and the sequence structure. Specifically, our model shows how mutations are correlated among amino acids whose interactions determine the functional mode.
NASA Astrophysics Data System (ADS)
Cao, C.; Lee, X.; Xu, J.
2017-12-01
Unmanned Aerial Vehicles (UAVs) or drones have been widely used in environmental, ecological and engineering applications in recent years. These applications require assessment of positional and dimensional accuracy. In this study, positional accuracy refers to the accuracy of the latitudinal and longitudinal coordinates of locations on the mosaicked image in reference to the coordinates of the same locations measured by a Global Positioning System (GPS) in a ground survey, and dimensional accuracy refers to the length and height of a ground target. Here, we investigate the effects of the number of Ground Control Points (GCPs) and the accuracy of the GPS used to measure the GCPs on the positional and dimensional accuracy of a drone 3D model. Results show that using the on-board GPS or a hand-held GPS produces a positional accuracy on the order of 2-9 meters. In comparison, using a high-accuracy (30 cm) differential GPS improves the positional accuracy of the drone model by about 40%. Increasing the number of GCPs can compensate for the uncertainty introduced by low-accuracy GPS equipment. In terms of the dimensional accuracy of the drone model, even with a low-resolution GPS onboard the vehicle, the mean absolute errors are only 0.04 m for height and 0.10 m for length, which are well suited for some applications in precision agriculture and in land survey studies.
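The positional-accuracy comparison above amounts to computing error statistics of model coordinates against surveyed GPS coordinates. A minimal sketch, with made-up coordinates in a local metric frame (not the study's data), of the mean absolute error and RMSE of horizontal position:

```python
import math

def positional_errors(model_pts, survey_pts):
    """Horizontal positional error of model/mosaic coordinates against
    ground-surveyed reference coordinates (both as (x, y) in metres in a
    local frame). Returns (mean absolute error, RMSE)."""
    dists = [math.hypot(xm - xs, ym - ys)
             for (xm, ym), (xs, ys) in zip(model_pts, survey_pts)]
    mae = sum(dists) / len(dists)
    rmse = math.sqrt(sum(d * d for d in dists) / len(dists))
    return mae, rmse

# illustrative check-point coordinates (metres)
model  = [(0.0, 0.0), (10.0, 0.3), (20.1, -0.2)]
survey = [(0.1, 0.1), (10.0, 0.0), (20.0,  0.0)]
mae, rmse = positional_errors(model, survey)
print(mae, rmse)
```

RMSE weights large deviations more heavily than the mean absolute error, so RMSE ≥ MAE always holds; dimensional accuracy (length, height) is assessed the same way on scalar measurements.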
THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.
1984-07-01
The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas cooled reactor core. The supplemental treatment is of one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. The capability is available in the local BOLD VENTURE computation system for reactor core analysis with capability to account for the effect of temperature feedback by nuclear cross section correlation. Some enhancements have also been added to the original code to add pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
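In the Gaussian case the score matching loss in the precision matrix K is the quadratic ½ tr(K Σ̂ K) − tr(K), whose unpenalized minimizer is Σ̂⁻¹, so ℓ1-regularized estimation can be attacked with a soft-thresholded (proximal) gradient iteration. The following is a toy sketch under that formulation, not the authors' estimator or solver; the covariance and penalty level are illustrative:

```python
import numpy as np

def l1_score_matching(S, lam, n_iter=2000):
    """Minimize 0.5*tr(K S K) - tr(K) + lam*||offdiag(K)||_1 over symmetric K
    by proximal gradient (ISTA). Gaussian score matching sketch only; the
    paper's estimator and algorithmic details may differ."""
    p = S.shape[0]
    step = 1.0 / max(np.linalg.eigvalsh(S))      # 1/Lipschitz of the smooth part
    K = np.eye(p)
    for _ in range(n_iter):
        grad = 0.5 * (S @ K + K @ S) - np.eye(p)  # gradient of the quadratic loss
        K = K - step * grad
        T = np.sign(K) * np.maximum(np.abs(K) - step * lam, 0.0)  # soft-threshold
        np.fill_diagonal(T, np.diag(K))           # penalize off-diagonals only
        K = 0.5 * (T + T.T)                       # keep the iterate symmetric
    return K

S = np.diag([2.0, 0.5])       # toy sample covariance of two independent variables
K = l1_score_matching(S, lam=0.05)
print(K)
```

Because the loss is quadratic, the ℓ1 solution path is piecewise linear in lam, which is the computational advantage the abstract highlights.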
Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie
2017-01-01
Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.
Discrete models for the numerical analysis of time-dependent multidimensional gas dynamics
NASA Technical Reports Server (NTRS)
Roe, P. L.
1984-01-01
A possible technique is explored for extending to multidimensional flows some of the upwind-differencing methods that are highly successful in the one-dimensional case. Emphasis is on the two-dimensional case, and the flow domain is assumed to be divided into polygonal computational elements. Inside each element, the flow is represented by a local superposition of elementary solutions consisting of plane waves not necessarily aligned with the element boundaries.
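As one-dimensional context for the scheme being generalized, a first-order upwind step for the scalar advection equation u_t + c u_x = 0 with c > 0 takes its difference from the upstream side and is stable for Courant numbers ν = c Δt/Δx ≤ 1. A minimal periodic-grid sketch (grid size and pulse are illustrative):

```python
def upwind_step(u, nu):
    """One first-order upwind step for u_t + c u_x = 0 with c > 0 on a
    periodic grid; nu = c*dt/dx is the Courant number (stable for nu <= 1).
    One-dimensional sketch of the class of schemes discussed above."""
    n = len(u)
    return [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]

nx = 40
u = [1.0 if 10 <= i < 20 else 0.0 for i in range(nx)]  # square pulse
v = u[:]
for _ in range(nx):      # at nu = 1 the scheme translates the pulse exactly
    v = upwind_step(v, 1.0)
print(v == u)            # full periodic revolution returns the initial pulse
```

For ν < 1 the scheme is diffusive but conservative (the telescoping flux sum vanishes on a periodic grid); the multidimensional difficulty the paper addresses is that on polygonal elements the waves are generally not aligned with any single grid direction.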
Quasi-one-dimensional modes in strip plates: Theory and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arreola, A.; Báez, G.; Méndez-Sánchez, R. A.
2014-01-14
Using acoustic resonance spectroscopy we measure the elastic resonances of a strip rectangular plate with all its ends free. The experimental setup consists of a vector network analyzer, a high-fidelity audio amplifier, and electromagnetic-acoustic transducers. The one-dimensional modes are identified from the measured spectra by comparing them with theoretical predictions of compressional and bending modes of the plate modeled as a beam. The agreement between theory and experiment is excellent.
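For a beam-like strip, the theoretical mode frequencies used for such an identification follow standard formulas: free-free Euler-Bernoulli bending modes satisfy cos(βL)cosh(βL) = 1, with βL ≈ 4.730, 7.853, 10.996, giving f_n = (β_nL)²/(2πL²)·√(EI/ρA), while free-free compressional rod modes are f_n = n√(E/ρ)/(2L). A sketch with illustrative material and geometry values (thin-plate corrections such as the plate modulus E/(1−ν²) are ignored):

```python
import math

# roots of cos(bL)cosh(bL) = 1 for a free-free Euler-Bernoulli beam
BETA_L = [4.73004, 7.85320, 10.99561]

def bending_frequencies(E, rho, L, thickness, width):
    """Free-free bending frequencies f_n = (beta_n L)^2/(2 pi L^2) * sqrt(EI/rhoA)
    of a rectangular-section beam (strip-plate sketch; SI units)."""
    A = thickness * width
    I = width * thickness**3 / 12.0   # second moment of area about the bending axis
    c = math.sqrt(E * I / (rho * A))
    return [(bl**2) / (2.0 * math.pi * L**2) * c for bl in BETA_L]

def compressional_frequencies(E, rho, L, n_modes=3):
    """Free-free rod compressional frequencies f_n = n*sqrt(E/rho)/(2L)."""
    c = math.sqrt(E / rho)            # longitudinal wave speed
    return [n * c / (2.0 * L) for n in range(1, n_modes + 1)]

# aluminium strip with illustrative dimensions
fb = bending_frequencies(E=69e9, rho=2700.0, L=0.5, thickness=0.003, width=0.05)
fc = compressional_frequencies(E=69e9, rho=2700.0, L=0.5)
print(fb, fc)
```

The characteristic ratio f_2/f_1 = (7.853/4.730)² ≈ 2.76 of the bending series is what lets measured peaks be assigned to bending rather than compressional branches.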
Leak detection utilizing analog binaural (VLSI) techniques
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor)
1995-01-01
A detection method and system utilizing silicon models of the traveling wave structure of the human cochlea to spatially and temporally locate a specific sound source in the presence of high noise pandemonium. The detection system combines two-dimensional stereausis representations, which are output by at least three VLSI binaural hearing chips, to generate a three-dimensional stereausis representation including both binaural and spectral information which is then used to locate the sound source.
Quantum Hall effect breakdown in two-dimensional hole gases
NASA Astrophysics Data System (ADS)
Eaves, L.; Stoddart, S. T.; Wirtz, R.; Neumann, A. C.; Gallagher, B. L.; Main, P. C.; Henini, M.
2000-02-01
The breakdown of dissipationless current flow in the quantum Hall effect is studied for a two-dimensional hole gas at filling factors i=1 and 2. At high currents, the magnetoresistance curves at breakdown exhibit a series of steps accompanied by hysteresis and intermittent noise. These are compared with similar data for electron systems and are discussed in terms of a hydrodynamic model involving inter-Landau level scattering at the sample edge.
Baghaie, Ahmadreza; Pahlavan Tafti, Ahmad; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-01-01
The Scanning Electron Microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has gained extensive attention since its emergence. However, the acquired micrographs still remain two-dimensional (2D). In the current work, a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.
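Once views are rectified and densely matched, the depth-estimation step reduces, in the idealized parallel-view case, to triangulation Z = f·B/d. This toy sketch shows only that relation; the paper's SEM geometry, calibration and nonlocal flow are far more involved, and all numbers here are illustrative:

```python
def depth_from_disparity(disparity_px, focal_px, baseline):
    """Idealized rectified-stereo triangulation: depth Z = f * B / d, with
    d the disparity in pixels, f the focal length in pixels and B the
    translation between views. Toy sketch; real SEM multi-view
    reconstruction requires full calibration of the imaging geometry."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline / disparity_px

# larger disparity corresponds to a surface point closer to the detector
near = depth_from_disparity(disparity_px=20.0, focal_px=1000.0, baseline=0.002)
far = depth_from_disparity(disparity_px=5.0, focal_px=1000.0, baseline=0.002)
print(near, far)
```

Applying this per matched pixel turns the dense disparity field into the depth map from which the 3D surface is meshed.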
ERIC Educational Resources Information Center
Li, Fangzheng; Liu, Chunying; Song, Xuexiong; Huan, Yanjun; Gao, Shansong; Jiang, Zhongling
2018-01-01
Access to adequate anatomical specimens can be an important aspect in learning the anatomy of domestic animals. In this study, the authors utilized a structured light scanner and fused deposition modeling (FDM) printer to produce highly accurate animal skeletal models. First, various components of the bovine skeleton, including the femur, the…
Analysis of the Three-Dimensional Vector FAÇADE Model Created from Photogrammetric Data
NASA Astrophysics Data System (ADS)
Kamnev, I. S.; Seredovich, V. A.
2017-12-01
The results of the accuracy assessment analysis for the creation of a three-dimensional vector model of a building façade are described. In the framework of the analysis, an analytical comparison of three-dimensional vector façade models created from photogrammetric and terrestrial laser scanning data was carried out. The three-dimensional model built from TLS point clouds was taken as the reference. In the course of the experiment, the three-dimensional model to be analyzed was superimposed on the reference one, the coordinates were measured, and deviations between the same model points were determined. The accuracy of the three-dimensional model obtained from non-metric digital camera images was estimated, and façade surface areas with the maximum deviations were identified.
REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY
Fan, Jianqing; Jiang, Jiancheng
2011-01-01
High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant relaxation of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171
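The SCAD penalty named above is the folded-concave penalty of Fan and Li: linear like the LASSO near zero, then tapering off so that large coefficients are not over-shrunk. A direct sketch of the penalty function itself (with the customary a = 3.7), not of the paper's coordinate-wise Cox solver:

```python
def scad_penalty(t, lam, a=3.7):
    """SCAD folded-concave penalty evaluated at coefficient magnitude |t|:
    p(t) = lam*|t|                                for |t| <= lam,
         = (2*a*lam*|t| - t^2 - lam^2)/(2(a-1))   for lam < |t| <= a*lam,
         = (a+1)*lam^2/2                          for |t| > a*lam.
    Sketch of the penalty only; a = 3.7 is the customary choice."""
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2.0 * a * lam * t - t * t - lam * lam) / (2.0 * (a - 1.0))
    return (a + 1.0) * lam * lam / 2.0

lam = 1.0
for t in (0.5, 1.0, 2.0, 5.0):
    print(t, scad_penalty(t, lam))
```

Because the penalty is constant beyond a·lam, its derivative vanishes there, which is what yields the near-unbiasedness of large estimated effects and the oracle behaviour the abstract analyzes.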
REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.
Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng
2011-01-01
Concentration data and dimensionality in groundwater models: evaluation using inverse modelling
Barlebo, H.C.; Hill, M.C.; Rosbjerg, D.; Jensen, K.H.
1998-01-01
A three-dimensional inverse groundwater flow and transport model that fits hydraulic-head and concentration data simultaneously using nonlinear regression is presented and applied to a layered sand and silt groundwater system beneath the Grindsted Landfill in Denmark. The aquifer is composed of rather homogeneous hydrogeologic layers. Two issues common to groundwater flow and transport modelling are investigated: 1) the accuracy of simulated concentrations in the case of calibration with head data alone; and 2) the advantages and disadvantages of using a two-dimensional cross-sectional model instead of a three-dimensional model to simulate contaminant transport when the source is at the land surface. Results show that using only hydraulic heads in the nonlinear regression produces a simulated plume that is profoundly different from what is obtained in a calibration using both hydraulic-head and concentration data. The present study provides a well-documented example of the differences that can occur. Representing the system as a two-dimensional cross-section obviously omits some of the system dynamics. It was, however, possible to obtain a simulated plume cross-section that matched the actual plume cross-section well. The two-dimensional model execution times were about a seventh of those for the three-dimensional model, but some difficulties were encountered in representing the spatially variable source concentrations, and less precise simulated concentrations were calculated by the two-dimensional model compared to the three-dimensional model. In summary, the present study indicates that three-dimensional modelling using both hydraulic heads and concentrations in the calibration should be preferred in the considered type of transport studies.
Particle-in-a-box model of one-dimensional excitons in conjugated polymers
NASA Astrophysics Data System (ADS)
Pedersen, Thomas G.; Johansen, Per M.; Pedersen, Henrik C.
2000-04-01
A simple two-particle model of excitons in conjugated polymers is proposed as an alternative to the usual, highly computationally demanding quantum chemical methods. In the two-particle model, the exciton is described as an electron-hole pair interacting via Coulomb forces and confined to the polymer backbone by rigid walls. Furthermore, by integrating out the transverse part, the two-particle equation is reduced to one-dimensional form. It is demonstrated how essentially exact solutions are obtained in the cases of short and long conjugation length, respectively. From a linear combination of these cases, an approximate solution for the general case is obtained. As an application of the model, the influence of a static electric field on the electron-hole overlap integral and exciton energy is considered.
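The rigid-wall confinement underlying this model is the textbook particle-in-a-box spectrum E_n = n²π²ħ²/(2mL²). A finite-difference sketch of the single-particle version (ħ = m = 1, L = 1) reproduces that spectrum numerically; the paper's two-particle model adds the electron-hole Coulomb term on top of this confinement, which is not included here:

```python
import numpy as np

# Finite-difference sketch of a 1D particle in a rigid-wall box
# (hbar = m = 1, box length L = 1): H = -0.5 d^2/dx^2 with psi = 0 at the walls.
# Single-particle illustration only; the paper's electron-hole model adds
# a Coulomb interaction between the two particles.
N = 400                                # interior grid points
dx = 1.0 / (N + 1)
main = np.full(N, 1.0 / dx**2)         # diagonal of -0.5 * (second difference)
off = np.full(N - 1, -0.5 / dx**2)     # off-diagonals
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
energies = np.linalg.eigvalsh(H)       # confinement spectrum

exact = [0.5 * (n * np.pi) ** 2 for n in (1, 2, 3)]   # E_n = n^2 pi^2 / 2
print(energies[:3], exact)
```

The same discretization extends to the reduced one-dimensional two-particle equation by working on the relative coordinate and adding the (regularized) Coulomb potential to the diagonal.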