Sample records for high dimensional variable

  1. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. What limits of the dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
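
    A minimal sketch of the penalized-likelihood idea the review surveys: an L1 penalty (scikit-learn's Lasso stands in here; the article itself emphasizes non-concave penalties such as SCAD) selects a handful of variables from p >> n data at far lower cost than best subset search.

```python
# Editor's illustration, not code from the article: L1-penalized least squares
# selects variables and estimates their effects in one shot.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 500                            # sample size far below dimension
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]     # only 5 truly active variables
y = X @ beta + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.2).fit(X, y)         # alpha sets the penalty level
print("selected variables:", np.flatnonzero(model.coef_))
```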

  2. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    DTIC Science & Technology

    2014-04-01

    Surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been... a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of... for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital

  3. Dimensional control of die castings

    NASA Astrophysics Data System (ADS)

    Karve, Aniruddha Ajit

    The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors--the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error, i.e., dimensional variability and die allowance, were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. The results of this study will contribute to enhancement of dimensional quality and lead time compression in the die casting industry, thus making it competitive with other net shape manufacturing processes.

  4. Low-Dimensional Statistics of Anatomical Variability via Compact Representation of Image Deformations.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2016-10-01

    Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).

  5. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
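
    The two-stage idea can be sketched in a few lines. The code below is an editor's illustration with made-up dimensions and plain Lasso penalties in both stages, not the authors' estimator or tuning procedure:

```python
# Hedged sketch of a two-stage L1-regularized instrumental variables fit.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p_z, p_x = 200, 300, 100
Z = rng.standard_normal((n, p_z))               # candidate instruments
Pi = np.zeros((p_z, p_x))
Pi[:3, :2] = 1.0                                # sparse first-stage effects
X = Z @ Pi + rng.standard_normal((n, p_x))      # endogenous covariates
beta = np.zeros(p_x)
beta[:2] = [2.0, -1.5]
y = X @ beta + rng.standard_normal(n)

# Stage 1: regress each covariate on the instruments with an L1 penalty.
X_hat = np.column_stack(
    [Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p_x)]
)
# Stage 2: L1-penalized regression of the response on the fitted covariates.
stage2 = Lasso(alpha=0.1).fit(X_hat, y)
print("selected covariates:", np.flatnonzero(stage2.coef_))
```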

  6. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  7. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of the high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of the first-order sensitivity indices of Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
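
    For readers unfamiliar with variance-based indices, the sketch below estimates standard first-order and total-order Sobol' indices on the Ishigami test function; it illustrates the general approach rather than the paper's specific heteroscedasticity-based index. A gap between the two indices for a variable signals interaction.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Standard three-variable benchmark with known interactions."""
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                          # vary only variable i
    fAB = ishigami(AB)
    S = np.mean(fB * (fAB - fA)) / var          # first-order index (Saltelli)
    ST = 0.5 * np.mean((fA - fAB) ** 2) / var   # total-order index (Jansen)
    print(f"x{i + 1}: S = {S:.2f}, ST = {ST:.2f}, interaction = {ST - S:.2f}")
```

    On this benchmark, x3 shows a near-zero first-order index but a clearly nonzero total index: it acts only through interaction, which is exactly the situation an interaction index is designed to flag.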

  9. Analysis of chaos in high-dimensional wind power system.

    PubMed

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of chaos in a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. The chaotic dynamics of the system are analyzed when it is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, and the parameter ranges over which chaos occurs are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state-variable sequence diagrams. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
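
    The Lyapunov-exponent test for chaos used here is easy to illustrate on a toy system; the sketch below uses the one-dimensional logistic map rather than the 11-dimensional wind power model, purely as an editor's example:

```python
# A positive largest Lyapunov exponent indicates exponential divergence of
# nearby trajectories, i.e., chaos.
import numpy as np

def largest_lyapunov(r, x0=0.4, n=10_000):
    x, le = x0, 0.0
    for _ in range(n):
        le += np.log(abs(r * (1 - 2 * x)))   # log |f'(x)| along the orbit
        x = r * x * (1 - x)                  # iterate the logistic map
    return le / n

for r in (3.2, 3.9):                         # periodic vs. chaotic regime
    print(f"r={r}: lambda = {largest_lyapunov(r):+.3f}")
```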

  10. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
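
    The alpha-cut method mentioned above reduces fuzzy uncertainty propagation to interval analysis at each membership level. A minimal sketch, with an assumed toy response function standing in for a finite element model:

```python
import numpy as np

def tri_alpha_cut(a, m, b, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

def response(k):                 # toy response, e.g. natural frequency ~ sqrt(k)
    return np.sqrt(k)

for alpha in (0.0, 0.5, 1.0):
    lo, hi = tri_alpha_cut(900.0, 1000.0, 1100.0, alpha)
    ks = np.linspace(lo, hi, 101)         # optimize the response over the cut
    out = response(ks)
    print(f"alpha={alpha}: output interval [{out.min():.2f}, {out.max():.2f}]")
```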

  11. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with the commercial finite element software ADINA. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
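
    A first-order cut-HDMR approximation is simple to sketch; the toy function and cut point below are an editor's illustration of why the evaluation count grows linearly rather than exponentially with dimension:

```python
import numpy as np

def f(x):                            # test function with weak interactions
    return np.sum(x ** 2) + 0.1 * x[0] * x[1]

d = 6
x0 = np.full(d, 0.5)                 # cut point (reference input)
f0 = f(x0)

def hdmr1(x):
    """First-order cut-HDMR: f0 plus one-variable corrections, d+1 evals."""
    total = f0
    for i in range(d):
        xi = x0.copy()
        xi[i] = x[i]
        total += f(xi) - f0          # component function f_i(x_i)
    return total

x = np.array([0.9, 0.1, 0.7, 0.3, 0.5, 0.8])
print("true:", f(x), " first-order HDMR:", hdmr1(x))
```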

  12. Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition

    PubMed Central

    Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao

    2017-01-01

    Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Then, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental result indicates that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
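
    A much-simplified sketch of this pipeline follows (synthetic signals stand in for bearing data, and the SURF step is omitted, using the thresholded recurrence matrix directly):

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def recurrence_plot(signal, eps=0.2):
    d = np.abs(signal[:, None] - signal[None, :])   # pairwise distances
    return (d < eps).astype(float)                  # 2-D RP image

def make_signal(fault):                             # toy vibration signals
    t = np.linspace(0, 1, 64)
    base = np.sin(2 * np.pi * 5 * t)
    return base + fault * np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(64)

X = np.array([recurrence_plot(make_signal(f)).ravel()
              for f in (0, 1) for _ in range(30)])
y = np.repeat([0, 1], 30)

# Manifold step; for brevity the embedding is learned on all samples at once.
Z = Isomap(n_neighbors=10, n_components=5).fit_transform(X)
clf = SVC().fit(Z[::2], y[::2])                     # alternate train/test split
print("test accuracy:", clf.score(Z[1::2], y[1::2]))
```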

  13. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g., allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
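
    Component-wise (stage-wise) boosting can be sketched compactly; the code below uses squared-error loss on synthetic data as an editor's stand-in for the likelihood-based Cox boosting discussed in the overview:

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 100, 1000
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(n)

coef, nu = np.zeros(p), 0.1              # nu is the boosting step size
resid = y.copy()
for _ in range(200):                     # each step updates one covariate only
    scores = X.T @ resid / n             # marginal fit of each covariate
    j = np.argmax(np.abs(scores))
    coef[j] += nu * scores[j] / X[:, j].var()
    resid = y - X @ coef
print("coefficients above threshold:", np.flatnonzero(np.abs(coef) > 0.05))
```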

  14. Nonlinear intrinsic variables and state reconstruction in multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dsilva, Carmeline J., E-mail: cdsilva@princeton.edu; Talmon, Ronen, E-mail: ronen.talmon@yale.edu; Coifman, Ronald R., E-mail: coifman@math.yale.edu

    2013-11-14

    Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.

  15. Nonlinear intrinsic variables and state reconstruction in multiscale simulations

    NASA Astrophysics Data System (ADS)

    Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.

    2013-11-01

    Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.

  16. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    Single-index varying-coefficient model is an important mathematical modeling method to model nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than the ones based on the least squares criterion. Finally, the method is illustrated with numerical simulations.

  17. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.
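
    The screening step can be illustrated with the simplest marginal-ranking variant (sure independence screening by absolute correlation); the paper's method additionally handles the nonlinear additive ODE structure:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 2000
X = rng.standard_normal((n, p))
y = 2 * np.sin(X[:, 0]) + 2 * X[:, 1] + 0.3 * rng.standard_normal(n)

# Rank variables by absolute marginal correlation with the response and keep
# the top n / log(n), as in sure independence screening.
Xc = X - X.mean(axis=0)
corr = np.abs(Xc.T @ (y - y.mean()) / n) / (X.std(axis=0) * y.std())
screened = np.argsort(corr)[::-1][: int(n / np.log(n))]
print("true variables 0 and 1 retained:", {0, 1} <= set(screened.tolist()))
```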

  18. Prediction of Incident Diabetes in the Jackson Heart Study Using High-Dimensional Machine Learning

    PubMed Central

    Casanova, Ramon; Saldana, Santiago; Simpson, Sean L.; Lacy, Mary E.; Subauste, Angela R.; Blackshear, Chad; Wagenknecht, Lynne; Bertoni, Alain G.

    2016-01-01

    Statistical models to predict incident diabetes are often based on limited variables. Here we pursued two main goals: 1) investigate the relative performance of a machine learning method such as Random Forests (RF) for detecting incident diabetes in a high-dimensional setting defined by a large set of observational data, and 2) uncover potential predictors of diabetes. The Jackson Heart Study collected data at baseline and in two follow-up visits from 5,301 African Americans. We excluded those with baseline diabetes and no follow-up, leaving 3,633 individuals for analyses. Over a mean 8-year follow-up, 584 participants developed diabetes. The full RF model evaluated 93 variables including demographic, anthropometric, blood biomarker, medical history, and echocardiogram data. We also used RF metrics of variable importance to rank variables according to their contribution to diabetes prediction. We implemented other models based on logistic regression and RF where features were preselected. The RF full model performance was similar (AUC = 0.82) to those more parsimonious models. The top-ranked variables according to RF included hemoglobin A1C, fasting plasma glucose, waist circumference, adiponectin, c-reactive protein, triglycerides, leptin, left ventricular mass, high-density lipoprotein cholesterol, and aldosterone. This work shows the potential of RF for incident diabetes prediction while dealing with high-dimensional data. PMID:27727289
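
    A sketch of the study's modeling strategy on synthetic data (scikit-learn's random forest and make_classification stand in for the Jackson Heart Study variables; numbers are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# ~93 mixed predictors, minority class mimicking incident diabetes (~15%).
X, y = make_classification(n_samples=1000, n_features=93, n_informative=10,
                           weights=[0.85], random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("AUC:", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())

# Rank variables by impurity-based importance, as the study ranked predictors.
rf.fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-ranked variables:", top)
```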

  19. Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.

    PubMed

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-12-13

    In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are more coarse than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.

  20. Effects of B1 inhomogeneity correction for three-dimensional variable flip angle T1 measurements in hip dGEMRIC at 3 T and 1.5 T.

    PubMed

    Siversson, Carl; Chan, Jenny; Tiderius, Carl-Johan; Mamisch, Tallal Charles; Jellus, Vladimir; Svensson, Jonas; Kim, Young-Jo

    2012-06-01

    Delayed gadolinium-enhanced MRI of cartilage is a technique for studying the development of osteoarthritis using quantitative T1 measurements. Three-dimensional variable flip angle is a promising method for performing such measurements rapidly, by using two successive spoiled gradient echo sequences with different excitation pulse flip angles. However, the three-dimensional variable flip angle method is very sensitive to inhomogeneities in the transmitted B1 field in vivo. In this study, a method for correcting for such inhomogeneities, using an additional B1 mapping spin-echo sequence, was evaluated. Phantom studies concluded that three-dimensional variable flip angle with B1 correction calculates accurate T1 values also in areas with high B1 deviation. Retrospective analysis of in vivo hip delayed gadolinium-enhanced MRI of cartilage data from 40 subjects showed the difference between three-dimensional variable flip angle with and without B1 correction to be generally two to three times higher at 3 T than at 1.5 T. In conclusion, the B1 variations should always be taken into account, both at 1.5 T and at 3 T. Copyright © 2011 Wiley-Liss, Inc.

  1. Decomposition and model selection for large contingency tables.

    PubMed

    Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter

    2010-04-01

    Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables, each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a biomedical problem in cancer research.

  2. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    PubMed

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can potentially be created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data characterized by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
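
    The concordance index used to compare the models is the fraction of usable pairs whose predicted risks are ordered consistently with their event times; a minimal sketch:

```python
import numpy as np

def concordance_index(time, event, risk):
    """C-index for right-censored data: pairs are anchored at observed events."""
    num = den = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue
        later = time > time[i]            # comparable pairs: j outlived i
        num += np.sum(risk[i] > risk[later]) + 0.5 * np.sum(risk[i] == risk[later])
        den += np.sum(later)
    return num / den

time = np.array([2.0, 5.0, 3.0, 8.0, 6.0])
event = np.array([1, 0, 1, 1, 0])          # 1 = event observed, 0 = censored
risk = np.array([0.9, 0.2, 0.7, 0.1, 0.3]) # higher risk should mean earlier event
print("c-index:", concordance_index(time, event, risk))
```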

  3. Evaluation of variable selection methods for random forests and omics data sets.

    PubMed

    Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke

    2017-10-16

    Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
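
    The shadow-feature idea behind Boruta can be sketched in a few lines; the real algorithm iterates this comparison with re-permuted shadows and adds statistical testing, so this is only an editor's illustration of the core trick:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           shuffle=False, random_state=0)  # first 5 informative
shadow = rng.permuted(X, axis=0)          # column-wise shuffle kills any link to y
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(np.hstack([X, shadow]), y)

imp = rf.feature_importances_
threshold = imp[20:].max()                # best importance among shadow features
print("selected variables:", np.flatnonzero(imp[:20] > threshold))
```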

  4. Locating landmarks on high-dimensional free energy surfaces

    PubMed Central

    Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark E.

    2015-01-01

    Coarse graining of complex systems possessing many degrees of freedom can often be a useful approach for analyzing and understanding key features of these systems in terms of just a few variables. The relevant energy landscape in a coarse-grained description is the free energy surface as a function of the coarse-grained variables, which, despite the dimensional reduction, can still be an object of high dimension. Consequently, navigating and exploring this high-dimensional free energy surface is a nontrivial task. In this paper, we use techniques from multiscale modeling, stochastic optimization, and machine learning to devise a strategy for locating minima and saddle points (termed “landmarks”) on a high-dimensional free energy surface “on the fly” and without requiring prior knowledge of or an explicit form for the surface. In addition, we propose a compact graph representation of the landmarks and connections between them, and we show that the graph nodes can be subsequently analyzed and clustered based on key attributes that elucidate important properties of the system. Finally, we show that knowledge of landmark locations allows for the efficient determination of their relative free energies via enhanced sampling techniques. PMID:25737545

  5. Learning an intrinsic-variable preserving manifold for dynamic visual tracking.

    PubMed

    Qiao, Hong; Zhang, Peng; Zhang, Bo; Zheng, Suiwu

    2010-06-01

    Manifold learning is a hot topic in the field of computer science, particularly since nonlinear dimensionality reduction based on manifold learning was proposed in Science in 2000. The work has achieved great success. The main purpose of current manifold-learning approaches is to search for independent intrinsic variables underlying high dimensional inputs which lie on a low dimensional manifold. In this paper, a new manifold is built up in the training step of the process, on which the input training samples are set to be close to each other if the values of their intrinsic variables are close to each other. Then, the process of dimensionality reduction is transformed into a procedure of preserving the continuity of the intrinsic variables. By utilizing the new manifold, the dynamic tracking of a human who can move and rotate freely is achieved. From the theoretical point of view, it is the first approach to transfer the manifold-learning framework to dynamic tracking. From the application point of view, a new and low dimensional feature for visual tracking is obtained and successfully applied to the real-time tracking of a free-moving object from a dynamic vision system. Experimental results from a dynamic tracking system which is mounted on a dynamic robot validate the effectiveness of the new algorithm.

  6. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    NASA Astrophysics Data System (ADS)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  7. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exascale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  8. Social Inferences from Faces: Ambient Images Generate a Three-Dimensional Model

    ERIC Educational Resources Information Center

    Sutherland, Clare A. M.; Oldmeadow, Julian A.; Santos, Isabel M.; Towler, John; Burt, D. Michael; Young, Andrew W.

    2013-01-01

    Three experiments are presented that investigate the two-dimensional valence/trustworthiness by dominance model of social inferences from faces (Oosterhof & Todorov, 2008). Experiment 1 used image averaging and morphing techniques to demonstrate that consistent facial cues subserve a range of social inferences, even in a highly variable sample of…

  9. Bayesian Analysis of High Dimensional Classification

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Subhadeep; Liang, Faming

    2009-12-01

    Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables is possibly much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these cases, there is a lot of interest in searching for sparse models in the high-dimensional regression/classification setup. We first discuss two common challenges for analyzing high-dimensional data. The first one is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, so that the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. In order to make Bayesian analysis operational in high dimension, we propose a novel hierarchical stochastic approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimension, and also possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in high-dimensional complex model spaces.

  10. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
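
    A sketch of the two-dimensional mapping described here, using t-SNE on the scikit-learn digits data as one common choice (PCA or a trained network's hidden layer would serve the same visualization purpose):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)              # 64-dimensional inputs
Z = TSNE(n_components=2, random_state=0).fit_transform(X)

# Plot the 2-D embedding; clusters reveal which inputs drive the labels.
plt.scatter(Z[:, 0], Z[:, 1], c=y, s=5, cmap="tab10")
plt.title("64-D digits mapped to 2-D for visual inspection")
plt.show()
```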

  11. Method and apparatus for multiple-projection, dual-energy x-ray absorptiometry scanning

    NASA Technical Reports Server (NTRS)

    Feldmesser, Howard S. (Inventor); Magee, Thomas C. (Inventor); Charles, Jr., Harry K. (Inventor); Beck, Thomas J. (Inventor)

    2007-01-01

    Methods and apparatuses for advanced, multiple-projection, dual-energy X-ray absorptiometry scanning systems include combinations of a conical collimator; a high-resolution two-dimensional detector; a portable, power-capped, variable-exposure-time power supply; an exposure-time control element; calibration monitoring; a three-dimensional anti-scatter-grid; and a gantry-gantry base assembly that permits up to seven projection angles for overlapping beams. Such systems are capable of high precision bone structure measurements that can support three dimensional bone modeling and derivations of bone strength, risk of injury, and efficacy of countermeasures among other properties.

  12. Improved imaging of cochlear nerve hypoplasia using a 3-Tesla variable flip-angle turbo spin-echo sequence and a 7-cm surface coil.

    PubMed

    Giesemann, Anja M; Raab, Peter; Lyutenski, Stefan; Dettmer, Sabine; Bültmann, Eva; Frömke, Cornelia; Lenarz, Thomas; Lanfermann, Heinrich; Goetz, Friedrich

    2014-03-01

    Magnetic resonance imaging of the temporal bone has an important role in decision making with regard to cochlea implantation, especially in children with cochlear nerve deficiency. The purpose of this study was to evaluate the usefulness of the combination of an advanced high-resolution T2-weighted sequence with a surface coil in a 3-Tesla magnetic resonance imaging scanner in cases of suspected cochlear nerve aplasia. Prospective study. Seven patients with cochlear nerve hypoplasia or aplasia were prospectively examined using a high-resolution three-dimensional variable flip-angle turbo spin-echo sequence using a surface coil, and the images were compared with the same sequence in standard resolution using a standard head coil. Three neuroradiologists evaluated the magnetic resonance images independently, rating the visibility of the nerves in diagnosing hypoplasia or aplasia. Eight ears in seven patients with hypoplasia or aplasia of the cochlear nerve were examined. The average age was 2.7 years (range, 9 months-5 years). Seven ears had accompanying malformations. The inter-rater reliability in diagnosing hypoplasia or aplasia was greater using the high-resolution three-dimensional variable flip-angle turbo spin-echo sequence (fixed-marginal kappa: 0.64) than with the same sequence in lower resolution (fixed-marginal kappa: 0.06). Examining cases of suspected cochlear nerve aplasia using the high-resolution three-dimensional variable flip-angle turbo spin-echo sequence in combination with a surface coil shows significant improvement over standard methods. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
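
    The motivating problem is easy to demonstrate: with tiny sample sizes, per-variable variance estimates are unstable and t-like statistics have heavy tails. The sketch below shrinks variances toward a pooled value with a fixed weight (the MVR package chooses the regularization adaptively; this is only an editor's illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 5000, 4                          # many variables, tiny sample size
data = rng.standard_normal((p, n))      # null "omics" matrix: no true effects

s2 = data.var(axis=1, ddof=1)           # noisy per-variable variance estimates
pooled = s2.mean()
lam = 0.7                               # shrinkage weight (data-driven in MVR)
s2_shrunk = lam * pooled + (1 - lam) * s2

t_raw = data.mean(axis=1) / np.sqrt(s2 / n)
t_reg = data.mean(axis=1) / np.sqrt(s2_shrunk / n)
print("false extremes |t| > 4, raw vs regularized:",
      np.sum(np.abs(t_raw) > 4), np.sum(np.abs(t_reg) > 4))
```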

  14. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  15. Dimensional reduction in sensorimotor systems: A framework for understanding muscle coordination of posture

    PubMed Central

    Ting, Lena H.

    2014-01-01

    The simple act of standing up is an important and essential motor behavior that most humans and animals achieve with ease. Yet, maintaining standing balance involves complex sensorimotor transformations that must continually integrate a large array of sensory inputs and coordinate multiple motor outputs to muscles throughout the body. Multiple, redundant local sensory signals are integrated to form an estimate of a few global, task-level variables important to postural control, such as body center of mass position and body orientation with respect to Earth-vertical. Evidence suggests that a limited set of muscle synergies, reflecting preferential sets of muscle activation patterns, are used to move task variables such as center of mass position in a predictable direction following a postural perturbation. We propose a hierarchical feedback control system that allows the nervous system the simplicity of performing goal-directed computations in task-variable space, while maintaining the robustness afforded by redundant sensory and motor systems. We predict that modulation of postural actions occurs in task-variable space, and in the associated transformations between the low-dimensional task space and high-dimensional sensor and muscle spaces. Development of neuromechanical models that reflect these neural transformations between low- and high-dimensional representations will reveal the organizational principles and constraints underlying sensorimotor transformations for balance control, and perhaps motor tasks in general. This framework and accompanying computational models could be used to formulate specific hypotheses about how specific sensory inputs and motor outputs are generated and altered following neural injury, sensory loss, or rehabilitation. PMID:17925254

  16. Model-based Clustering of High-Dimensional Data in Astrophysics

    NASA Astrophysics Data System (ADS)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently high-dimensional and available in bulk or as streams. Model-based techniques for clustering are popular tools, renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and allow high-dimensional data to be classified efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  17. Probabilistic and spatially variable niches inferred from demography

    Treesearch

    Jeffrey M. Diez; Itamar Giladi; Robert Warren; H. Ronald Pulliam

    2014-01-01

    Mismatches between species distributions and habitat suitability are predicted by niche theory and have important implications for forecasting how species may respond to environmental changes. Quantifying these mismatches is challenging, however, due to the high dimensionality of species niches and the large spatial and temporal variability in population...

  18. Quantile Regression for Analyzing Heterogeneity in Ultra-high Dimension

    PubMed Central

    Wang, Lan; Wu, Yichao

    2012-01-01

    Ultra-high dimensional data often display heterogeneity due to either heteroscedastic variance or other forms of non-location-scale covariate effects. To accommodate heterogeneity, we advocate a more general interpretation of sparsity which assumes that only a small number of covariates influence the conditional distribution of the response variable given all candidate covariates; however, the sets of relevant covariates may differ when we consider different segments of the conditional distribution. In this framework, we investigate the methodology and theory of nonconvex penalized quantile regression in ultra-high dimension. The proposed approach has two distinctive features: (1) it enables us to explore the entire conditional distribution of the response variable given the ultra-high dimensional covariates and provides a more realistic picture of the sparsity pattern; (2) it requires substantially weaker conditions compared with alternative methods in the literature; thus, it greatly alleviates the difficulty of model checking in the ultra-high dimension. In the theoretical development, it is challenging to deal with both the nonsmooth loss function and the nonconvex penalty function in ultra-high dimensional parameter space. We introduce a novel sufficient optimality condition which relies on a convex differencing representation of the penalized loss function and the subdifferential calculus. Exploring this optimality condition enables us to establish the oracle property for sparse quantile regression in the ultra-high dimension under relaxed conditions. The proposed method greatly enhances existing tools for ultra-high dimensional data analysis. Monte Carlo simulations demonstrate the usefulness of the proposed procedure. The real data example we analyzed demonstrates that the new approach reveals substantially more information compared with alternative methods. PMID:23082036
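
    Sparse quantile regression at several quantile levels can be sketched with scikit-learn's QuantileRegressor, which minimizes the pinball loss plus an L1 penalty (the paper studies nonconvex penalties, which need specialized solvers; the data and tuning below are illustrative):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(7)
n, p = 150, 400
X = rng.standard_normal((n, p))
X[:, 1] = rng.uniform(0, 2, n)           # keep the scale variable positive
# Heteroscedastic model: column 0 shifts the response, column 1 widens its
# distribution, so column 1 matters in the tails but not at the median.
y = 2 * X[:, 0] + (1 + X[:, 1]) * rng.standard_normal(n)

for tau in (0.5, 0.9):
    fit = QuantileRegressor(quantile=tau, alpha=0.05).fit(X, y)
    print(f"tau={tau}: selected", np.flatnonzero(np.abs(fit.coef_) > 1e-8))
```

    Running the two fits illustrates the paper's point that the set of relevant covariates can differ across segments of the conditional distribution.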

  19. REASSESSING MECHANISM AS A PREDICTOR OF PEDIATRIC INJURY MORTALITY

    PubMed Central

    Beck, Haley; Mittal, Sushil; Madigan, David; Burd, Randall S.

    2015-01-01

    Background The use of mechanism of injury as a predictor of injury outcome presents practical challenges because this variable may be missing or inaccurate in many databases. The purpose of this study was to determine the importance of mechanism of injury as a predictor of mortality among injured children. Methods The records of children (<15 years old) sustaining a blunt injury were obtained from the National Trauma Data Bank. Models predicting injury mortality were developed using mechanism of injury and injury coding using either Abbreviated Injury Scale post-dot values (low-dimensional injury coding) or injury ICD-9 codes and their two-way interactions (high-dimensional injury coding). Model performance with and without inclusion of mechanism of injury was compared for both coding schemes, and the relative importance of mechanism of injury as a variable in each model type was evaluated. Results Among 62,569 records, a mortality rate of 0.9% was observed. Inclusion of mechanism of injury improved model performance when using low-dimensional injury coding but was associated with no improvement when using high-dimensional injury coding. Mechanism of injury contributed to 28% of model variance when using low-dimensional injury coding and <1% when high-dimensional injury coding was used. Conclusions Although mechanism of injury may be an important predictor of injury mortality among children sustaining blunt trauma, its importance as a predictor of mortality depends on the approach used for injury coding. Mechanism of injury is not an essential predictor of outcome after injury when coding schemes are used that better characterize injuries sustained after blunt pediatric trauma. PMID:26197948

  20. Clustering high-dimensional mixed data to uncover sub-phenotypes: joint analysis of phenotypic and genotypic data.

    PubMed

    McParland, D; Phillips, C M; Brennan, L; Roche, H M; Gormley, I C

    2017-12-10

    The LIPGENE-SU.VI.MAX study, like many others, recorded high-dimensional continuous phenotypic data and categorical genotypic data. LIPGENE-SU.VI.MAX focuses on the need to account for both phenotypic and genetic factors when studying the metabolic syndrome (MetS), a complex disorder that can lead to higher risk of type 2 diabetes and cardiovascular disease. Interest lies in clustering the LIPGENE-SU.VI.MAX participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model that elegantly accommodates high dimensional, mixed data is developed to cluster LIPGENE-SU.VI.MAX participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory, which notably includes phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, 7 years after the LIPGENE-SU.VI.MAX data were collected, participants underwent further analysis to diagnose presence or absence of the MetS. The two uncovered sub-phenotypes strongly correspond to the 7-year follow-up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to define the uncertainty in sub-phenotype membership at the participant level is synonymous with the concepts of precision medicine and nutrition. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Development for 2D pattern quantification method on mask and wafer

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Mito, Hiroaki; Toyoda, Yasutaka; Wang, Zhigang

    2010-03-01

    We have developed an effective method for 2-dimensional metrology of mask and silicon patterns. The aim of this method is to evaluate the performance of silicon patterns corresponding to hotspots on a mask. The method adopts a metrology management system based on DBM (Design Based Metrology), using highly accurate contours created by the edge detection algorithms of mask CD-SEM and silicon CD-SEM. As semiconductor manufacturing moves toward ever smaller feature sizes, increasingly aggressive optical proximity correction (OPC) is needed to drive resolution enhancement technology (RET). In other words, there is a trade-off between highly precise RET and mask manufacture, and this has a large impact on the semiconductor market, which centers on the mask business. Two-dimensional shape quantification is important as an optimal solution to these problems. Although 1-dimensional shape measurement has been performed with conventional techniques, 2-dimensional shape management is needed in mass-production lines influenced by RET. We therefore developed a technique for analyzing the distribution of shape edge performance as a shape management method. Moreover, silicon shapes from a mass-production line exhibit both roughness and shape variation, so quantifying the silicon shape is important for estimating pattern performance. To quantify it, identical shapes are averaged in two dimensions and evaluated on the basis of the averaged shape. In this study, we conducted experiments on this pattern-averaging method (Measurement Based Contouring) as a two-dimensional mask and silicon evaluation technique; that is, identical positions on the mask and the silicon were observed, which makes it possible to analyze edge variability at the same position with high precision. The results proved the detection accuracy and reliability of the method for two-dimensional pattern variability (mask and silicon), and it is applicable to the following fields of mask quality management: estimating the correlation between shape variability and the process margin; determining the two-dimensional variability of a pattern; and verifying the pattern performance of various kinds of hotspots. In this report, we introduce the experimental results and their application. We expect that this mask measurement and shape control in mask production will contribute greatly to mask yield enhancement, and that the DFM solution for the mask quality control process will become a much more important technology than ever. From this viewpoint, it is very important to observe the shape of the same location across Design, Mask, and Silicon.

  2. Modeling Associations among Multivariate Longitudinal Categorical Variables in Survey Data: A Semiparametric Bayesian Approach

    ERIC Educational Resources Information Center

    Tchumtchoua, Sylvie; Dey, Dipak K.

    2012-01-01

    This paper proposes a semiparametric Bayesian framework for the analysis of associations among multivariate longitudinal categorical variables in high-dimensional data settings. This type of data is frequent, especially in the social and behavioral sciences. A semiparametric hierarchical factor analysis model is developed in which the…

  3. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.
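
    The following is a crude Python analogue, not VINK itself: it fits kernel PCA and then ranks the original features by their correlation with the embedding coordinates. It only illustrates the notion of variable ranking on a nonlinear embedding; all data are synthetic.

        # Crude illustration of ranking variables on a nonlinear (KPCA)
        # embedding; a stand-in for, not an implementation of, VINK.
        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        X = rng.normal(size=(150, 40))
        X[:, 0] = np.sin(X[:, 1]) + 0.1 * rng.normal(size=150)  # nonlinear dependence

        Xs = StandardScaler().fit_transform(X)
        Z = KernelPCA(n_components=3, kernel="rbf").fit_transform(Xs)
        # |correlation| of each feature with each embedding axis, summed over axes
        R = np.abs(np.corrcoef(X.T, Z.T)[:X.shape[1], X.shape[1]:])
        importance = R.sum(axis=1)
        print(np.argsort(importance)[::-1][:5])   # top-ranked features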

  4. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hero, Alfred O.; Rajaratnam, Bala

    When can reliable inference be drawn in the "Big Data" context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
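
    A small simulation conveys why the sample-starved regime is delicate: with n fixed and p large, even mutually independent variables produce spuriously large sample correlations. The threshold and sizes below are arbitrary.

        # Correlation mining in the sample-starved regime: n fixed, p large.
        # Pure noise still yields pairs whose sample correlation exceeds a
        # fixed threshold, which is why sample-complexity theory matters.
        import numpy as np

        rng = np.random.default_rng(3)
        n, p, thresh = 20, 2000, 0.8
        X = rng.normal(size=(n, p))               # independent variables: no true correlation
        C = np.corrcoef(X, rowvar=False)
        iu = np.triu_indices(p, k=1)
        hits = np.count_nonzero(np.abs(C[iu]) > thresh)
        print(f"{hits} of {iu[0].size} pairs exceed |rho| > {thresh} by chance alone")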

  5. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    PubMed Central

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  6. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE PAGES

    Hero, Alfred O.; Rajaratnam, Bala

    2015-12-09

    When can reliable inference be drawn in the "Big Data" context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  7. Variable dimensionality in the uranium fluoride/2-methyl-piperazine system: Synthesis and structures of UFO-5, -6, and -7; Zero-, one-, and two-dimensional materials with unprecedented topologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francis, R.J.; Halasyamani, P.S.; Bee, J.S.

    Recently, low temperature (T < 300 C) hydrothermal reactions of inorganic precursors in the presence of organic cations have proven highly productive for the synthesis of novel solid-state materials. Interest in these materials is driven by the astonishingly diverse range of structures produced, as well as by their many potential materials chemistry applications. This report describes the high yield, phase pure hydrothermal syntheses of three new uranium fluoride phases with unprecedented structure types. Through the systematic control of the synthesis conditions the authors have successfully controlled the architecture and dimensionality of the phase formed and selectively synthesized novel zero-, one-, and two-dimensional materials.

  8. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    PubMed Central

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954
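
    The GUI itself is MATLAB; the Python sketch below only mimics its core idea: reduce the population activity to a roughly 10-dimensional latent space, then look at several random orthonormal 2D projections of it. The simulated spike counts are invented.

        # Core idea of the GUI in miniature: reduce, then sweep random
        # orthonormal 2D projections of the reduced-dimensional space.
        import numpy as np
        import matplotlib.pyplot as plt
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        activity = rng.poisson(5.0, size=(300, 80)).astype(float)  # time bins x neurons
        Z = PCA(n_components=10).fit_transform(activity)

        fig, axes = plt.subplots(1, 3, figsize=(9, 3))
        for ax in axes:
            Q, _ = np.linalg.qr(rng.normal(size=(10, 2)))  # random orthonormal 2D basis
            P = Z @ Q
            ax.plot(P[:, 0], P[:, 1], lw=0.5)
        plt.show()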

  9. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    PubMed

    Cowley, Benjamin R; Kaufman, Matthew T; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity.

  10. The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data.

    PubMed

    Cheng, Qiang; Zhou, Hongbo; Cheng, Jie

    2011-06-01

    Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain local optimum instead of global optimum. Toward the selection of the globally optimal subset of features efficiently, we introduce a new selector--which we call the Fisher-Markov selector--to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
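
    As a simplified stand-in for the selector described above, the sketch below ranks features by the classical per-feature Fisher score (between-class scatter over within-class scatter); the actual Fisher-Markov selector optimizes a related objective jointly over features using Markov random field techniques. Data are synthetic.

        # Classical per-feature Fisher score as a simplified stand-in.
        import numpy as np

        def fisher_score(X, y):
            classes = np.unique(y)
            mu = X.mean(axis=0)
            num = np.zeros(X.shape[1])
            den = np.zeros(X.shape[1])
            for c in classes:
                Xc = X[y == c]
                num += Xc.shape[0] * (Xc.mean(axis=0) - mu) ** 2   # between-class scatter
                den += Xc.shape[0] * Xc.var(axis=0)                # within-class scatter
            return num / np.maximum(den, 1e-12)

        rng = np.random.default_rng(5)
        X = rng.normal(size=(60, 1000))
        y = np.repeat([0, 1, 2], 20)
        X[y == 1, :3] += 2.0                      # make the first features informative
        print(np.argsort(fisher_score(X, y))[::-1][:5])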

  11. Differences in aquatic habitat quality as an impact of one- and two-dimensional hydrodynamic model simulated flow variables

    NASA Astrophysics Data System (ADS)

    Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.

    2013-12-01

    Aquatic habitat models utilize flow variables which may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present an analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on the simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA): a straight pool-riffle reach (South Fork Boise River), small sinuous pool-riffle streams in a large meadow (Bear Valley Creek), and a steep, confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that the 1D and 2D modeling approaches affect both the spatial distribution of habitat and WUA for both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small, but depended on stream type. Nevertheless, the differences in spatially distributed habitat quality are considerable in all streams. The steep, confined plane-bed stream had larger differences between aquatic habitat quality defined with 1D and 2D flow models than streams with well-defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches

  12. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    DOE PAGES

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...

    2017-10-10

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.

  13. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.

  14. A review of covariate selection for non-experimental comparative effectiveness research.

    PubMed

    Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler

    2013-11-01

    This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.

  15. A Review of Covariate Selection for Nonexperimental Comparative Effectiveness Research

    PubMed Central

    Sauer, Brian C.; Brookhart, Alan; Roy, Jason; Vanderweele, Tyler

    2014-01-01

    This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research (CER), and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, while adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. PMID:24006330

  16. Normal forms for reduced stochastic climate models

    PubMed Central

    Majda, Andrew J.; Franzke, Christian; Crommelin, Daan

    2009-01-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here, techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
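
    For intuition, the sketch below integrates a scalar normal form of the kind described above, with cubic damping and correlated additive and multiplicative (CAM) noise, using Euler-Maruyama; all coefficients are arbitrary illustrative choices, not values from the paper.

        # Illustrative Euler-Maruyama simulation of a scalar normal form with
        # strong cubic damping and CAM noise:
        #   dx = (f + a*x - c*x**3) dt + (sA + sM*x) dW.
        import numpy as np

        rng = np.random.default_rng(6)
        f, a, c, sA, sM = 0.0, 0.5, 1.0, 0.3, 0.4   # arbitrary coefficients
        dt, nsteps = 1e-3, 100_000
        x = np.empty(nsteps)
        x[0] = 0.0
        dW = rng.normal(scale=np.sqrt(dt), size=nsteps - 1)
        for t in range(nsteps - 1):
            drift = f + a * x[t] - c * x[t] ** 3
            x[t + 1] = x[t] + drift * dt + (sA + sM * x[t]) * dW[t]
        print(x.mean(), x.std())   # CAM noise typically skews the stationary density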

  17. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two-dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two-dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, with the flux contribution at each point calculated in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two-dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.

  18. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin

    PubMed Central

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-01-01

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments, many problems remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important unsolved problems is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868

  19. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography

    NASA Astrophysics Data System (ADS)

    Andrade, D.; Nachbin, A.

    2018-06-01

    Surface water waves are considered propagating over highly variable non-smooth topographies. For this three-dimensional problem, a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care, and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based method is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.

  20. Visions of visualization aids - Design philosophy and observations

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1989-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two-dimensional or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or a parsimonious description of phenomena implicit in the data. Useful visualization, in contrast to glitzy, high-tech, computer-graphics imagery, is generally based on preexisting theoretical beliefs concerning the underlying phenomena. These beliefs guide selection and formatting of the plotted variables. Visualization tools are useful for understanding naturally three-dimensional data bases such as those used by pilots or astronauts. Two examples of such aids for spatial maneuvering illustrate that informative geometric distortion may be introduced to assist visualization and that visualization of complex dynamics alone may not be adequate to provide the necessary insight into the underlying processes.

  1. State estimation and prediction using clustered particle filters.

    PubMed

    Lee, Yoonsang; Majda, Andrew J

    2016-12-20

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
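
    For reference, the sketch below implements only the standard bootstrap particle filter on a scalar toy model; the clustered particle filter adds coarse-grained localization and particle adjustment on top of this basic scheme, neither of which is implemented here.

        # Standard bootstrap particle filter on a bistable scalar toy model
        # (not the clustered variant, and not Lorenz 96).
        import numpy as np

        rng = np.random.default_rng(7)
        T, N, q, r = 100, 500, 0.5, 0.5            # steps, particles, noise std devs

        # Simulate a truth and noisy observations; the tanh drift is bistable,
        # giving mildly non-Gaussian behavior.
        truth = np.zeros(T)
        for t in range(1, T):
            truth[t] = 2.0 * np.tanh(truth[t - 1]) + q * rng.normal()
        obs = truth + r * rng.normal(size=T)

        particles = rng.normal(size=N)
        est = np.zeros(T)
        for t in range(T):
            # propagate, weight by the observation likelihood, then resample
            particles = 2.0 * np.tanh(particles) + q * rng.normal(size=N)
            w = np.exp(-0.5 * ((obs[t] - particles) / r) ** 2)
            w /= w.sum()
            est[t] = np.dot(w, particles)
            particles = rng.choice(particles, size=N, p=w)   # multinomial resampling
        print(np.sqrt(np.mean((est - truth) ** 2)))          # filter RMSE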

  2. State estimation and prediction using clustered particle filters

    PubMed Central

    Lee, Yoonsang; Majda, Andrew J.

    2016-01-01

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors. PMID:27930332

  3. Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

    USGS Publications Warehouse

    Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.

    2003-01-01

    This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying and multivariable aspects of a real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse the multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Maps (KSOFM) neural network is applied to analyse the multi-dimensional process data, and to diagnose the inter-relationship of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information is discovered. The operating condition and the inter-relationship among the process variables in the WTP have been diagnosed and extracted by the information obtained from the clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
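
    A minimal NumPy self-organizing map indicates the mechanics that KSOFM builds on; an actual WTP analysis would use a full SOM implementation with component-plane visualization on real process data. The grid size, schedules, and data below are arbitrary.

        # Minimal self-organizing map: find the best matching unit, then pull
        # its grid neighborhood toward the sample, with decaying schedules.
        import numpy as np

        rng = np.random.default_rng(8)
        X = rng.normal(size=(500, 6))                 # 500 samples of 6 process variables
        gx, gy = 8, 8                                 # 8x8 map
        W = rng.normal(size=(gx * gy, 6))
        coords = np.array([(i, j) for i in range(gx) for j in range(gy)], dtype=float)

        n_iter = 3000
        for it in range(n_iter):
            frac = it / n_iter
            lr = 0.5 * (1 - frac)                     # decaying learning rate
            sigma = 3.0 * (1 - frac) + 0.5            # decaying neighborhood radius
            x = X[rng.integers(len(X))]
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)    # grid distances to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))                # neighborhood function
            W += lr * h[:, None] * (x - W)

        # "Component plane" for variable 0: the learned weight at each map node.
        print(W[:, 0].reshape(gx, gy).round(2))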

  4. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
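
    The sketch below shows a regularized alternating least-squares fit in the simplest two-factor (matrix) case; the paper's separated representations extend this to many dimensions and use a gradient-based roughening matrix rather than the plain Tikhonov term used here.

        # Alternating least squares for a rank-r fit of X ~ U V^T, where each
        # half-step is a ridge (Tikhonov-regularized) least-squares solve.
        import numpy as np

        rng = np.random.default_rng(9)
        n, p, r, lam = 80, 60, 3, 1e-2
        X = rng.normal(size=(n, r)) @ rng.normal(size=(r, p)) + 0.01 * rng.normal(size=(n, p))

        U = rng.normal(size=(n, r))
        V = rng.normal(size=(p, r))
        I = lam * np.eye(r)
        for _ in range(50):
            U = X @ V @ np.linalg.inv(V.T @ V + I)     # solve for U with V fixed
            V = X.T @ U @ np.linalg.inv(U.T @ U + I)   # solve for V with U fixed
        print(np.linalg.norm(X - U @ V.T) / np.linalg.norm(X))  # relative error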

  5. Phase-space finite elements in a least-squares solution of the transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drumm, C.; Fan, W.; Pautz, S.

    2013-07-01

    The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)

  6. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification

    PubMed Central

    Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.

    2013-01-01

    Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
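
    A hedged stand-in in Python: a one-vs-rest linear SVM with an L1 penalty, which yields sparse per-class weight vectors. The paper instead builds soft-thresholding penalties into the Crammer and Singer multi-class SVM itself (and provides MATLAB code at the URL above). Data are synthetic.

        # One-vs-rest L1-penalized linear SVM as a sparse multi-class stand-in
        # (liblinear's Crammer-Singer variant does not support the L1 penalty).
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(10)
        n, p = 90, 2000                              # many genes, few samples
        X = rng.normal(size=(n, p))
        y = np.repeat([0, 1, 2], 30)
        X[y == 1, :5] += 1.5                         # a handful of informative "genes"
        X[y == 2, 5:10] -= 1.5

        clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X, y)
        selected = np.unique(np.nonzero(clf.coef_)[1])
        print(len(selected), selected[:20])          # small selected gene set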

  8. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics

    NASA Astrophysics Data System (ADS)

    Wehmeyer, Christoph; Noé, Frank

    2018-06-01

    Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
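
    A linear analogue conveys the idea: fit the least-squares map from x(t) to x(t+tau) and keep its dominant singular direction as a one-dimensional bottleneck; the time-lagged autoencoder replaces this linear map with a nonlinear encoder/decoder pair. The toy data below contain one slow hidden process.

        # Linear analogue of a time-lagged autoencoder: low-rank least-squares
        # map from x(t) to x(t+tau), truncated via SVD.
        import numpy as np

        rng = np.random.default_rng(11)
        T, tau = 5000, 5
        s = np.zeros(T)
        for t in range(1, T):                        # one slow hidden process
            s[t] = 0.99 * s[t - 1] + 0.1 * rng.normal()
        mix = rng.normal(size=(1, 20))
        X = s[:, None] @ mix + 0.5 * rng.normal(size=(T, 20))   # 20 noisy observables

        X0 = X[:-tau] - X[:-tau].mean(axis=0)
        X1 = X[tau:] - X[tau:].mean(axis=0)
        A = np.linalg.lstsq(X0, X1, rcond=None)[0]   # least-squares propagator
        U, sv, Vt = np.linalg.svd(A)
        z = X0 @ U[:, :1]                            # 1-D "slow" latent coordinate
        print(np.abs(np.corrcoef(z[:, 0], s[:-tau])[0, 1]))   # tracks the hidden process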

  9. Study of three-dimensional effects on vortex breakdown

    NASA Technical Reports Server (NTRS)

    Salas, M. D.; Kuruvila, G.

    1988-01-01

    The incompressible axisymmetric steady Navier-Stokes equations in primitive variables are used to simulate vortex breakdown. The equations, discretized using a second-order, central-difference scheme, are linearized and then solved using an exact LU decomposition, Gaussian elimination, and Newton iteration. Solutions are presented for Reynolds numbers, based on vortex-core radius, as high as 1500. An attempt to study the stability of the axisymmetric solutions against three-dimensional perturbations is discussed.

  10. Parsimonious description for predicting high-dimensional dynamics

    PubMed Central

    Hirata, Yoshito; Takeuchi, Tomoya; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki

    2015-01-01

    When we observe a system, we often cannot observe all of its variables and may have access to only a limited set of measurements. Under such a circumstance, delay coordinates, vectors made of successive measurements, are useful to reconstruct the states of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, in practice there is a limitation because the calculation for higher-dimensional delay coordinates becomes more expensive. Here, we propose a parsimonious description of virtually infinite-dimensional delay coordinates by evaluating their distances with exponentially decaying weights. This description enables us to predict the future values of the measurements faster because we can reuse the calculated distances, and more accurately because the description naturally reduces the bias of the classical delay coordinates toward the stable directions. We demonstrate the proposed method with toy models of the atmosphere and real datasets related to renewable energy. PMID:26510518
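
    The sketch below follows the idea described above directly: histories are compared with exponentially decaying weights, and the next value is predicted from the closest analogue. The decay rate, history length, and logistic-map signal are arbitrary illustrative choices.

        # Weighted delay-coordinate analogue forecasting: compare measurement
        # histories under exponentially decaying weights, predict from the
        # nearest analogue's successor.
        import numpy as np

        T, K, lam = 2000, 60, 0.6
        x = np.zeros(T)
        x[0] = 0.4
        for t in range(T - 1):                       # logistic map as a toy signal
            x[t + 1] = 3.9 * x[t] * (1 - x[t])

        ref = T - 2                                  # predict x[ref + 1] from the past
        w = lam ** np.arange(K)                      # exponentially decaying weights
        cands = np.arange(K - 1, ref - K)            # candidate history end points
        dists = np.array([np.sum(w * (x[t::-1][:K] - x[ref::-1][:K]) ** 2)
                          for t in cands])
        best = cands[np.argmin(dists)]
        print(x[best + 1], x[ref + 1])               # analogue forecast vs. truth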

  11. Motion of gas in highly rarefied space

    NASA Astrophysics Data System (ADS)

    Chirkunov, Yu A.

    2017-10-01

    A model describing a motion of gas in a highly rarefied space received an unlucky number 13 in the list of the basic models of the motion of gas in the three-dimensional space obtained by L.V. Ovsyannikov. For a given initial pressure distribution, a special choice of mass Lagrangian variables leads to the system describing this motion for which the number of independent variables is less by one. Hence, there is a foliation of a highly rarefied gas with respect to pressure. In a strongly rarefied space for each given initial pressure distribution, all gas particles are localized on a two-dimensional surface that moves with time in this space We found some exact solutions of the obtained system that describe the processes taking place inside of the tornado. For this system we found all nontrivial conservation laws of the first order. In addition to the classical conservation laws the system has another conservation law, which generalizes the energy conservation law. With the additional condition we found another one generalized energy conservation law.

  12. A global × global test for testing associations between two large sets of variables.

    PubMed

    Chaturvedi, Nimisha; de Menezes, Renée X; Goeman, Jelle J

    2017-01-01

    In high-dimensional omics studies where multiple molecular profiles are obtained for each set of patients, there is often interest in identifying complex multivariate associations, for example, copy number regulated expression levels in a certain pathway or in a genomic region. To detect such associations, we present a novel approach to test for association between two sets of variables. Our approach generalizes the global test, which tests for association between a group of covariates and a single univariate response, to allow a high-dimensional multivariate response. We apply the method to several simulated datasets as well as two publicly available datasets, where we compare the performance of the multivariate global test (G2) with the univariate global test. The method is implemented in R and will be available as part of the globaltest package in R. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
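
    As a permutation analogue in Python (the actual G2 test lives in the R globaltest package and is score-test based), the sketch below tests association between two variable sets using the RV coefficient as the statistic, with synthetic data in which a few columns of X drive Y.

        # Permutation test of association between two variable sets, with the
        # RV coefficient as the test statistic.
        import numpy as np

        def rv_coefficient(X, Y):
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            Sxy = X.T @ Y
            Sxx = X.T @ X
            Syy = Y.T @ Y
            # tr(Sxy Syx) = ||Sxy||_F^2; Sxx, Syy are symmetric
            return np.sum(Sxy ** 2) / np.sqrt(np.sum(Sxx ** 2) * np.sum(Syy ** 2))

        rng = np.random.default_rng(13)
        n = 50
        X = rng.normal(size=(n, 100))                # e.g., copy number measurements
        Y = X[:, :5] @ rng.normal(size=(5, 200)) + rng.normal(size=(n, 200))  # expression

        obs = rv_coefficient(X, Y)
        null = np.array([rv_coefficient(X, Y[rng.permutation(n)]) for _ in range(500)])
        print((1 + np.sum(null >= obs)) / 501)       # permutation p-value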

  13. High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis

    PubMed Central

    Daye, Z. John; Chen, Jinbo; Li, Hongzhe

    2011-01-01

    Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from predictors that explain the error variances or from outliers. Further, we demonstrate the presence of heteroscedasticity in, and apply our method to, an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variation, and it leads to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833

  14. Climate and climate variability of the wind power resources in the Great Lakes region of the United States

    Treesearch

    X. Li; S. Zhong; X. Bian; W.E. Heilman

    2010-01-01

    The climate and climate variability of low-level winds over the Great Lakes region of the United States is examined using 30 year (1979-2008) wind records from the recently released North American Regional Reanalysis (NARR), a three-dimensional, high-spatial and temporal resolution, and dynamically consistent climate data set. The analyses focus on spatial distribution...

  15. Replicates in high dimensions, with applications to latent variable graphical models.

    PubMed

    Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han

    2016-12-01

    In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.

  16. A system of three-dimensional complex variables

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1986-01-01

    Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.
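
    For reference, the two-dimensional case the abstract appeals to: a function f(z) = u(x, y) + iv(x, y) of z = x + iy is analytic when the Cauchy-Riemann equations hold,

      ∂u/∂x = ∂v/∂y,    ∂u/∂y = -∂v/∂x,

    and the paper's necessary conditions for analyticity of a function of a 3-D complex variable are the analogous system in three dimensions.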

  17. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    NASA Astrophysics Data System (ADS)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial equation of motion of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high dimensional black holes.

  18. Effect of Branching on Rod-coil Polyimides as Membrane Materials for Lithium Polymer Batteries

    NASA Technical Reports Server (NTRS)

    Meador, Mary Ann B.; Cubon, Valerie A.; Scheiman, Daniel A.; Bennett, William R.

    2003-01-01

    This paper describes a series of rod-coil block co-polymers that produce easy-to-fabricate, dimensionally stable films with good ionic conductivity down to room temperature for use as electrolytes in lithium polymer batteries. The polymers consist of short, rigid-rod polyimide segments alternating with flexible polyalkylene oxide coil segments. The highly incompatible rods and coils should phase separate, especially in the presence of lithium ions. The coil phase would allow for conduction of lithium ions, while the rigid-rod phase would provide a high degree of dimensional stability. An optimization study was carried out to examine the effect of four variables (degree of branching, formulated molecular weight, polymerization solvent, and lithium salt concentration) on ionic conductivity, glass transition temperature, and dimensional stability in this system.

  19. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables

    NASA Astrophysics Data System (ADS)

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-01

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.

  20. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables.

    PubMed

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-07

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.
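
    A hedged sketch of the general idea (not the authors' SandCV code): learn a low-dimensional embedding with an off-the-shelf manifold learner, then fit a smooth interpolant so the collective variables become differentiable functions of the configuration, as biasing forces require. Isomap, the thin-plate-spline kernel, and the random ensemble below are illustrative choices.

      import numpy as np
      from sklearn.manifold import Isomap
      from scipy.interpolate import RBFInterpolator

      # ensemble of molecular configurations (random placeholders here)
      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 30))      # 500 configurations, 30 coordinates

      # 1) nonlinear manifold learning gives a low-dimensional embedding ...
      cv = Isomap(n_components=2).fit_transform(X)

      # 2) ... but no differentiable map; fit a smooth interpolant from
      #    configuration space to the embedding to recover one
      smooth_cv = RBFInterpolator(X, cv, kernel="thin_plate_spline", smoothing=1e-3)

      def cv_gradient(x, eps=1e-4):
          # Finite-difference gradient of the smooth CV map (needed for biasing forces).
          g = np.zeros((2, x.size))
          for i in range(x.size):
              dx = np.zeros_like(x); dx[i] = eps
              g[:, i] = (smooth_cv((x + dx)[None]) - smooth_cv((x - dx)[None]))[0] / (2 * eps)
          return g

      print(smooth_cv(X[:1]), cv_gradient(X[0]).shape)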

  1. STILTS Plotting Tools

    NASA Astrophysics Data System (ADS)

    Taylor, M. B.

    2009-09-01

    The new plotting functionality in version 2.0 of STILTS is described. STILTS is a mature and powerful package for all kinds of table manipulation, and this version adds facilities for generating plots from one or more tables to its existing wide range of non-graphical capabilities. 2- and 3-dimensional scatter plots and 1-dimensional histograms may be generated using highly configurable style parameters. Features include multiple dataset overplotting, variable transparency, 1-, 2- or 3-dimensional symmetric or asymmetric error bars, higher-dimensional visualization using color, and textual point labeling. Vector and bitmapped output formats are supported. The plotting options provide enough flexibility to perform meaningful visualization on datasets from a few points up to tens of millions. Arbitrarily large datasets can be plotted without heavy memory usage.

  2. Dimensional analysis of the endometrial cavity: how many dimensions should the ideal intrauterine device or system have?

    PubMed

    Goldstuck, Norman D

    2018-01-01

    The geometrical shape of the human uterus most closely approximates that of a prolate ellipsoid. The endometrial cavity itself is also likely to have the shape of a prolate ellipsoid, especially when the extension of the cervix is omitted. Using this information together with known endometrial cavity volumes and lateral and vertical dimensions, it is possible to calculate the anteroposterior (AP) dimension and obtain a complete evaluation of all possible dimensions of the endometrial cavity. These are singular observations and not part of any other study. The AP dimensions of the endometrial cavity of the uterus were calculated using the formula for the volume of a prolate ellipsoid to complete a three-dimensional picture of the endometrial cavity. The calculations confirm ultrasound imaging, which shows large variations in cavity size and shape. Known cavity volumes and length and breadth measurements indicate that the AP diameter may vary from 6.29 to 38.2 mm. These measurements confirm the difficulty of getting a fixed-frame intrauterine device (IUD) to adapt to a space of highly variable dimensions. This is especially true of three-dimensional IUDs. A one-dimensional frameless IUD is most likely to be able to conform to this highly variable space and shape. The endometrial cavity may assume many varied prolate ellipsoid configurations where one or more measurements may be too small to accommodate standard IUDs. A one-dimensional device is most likely to be able to be accommodated by most uterine cavities as compared to two- and three-dimensional devices.
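
    The arithmetic is easy to reproduce. Assuming the cavity is a prolate ellipsoid with diameters L (length), W (width), and AP, its volume is V = (π/6)·L·W·AP, so AP = 6V/(π·L·W); the inputs below are illustrative values, not the paper's data.

      from math import pi

      def ap_diameter(volume_ml, length_mm, width_mm):
          # AP diameter (mm) of a prolate ellipsoid from its volume and the
          # other two diameters: V = (pi/6) * L * W * AP, with 1 ml = 1000 mm^3.
          return 6.0 * (volume_ml * 1000.0) / (pi * length_mm * width_mm)

      # illustrative values only
      print(round(ap_diameter(2.0, 35.0, 25.0), 1))   # small cavity
      print(round(ap_diameter(10.0, 45.0, 35.0), 1))  # large cavity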

  3. HDMR methods to assess reliability in slope stability analyses

    NASA Astrophysics Data System (ADS)

    Kozubal, Janusz; Pula, Wojciech; Vessia, Giovanna

    2014-05-01

    Stability analyses of complex rock-soil deposits must consider the complex structure of discontinuities within the rock mass and embedded soil layers. These materials are characterized by high variability in physical and mechanical properties. Thus, to calculate the slope safety factor in stability analyses, two issues must be taken into account: (1) the uncertainties related to the structural setting of the rock-slope mass and (2) the variability in mechanical properties of soils and rocks. High Dimensional Model Representation (HDMR) (Chowdhury et al. 2009; Chowdhury and Rao 2010) can be used to compute the reliability index within complex rock-soil slopes when numerous random variables with high coefficients of variation are considered. HDMR implements inverse reliability analysis, meaning that the unknown design parameters are sought such that prescribed reliability index values are attained. This approach uses implicit response functions according to the Response Surface Method (RSM). The simple RSM can be applied efficiently when fewer than four random variables are considered; as the number of variables increases, the efficiency of reliability index estimation decreases due to the large amount of calculation. Therefore, the HDMR method is used to improve the computational accuracy. In this study, sliding mechanisms in the Polish Flysch Carpathian Mountains have been studied by means of HDMR. The southern part of Poland, where the Carpathian Mountains are located, is characterized by a rather complicated sedimentary pattern of flysch rocky-soil deposits that can be simplified into three main categories: (1) normal flysch, consisting of adjacent sandstone and shale beds of approximately equal thickness; (2) shale flysch, where shale beds are thicker than adjacent sandstone beds; and (3) sandstone flysch, where the opposite holds. Landslides occur in all flysch deposit types, so several configurations of possible unstable settings (within fractured rocky-soil masses) resulting in sliding mechanisms have been investigated in this study. The reliability index values drawn from the HDMR method have been compared with conventional approaches such as neural networks; the efficiency of HDMR is shown in the case studied. References Chowdhury R., Rao B.N. and Prasad A.M. 2009. High-dimensional model representation for structural reliability analysis. Commun. Numer. Meth. Engng, 25: 301-337. Chowdhury R. and Rao B. 2010. Probabilistic Stability Assessment of Slopes Using High Dimensional Model Representation. Computers and Geotechnics, 37: 876-884.
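
    A minimal sketch of first-order cut-HDMR, the decomposition underlying the cited method: the response is approximated by a constant plus univariate component functions tabulated along lines through a cut point. The limit-state function and grids below are toy assumptions.

      import numpy as np

      def cut_hdmr_first_order(f, c, grids):
          # First-order cut-HDMR: f(x) ≈ f0 + sum_i f_i(x_i), with component
          # functions tabulated along lines through the cut point c.
          f0 = f(c)
          components = []
          for i, g in enumerate(grids):
              vals = []
              for xi in g:
                  x = c.copy(); x[i] = xi
                  vals.append(f(x) - f0)
              components.append(np.array(vals))
          return f0, components

      # toy limit-state function of three random variables
      f = lambda x: x[0] ** 2 + 0.5 * x[1] - 0.2 * x[0] * x[2]
      c = np.zeros(3)
      grids = [np.linspace(-3, 3, 7)] * 3
      f0, comps = cut_hdmr_first_order(f, c, grids)
      print(f0, [np.round(ci, 2) for ci in comps])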

  4. Two-Dimensional Bifurcated Inlet Variable Cowl Lip Test Completed in 10- by 10-Foot Supersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Hoffman, T. R.

    2000-01-01

    Researchers at the NASA Glenn Research Center at Lewis Field successfully tested a variable cowl lip inlet at simulated takeoff conditions in Glenn's 10- by 10-Foot Supersonic Wind Tunnel (10x10 SWT) as part of the High-Speed Research Program. The test was a follow-on to the Two-Dimensional Bifurcated (2DB) Inlet/Engine test. At the takeoff condition for a High-Speed Civil Transport aircraft, the inlet must provide adequate airflow to the engine with an acceptable distortion level and high pressure recovery. The test was conducted to study the effectiveness of installing two rotating lips on the 2DB Inlet cowls to increase mass flow rate and eliminate or reduce boundary layer flow separation near the lips. Hardware was mounted vertically in the test section so that it extended through the tunnel ceiling and the 2DB Inlet was exposed to the atmosphere above the test section. The tunnel was configured in the aerodynamic mode, and exhausters were used to pump down the tunnel to vacuum levels and to provide a maximum flow rate of approximately 58 lb/sec. The test determined the (1) maximum flow in the 2DB Inlet for each variable cowl lip, (2) distortion level and pressure recovery for each lip configuration, (3) boundary layer conditions near the variable lips inside the 2DB Inlet, (4) effects of a wing structure adjacent to the 2DB Inlet, and (5) effects of different 2DB Inlet exit configurations. It also employed flow visualization to generate enough qualitative data on the variable lips to optimize the variable lip concept. This test was a collaborative effort between the Boeing Company and Glenn. Extensive in-house support at Glenn contributed significantly to the progress and accomplishment of this test.

  5. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    NASA Astrophysics Data System (ADS)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has become a very popular research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for the large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran, earthquake was used to identify earthquake damage. Both spectral and textural information were used during classification. For the textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using gray level co-occurrence matrices with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture input-output relationships in high-dimensional systems for many problems in science and engineering, and it is designed to improve the efficiency of deducing high-dimensional behaviors. The method is formed by a particular organization of low-dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
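
    For the texture step, Haralick features from a gray-level co-occurrence matrix can be computed with scikit-image; the window size, distances, and angles below are illustrative assumptions (and the functions are spelled greycomatrix/greycoprops in scikit-image versions before 0.19).

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def haralick_window_features(window, distances=(1,),
                                   angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
          # Second-order texture features from one panchromatic image window.
          glcm = graycomatrix(window, distances=distances, angles=angles,
                              levels=256, symmetric=True, normed=True)
          return {p: graycoprops(glcm, p).mean()   # average over distances/angles
                  for p in ("contrast", "homogeneity", "energy", "correlation")}

      rng = np.random.default_rng(0)
      window = rng.integers(0, 256, size=(33, 33), dtype=np.uint8)  # placeholder window
      print(haralick_window_features(window))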

  6. One- and Two-dimensional Solitary Wave States in the Nonlinear Kramers Equation with Movement Direction as a Variable

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Ishibashi, Kazuya

    2018-06-01

    We study self-propelled particles by direct numerical simulation of the nonlinear Kramers equation. In our previous paper, we studied self-propelled particles with velocity variables in one dimension. In this paper, we consider another model in which each particle exhibits directional motion, with the movement direction expressed by a variable ϕ. We show that one-dimensional solitary wave states appear in direct numerical simulations of the nonlinear Kramers equation in one- and two-dimensional systems, generalizing our previous result. Furthermore, we find two-dimensionally localized states in the case where each self-propelled particle exhibits rotational motion. The center of mass of the two-dimensionally localized state exhibits circular motion, which implies collective rotating motion. Finally, we consider a simple one-dimensional model equation to qualitatively understand the formation of the solitary wave state.

  7. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    NASA Astrophysics Data System (ADS)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-01

    Molecular dynamics simulations are useful for obtaining thermodynamic and kinetic properties of biomolecules, but they are limited by the time scale barrier: properties may not be obtained efficiently because simulations of microseconds or longer must be run with femtosecond time steps. To overcome this time scale barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "concurrent adaptive sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to use only one or two collective variables, unlike most reaction-coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance sampling in the high-dimensional space. This is especially useful for systems in which we do not know what the right reaction coordinates are, in which case we can use many collective variables to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling of the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and biomolecules, specifically penta-alanine and a triazine trimer.
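
    A hedged sketch of the weighted-ensemble split/merge step that WE-based methods such as CAS build on (generic WE bookkeeping, not the CAS algorithm itself): walkers in a macrostate are merged or cloned so the bin holds a target number of walkers while total probability weight is conserved.

      import numpy as np

      def resample_bin(weights, target=4, rng=None):
          # Split/merge walkers within one macrostate so it holds `target`
          # walkers while conserving total probability weight.
          rng = rng or np.random.default_rng()
          w = list(weights)
          while len(w) > target:                   # merge the two lightest walkers
              i, j = np.argsort(w)[:2]
              keep = i if rng.random() < w[i] / (w[i] + w[j]) else j
              w[keep] = w[i] + w[j]
              del w[j if keep == i else i]
          while len(w) < target:                   # split the heaviest walker
              i = int(np.argmax(w))
              w[i] /= 2.0
              w.append(w[i])
          return w

      print(resample_bin([0.4, 0.3, 0.2, 0.05, 0.05], target=3))
      print(resample_bin([0.6, 0.4], target=4))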

  8. KP Equation in a Three-Dimensional Unmagnetized Warm Dusty Plasma with Variable Dust Charge

    NASA Astrophysics Data System (ADS)

    El-Shorbagy, Kh. H.; Mahassen, Hania; El-Bendary, Atef Ahmed

    2017-12-01

    In this work, we investigate the propagation of three-dimensional nonlinear dust-acoustic and dust-Coulomb waves in an unmagnetized warm dusty plasma consisting of electrons, ions, and charged dust particles. The grain charge fluctuation is incorporated through the current balance equation. Using the perturbation method, a Kadomtsev-Petviashvili (KP) equation is obtained. It is shown that the charge fluctuation modifies the wave structures and that the waves in such systems are unstable to higher-order long-wave perturbations.

  9. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
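
    A minimal sketch of a deterministic cellular-automaton traffic model in the spirit of the abstract (rules and parameters are illustrative): cars accelerate toward a maximum speed, brake to the gap ahead, and advance, with no stochastic slowdown.

      import numpy as np

      def step(pos, vel, L, vmax=5):
          # One parallel update: accelerate, brake to the gap ahead, then move.
          order = np.argsort(pos)
          pos, vel = pos[order], vel[order]
          gaps = (np.roll(pos, -1) - pos - 1) % L        # empty cells to the car ahead
          vel = np.minimum(np.minimum(vel + 1, vmax), gaps)
          return (pos + vel) % L, vel

      L, n = 100, 30                                     # circular road with 30 cars
      rng = np.random.default_rng(0)
      pos = rng.choice(L, size=n, replace=False)
      vel = np.zeros(n, dtype=int)
      for _ in range(200):
          pos, vel = step(pos, vel, L)
      print("mean speed:", vel.mean())                   # density 0.3 lies in the jammed phase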

  10. System for selecting relevant information for decision support.

    PubMed

    Kalina, Jan; Seidl, Libor; Zvára, Karel; Grünfeldová, Hana; Slovák, Dalibor; Zvárová, Jana

    2013-01-01

    We implemented a prototype of a decision support system called SIR, which takes the form of a web-based classification service for diagnostic decision support. The system has the ability to select the most relevant variables and to learn a classification rule that is guaranteed to be suitable also for high-dimensional measurements. The classification system can be useful for clinicians in primary care to support their decision-making tasks with relevant information extracted from any available clinical study. The implemented prototype was tested on a sample of patients in a cardiological study and performs information extraction from a high-dimensional set containing both clinical and gene expression data.

  11. Assessment of WENO-extended two-fluid modelling in compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Kitamura, Keiichi; Nonomura, Taku

    2017-03-01

    The two-fluid modelling based on an advection-upstream-splitting-method (AUSM)-family numerical flux function, AUSM+-up, following the work by Chang and Liou [Journal of Computational Physics 2007; 225: 840-873], has been successfully extended to fifth order using weighted essentially non-oscillatory (WENO) schemes, and its performance is surveyed in several numerical tests. The results showed the desired performance in one-dimensional benchmark test problems: without relying on an anti-diffusion device, the higher-order two-fluid method captures the phase interface within fewer grid points than the conventional second-order method, as well as a rarefaction wave and a very weak shock. At a high pressure ratio (e.g. 1,000), the choice of interpolated variables appeared to affect the performance: the conservative-variable-based characteristic-wise WENO interpolation showed less sharp but more robust representations of the shocks and expansions than the primitive-variable-based counterpart. In the two-dimensional shock/droplet test case, however, only the primitive-variable-based WENO with a huge void fraction realised a stable computation.
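
    For reference, a sketch of the classic fifth-order Jiang-Shu WENO reconstruction used in such extensions (component-wise, for a scalar; a characteristic-wise variant applies this after projecting onto characteristic fields):

      import numpy as np

      def weno5_left(v0, v1, v2, v3, v4, eps=1e-6):
          # Fifth-order WENO (Jiang-Shu) reconstruction of the left state at the
          # interface i+1/2 from cell averages v0..v4 = v[i-2..i+2].
          b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
          b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
          b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
          a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
          w = a / a.sum()                       # nonlinear weights
          p = np.array([(2*v0 - 7*v1 + 11*v2) / 6,
                        ( -v1 + 5*v2 +  2*v3) / 6,
                        (2*v2 + 5*v3 -   v4) / 6])
          return float(w @ p)

      # smooth data is reconstructed to high order; a jump triggers the weights
      print(weno5_left(*np.sin(0.1 * np.arange(5))))
      print(weno5_left(0.0, 0.0, 0.0, 1.0, 1.0))   # stencils across the jump get tiny weight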

  12. Variable-range-hopping magnetoresistance

    NASA Astrophysics Data System (ADS)

    Azbel, Mark Ya

    1991-03-01

    The hopping magnetoresistance R of a two-dimensional insulator with metallic impurities is considered. In sufficiently weak magnetic fields it increases or decreases depending on the impurity density n: it decreases if n is low and increases if n is high. In high magnetic fields B, it always increases exponentially with √B. Such fields yield a one-dimensional temperature dependence: ln R ~ 1/√T. The calculation provides an accurate leading approximation for small impurities with one eigenstate in their potential well. In the limit of infinitesimally small impurities, an impurity potential is described by a generalized function. This function, similar to a δ function, is localized at a point, but, contrary to a δ function in dimensions above one, it has finite eigenenergies. Such functions may be helpful in the study of scattering and localization of any waves.

  13. A time-dependent, three-dimensional model of the Delaware Bay and River system. Part 2: Three-dimensional flow fields and residual circulation

    NASA Astrophysics Data System (ADS)

    Galperin, Boris; Mellor, George L.

    1990-09-01

    The three-dimensional model of Delaware Bay, River and adjacent continental shelf was described in Part 1. Here, Part 2 of this two-part paper demonstrates that the model is capable of realistic simulation of current and salinity distributions, tidal cycle variability, events of strong mixing caused by high winds, and rapid salinity changes due to high river runoff. The 25-h average subtidal circulation strongly depends on the wind forcing. Monthly residual currents and salinity distributions demonstrate a classical two-layer estuarine circulation wherein relatively low-salinity water flows out at the surface and compensating high-salinity water from the shelf flows in at the bottom. The salinity intrusion is most vigorous along deep channels in the Bay. Winds can generate salinity fronts inside and outside the Bay and enhance or weaken the two-layer circulation pattern. Since the portion of the continental shelf included in the model is limited, the model shelf circulation is locally wind-driven and excludes such effects as coastally trapped waves and interaction with Gulf Stream rings; nevertheless, a significant portion of the coastal elevation variability is hindcast by the model. Also, inclusion of the shelf improves simulation of salinity inside the Bay compared with simulations where the salinity boundary condition is specified at the mouth of the Bay.

  14. A Comparative Study on Multifactor Dimensionality Reduction Methods for Detecting Gene-Gene Interactions with the Survival Phenotype

    PubMed Central

    Lee, Seungyeoun; Kim, Yongkang; Kwon, Min-Seok; Park, Taesung

    2015-01-01

    Genome-wide association studies (GWAS) have extensively analyzed single-SNP effects on a wide variety of common and complex diseases and found many genetic variants associated with disease. However, a large portion of the genetic variation is still left unexplained. This missing heritability problem might be due to the analytical strategy that limits analyses to single SNPs. One possible approach to the missing heritability problem is to identify multi-SNP effects or gene-gene interactions. The multifactor dimensionality reduction (MDR) method has been widely used to detect gene-gene interactions based on constructive induction, classifying high-dimensional genotype combinations into a one-dimensional variable with two attributes, high risk and low risk, for case-control studies. Many modifications of MDR have been proposed, and it has also been extended to the survival phenotype. In this study, we propose several extensions of MDR for the survival phenotype and compare the proposed extensions with earlier MDR through comprehensive simulation studies. PMID:26339630

  15. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

    In a reaction, determining an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can first build an initial path in the collective variable space by interpolation and then update the whole path iteratively in the optimization. However, such an interpolation method can be risky in a high-dimensional space for large molecules. On the path, steric clashes between neighboring atoms can cause extremely high energy barriers and thus derail the optimization. Moreover, performing simulations for all the snapshots on the path is time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states in the growing process and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
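
    A toy sketch of the growing idea under stated assumptions (an analytic 2-D surface standing in for the free energy, and an illustrative mixing weight): each extension step blends the negative gradient at the path end with the unit vector toward the product.

      import numpy as np

      def grow_path(grad, start, product, step=0.05, mix=0.2, n_max=500, tol=0.1):
          # Grow a path from the reactant: each extension follows a blend of the
          # negative free energy gradient at the path end and the unit vector
          # pointing toward the product state.
          path = [np.asarray(start, float)]
          for _ in range(n_max):
              x = path[-1]
              to_prod = product - x
              if np.linalg.norm(to_prod) < tol:
                  break
              d = -mix * grad(x) + (1 - mix) * to_prod / np.linalg.norm(to_prod)
              path.append(x + step * d / np.linalg.norm(d))
          return np.array(path)

      # toy 2-D double-well surface F(x, y) = (x^2 - 1)^2 + 2*y^2
      grad = lambda p: np.array([4*p[0]*(p[0]**2 - 1), 4*p[1]])
      path = grow_path(grad, start=[-1.0, 0.0], product=np.array([1.0, 0.0]))
      print(len(path), path[-1])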

  16. Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding

    PubMed Central

    2018-01-01

    Transfer entropy from non-uniform embedding is a popular tool for inferring causal relationships among dynamical subsystems. In this study we present an approach that uses low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the non-uniform embedding search for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of the proposed method and to demonstrate its advantage over previous algorithms. The results provide concrete evidence that low-dimensional approximations can help improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield better performance than other methods. The proposed method is especially efficient as the data length grows. PMID:29547669

  17. Adaptation of an articulated fetal skeleton model to three-dimensional fetal image data

    NASA Astrophysics Data System (ADS)

    Klinder, Tobias; Wendland, Hannes; Wachter-Stehle, Irina; Roundhill, David; Lorenz, Cristian

    2015-03-01

    The automatic interpretation of three-dimensional fetal images poses specific challenges compared to other three-dimensional diagnostic data, especially since the orientation of the fetus in the uterus and the position of the extremities are highly variable. In this paper, we present a comprehensive articulated model of the fetal skeleton and the adaptation of the articulation for pose estimation in three-dimensional fetal images. The model is composed of rigid bodies whose articulations are represented as rigid body transformations. Given a set of target landmarks, the model constellation can be estimated by optimization of the pose parameters. Experiments are carried out on 3D fetal MRI data, yielding an average error per case of 12.03 ± 3.36 mm between target and estimated landmark positions.

  18. The Goertler vortex instability mechanism in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Hall, P.

    1984-01-01

    The two dimensional boundary layer on a concave wall is centrifugally unstable with respect to vortices aligned with the basic flow for sufficiently high values of the Goertler number. However, in most situations of practical interest the basic flow is three dimensional and previous theoretical investigations do not apply. The linear stability of the flow over an infinitely long swept wall of variable curvature is considered. If there is no pressure gradient in the boundary layer the instability problem can always be related to an equivalent two dimensional calculation. However, in general, this is not the case and even for small values of the crossflow velocity field dramatic differences between the two and three dimensional problems emerge. When the size of the crossflow is further increased, the vortices in the neutral location have their axes locally perpendicular to the vortex lines of the basic flow.

  19. Multiple-block grid adaption for an airplane geometry

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid Samareh; Smith, Robert E.

    1988-01-01

    Grid-adaption methods are developed with the capability of moving grid points in accordance with several variables for a three-dimensional multiple-block grid system. These methods are algebraic, and they are implemented for the computation of high-speed flow over an airplane configuration.

  20. A reanalysis dataset of the South China Sea.

    PubMed

    Zeng, Xuezhi; Peng, Shiqiu; Li, Zhijin; Qi, Yiquan; Chen, Rongyu

    2014-01-01

    Ocean reanalysis provides a temporally continuous and spatially gridded four-dimensional estimate of the ocean state for a better understanding of the ocean dynamics and its spatial/temporal variability. Here we present a 19-year (1992-2010) high-resolution ocean reanalysis dataset of the upper ocean in the South China Sea (SCS) produced from an ocean data assimilation system. A wide variety of observations, including in-situ temperature/salinity profiles, ship-measured and satellite-derived sea surface temperatures, and sea surface height anomalies from satellite altimetry, are assimilated into the outputs of an ocean general circulation model using a multi-scale incremental three-dimensional variational data assimilation scheme, yielding a daily high-resolution reanalysis dataset of the SCS. Comparisons between the reanalysis and independent observations support the reliability of the dataset. The presented dataset provides the research community of the SCS an important data source for studying the thermodynamic processes of the ocean circulation and meso-scale features in the SCS, including their spatial and temporal variability.

  1. A reanalysis dataset of the South China Sea

    PubMed Central

    Zeng, Xuezhi; Peng, Shiqiu; Li, Zhijin; Qi, Yiquan; Chen, Rongyu

    2014-01-01

    Ocean reanalysis provides a temporally continuous and spatially gridded four-dimensional estimate of the ocean state for a better understanding of the ocean dynamics and its spatial/temporal variability. Here we present a 19-year (1992–2010) high-resolution ocean reanalysis dataset of the upper ocean in the South China Sea (SCS) produced from an ocean data assimilation system. A wide variety of observations, including in-situ temperature/salinity profiles, ship-measured and satellite-derived sea surface temperatures, and sea surface height anomalies from satellite altimetry, are assimilated into the outputs of an ocean general circulation model using a multi-scale incremental three-dimensional variational data assimilation scheme, yielding a daily high-resolution reanalysis dataset of the SCS. Comparisons between the reanalysis and independent observations support the reliability of the dataset. The presented dataset provides the research community of the SCS an important data source for studying the thermodynamic processes of the ocean circulation and meso-scale features in the SCS, including their spatial and temporal variability. PMID:25977803

  2. Teaching a Machine to Feel Postoperative Pain: Combining High-Dimensional Clinical Data with Machine Learning Algorithms to Forecast Acute Postoperative Pain

    PubMed Central

    Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.

    2015-01-01

    Background Given their ability to process high-dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy, with an area under the receiver-operating characteristic curve (AUC) of 0.704. Next, the gradient-boosted decision tree had an AUC of 0.665 and the k-nearest neighbor algorithm an AUC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an AUC of 0.727. Logistic regression had a lower AUC of 0.5 for predicting pain outcomes on POD 1 and 3. Conclusions Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
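
    A hedged sketch of the LASSO-style baseline on synthetic stand-in data (the clinical cohort is not public; the signal structure and hyperparameters below are assumptions): L1-penalized logistic regression performs embedded variable selection and is scored by AUC.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      # placeholder data standing in for 796 clinical variables
      rng = np.random.default_rng(0)
      X = rng.standard_normal((2000, 796))
      y = (X[:, :10].sum(axis=1) + rng.standard_normal(2000) > 0).astype(int)  # sparse signal

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # the L1 penalty zeroes out most coefficients, i.e. selects variables
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
      print(f"AUC = {auc:.3f}, nonzero coefficients = {np.sum(clf.coef_ != 0)}")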

  3. Multi-dimensional scores to predict mortality in patients with idiopathic pulmonary fibrosis undergoing lung transplantation assessment.

    PubMed

    Fisher, Jolene H; Al-Hejaili, Faris; Kandel, Sonja; Hirji, Alim; Shapera, Shane; Mura, Marco

    2017-04-01

    The heterogeneous progression of idiopathic pulmonary fibrosis (IPF) makes prognostication difficult and contributes to high mortality on the waitlist for lung transplantation (LTx). Multi-dimensional scores (Composite Physiologic Index [CPI], Gender-Age-Physiology [GAP], RIsk Stratification scorE [RISE]) have demonstrated enhanced predictive power towards outcome in IPF. The lung allocation score (LAS) is a multi-dimensional tool commonly used to stratify patients assessed for LTx. We sought to investigate whether IPF-specific multi-dimensional scores predict mortality in patients with IPF assessed for LTx. The study included 302 patients with IPF who underwent a LTx assessment (2003-2014). Multi-dimensional scores were calculated. The primary outcome was 12-month mortality after assessment. LTx was considered a competing event in all analyses. At the end of the observation period, there were 134 transplants, 63 deaths, and 105 patients alive without LTx. Multi-dimensional scores predicted mortality with accuracy similar to the LAS and superior to that of individual variables: the area under the curve (AUC) for LAS was 0.78 (sensitivity 71%, specificity 86%); CPI 0.75 (sensitivity 67%, specificity 82%); GAP 0.67 (sensitivity 59%, specificity 74%); RISE 0.78 (sensitivity 71%, specificity 84%). A separate analysis conducted only in patients actively listed for LTx (n = 247; 50 deaths) yielded similar results. In patients with IPF assessed for LTx, as well as in those actually listed, multi-dimensional scores predict mortality better than individual variables and with accuracy similar to the LAS. If validated, multi-dimensional scores may serve as inexpensive tools to guide decisions on the timing of referral and listing for LTx. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Comparison of three-dimensional multi-segmental foot models used in clinical gait laboratories.

    PubMed

    Nicholson, Kristen; Church, Chris; Takata, Colton; Niiler, Tim; Chen, Brian Po-Jung; Lennon, Nancy; Sees, Julie P; Henley, John; Miller, Freeman

    2018-05-16

    Many skin-mounted three-dimensional multi-segmented foot models are currently in use for gait analysis. Evidence regarding the repeatability of these models, both between trials and between assessors, is mixed, and there are no between-model comparisons of kinematic results. This study explores differences in kinematics and repeatability between five three-dimensional multi-segmented foot models: duPont, Heidelberg, Oxford Child, Leardini, and Utah. Hindfoot, forefoot, and hallux angles were calculated with each model for ten individuals. Two physical therapists applied markers three times to each individual to assess within- and between-therapist variability. Standard deviations were used to evaluate marker placement variability, and locally weighted regression smoothing with alpha-adjusted serial T tests, yielding p-value curves over the gait cycle, was used to assess kinematic similarities. All five models had similar variability; however, the Leardini model showed high standard deviations in plantarflexion/dorsiflexion angles. The duPont and Oxford models had the most similar kinematics. All models demonstrated similar marker placement variability. Lower variability was noted in the sagittal and coronal planes compared to rotation in the transverse plane, suggesting a higher minimal detectable change when clinically considering rotation and a need for additional research. Between the five models, the duPont and Oxford shared the most kinematic similarities. While patterns of movement were very similar between all models, offsets were often present and need to be considered when evaluating published data. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one-shot method for optimal control problems governed by elliptic systems is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high-pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems just a few times.

  6. Two-Dimensional Nonlinear Finite Element Analysis of CMC Microstructures

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Goldberg, Robert K.; Bonacuse, Peter J.

    2011-01-01

    Detailed two-dimensional finite element analyses of the cross-sections of a model CVI (chemical vapor infiltrated) SiC/SiC (silicon carbide fiber in a silicon carbide matrix) ceramic matrix composite are performed. High resolution images of the cross-section of this composite material are generated using serial sectioning of the test specimens. These images are then used to develop very detailed finite element models of the cross-sections using the public domain software OOF2 (Object Oriented Analysis of Material Microstructures). Examination of these images shows that the microstructures have significant variability and irregularity. The overall objective of this work is to determine how these variabilities manifest themselves in the variability of effective properties as well as in the stress distribution, damage initiation, and damage progression. Results indicate that even though the macroscopic stress-strain behavior of the various sections analyzed is very similar, each section has a very distinct damage pattern when subjected to in-plane tensile loads, and this damage pattern seems to follow the unique architectural and microstructural details of the analyzed sections.

  7. The Effect of Biological Movement Variability on the Performance of the Golf Swing in High- and Low-Handicapped Players

    ERIC Educational Resources Information Center

    Bradshaw, Elizabeth J.; Keogh, Justin W. L.; Hume, Patria A.; Maulder, Peter S.; Nortje, Jacques; Marnewick, Michel

    2009-01-01

    The purpose of this study was to examine the role of neuromotor noise on golf swing performance in high- and low-handicap players. Selected two-dimensional kinematic measures of 20 male golfers (n = 10 per high- or low-handicap group) performing 10 golf swings with a 5-iron club were obtained through video analysis. Neuromotor noise was calculated…

  8. Integrated Microfluidic Variable Optical Attenuator

    DTIC Science & Technology

    2005-11-28

  9. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we proposed using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
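
    A simplified sketch of the screening stage (marginal rather than partial, so not the paper's exact statistic): predictors are ranked by the absolute t-statistic of their slope in univariate quantile regressions; the data and cutoffs below are illustrative.

      import numpy as np
      import statsmodels.api as sm

      def quantile_screen(X, y, tau=0.5, keep=10):
          # Rank predictors by the absolute t-statistic of their slope in a
          # univariate quantile regression at level tau; keep the top few.
          scores = []
          for j in range(X.shape[1]):
              res = sm.QuantReg(y, sm.add_constant(X[:, j])).fit(q=tau)
              scores.append(abs(res.tvalues[1]))
          return np.argsort(scores)[::-1][:keep]

      rng = np.random.default_rng(0)
      n, p = 200, 500
      X = rng.standard_normal((n, p))
      y = 2*X[:, 3] - 1.5*X[:, 7] + rng.standard_normal(n)   # true actives: 3 and 7
      print(quantile_screen(X, y, tau=0.5, keep=5))          # indices 3 and 7 should rank high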

  10. THREE-DIMENSIONAL COMPUTATIONAL FLUID DYNAMICS SIMULATIONS OF LOCAL PARTICLE DEPOSITION PATTERNS IN LUNG AIRWAYS

    EPA Science Inventory

    EPA has identified respirable particulate matter (PM) as a significant threat to human health, particularly in the elderly, in children, and in persons with respiratory disease. However, deposition of PM in the respiratory system is highly variable, depending upon particle chara...

  11. Dimensional reduction for a SIR type model

    NASA Astrophysics Data System (ADS)

    Cahyono, Edi; Soeharyadi, Yudi; Mukhsar

    2018-03-01

    Epidemic phenomena are often modeled in the form of dynamical systems. Such models have also been used to describe the spread of rumors, the spread of extreme ideology, and the dissemination of knowledge. Among the simplest is the SIR (susceptible, infected, and recovered) model, which consists of three compartments and hence three variables. The variables are functions of time representing the sizes of the susceptible, infected, and recovered subpopulations, and their sum is assumed to be constant. Hence, the model is effectively two-dimensional, sitting in a three-dimensional ambient space. This paper deals with the reduction of a SIR-type model to two variables in a two-dimensional ambient space in order to understand the geometry and dynamics better. The dynamics is studied, and the phase portrait is presented. The two-dimensional model preserves the equilibria and their stability. The model has been applied to knowledge dissemination, which has been of interest in knowledge management.
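
    A minimal sketch of the reduction, which is standard for SIR-type models: with S + I + R = N constant, R is eliminated and two ODEs remain (parameter values are illustrative).

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, gamma, N = 0.3, 0.1, 1.0

      def rhs(t, u):
          # Reduced SIR: R = N - S - I is eliminated, leaving two equations.
          S, I = u
          return [-beta * S * I / N, beta * S * I / N - gamma * I]

      sol = solve_ivp(rhs, (0, 160), [0.99, 0.01], dense_output=True)
      S, I = sol.y
      R = N - S - I                       # recovered fraction, the eliminated variable
      print(f"peak infected fraction: {I.max():.3f}, final susceptible: {S[-1]:.3f}")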

  12. Comprehensive two-dimensional gas chromatography for the analysis of Fischer-Tropsch oil products.

    PubMed

    van der Westhuizen, Rina; Crous, Renier; de Villiers, André; Sandra, Pat

    2010-12-24

    The Fischer-Tropsch (FT) process involves a series of catalysed reactions of carbon monoxide and hydrogen, originating from coal, natural gas or biomass, leading to a variety of synthetic chemicals and fuels. The benefits of comprehensive two-dimensional gas chromatography (GC×GC) compared to one-dimensional GC (1D-GC) for the detailed investigation of the oil products of low- and high-temperature FT processes are presented. GC×GC provides more accurate quantitative data to construct Anderson-Schulz-Flory (ASF) selectivity models that correlate the FT product distribution with reaction variables. On the other hand, the high peak capacity and sensitivity of GC×GC allow the detailed study of components present at trace level. Analyses of the aromatic and oxygenated fractions of a high temperature FT (HT-FT) process are presented. GC×GC data have been used to optimise or tune the HT-FT process using a lab-scale micro-FT-reactor. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Path Finding on High-Dimensional Free Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Díaz Leines, Grisell; Ensing, Bernd

    2012-07-01

    We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.

  14. Characterization of eco-hydraulic habitats for examining biogeochemical processes in rivers

    NASA Astrophysics Data System (ADS)

    McPhillips, L. E.; O'Connor, B. L.; Harvey, J. W.

    2009-12-01

    Spatial variability in biogeochemical reaction rates in streams is often attributed to sediment characteristics such as particle size, organic material content, and biota attached to or embedded within the sediments. Also important in controlling biogeochemical reaction rates are hydraulic conditions, which influence mass transfer of reactants from the stream to the bed, as well as hyporheic exchange within near-surface sediments. This combination of physical and ecological variables has the potential to create habitats that are unique not only in sediment texture but also in their biogeochemical processes and metabolism rates. In this study, we examine the two-dimensional (2D) variability of these habitats in an agricultural river in central Iowa. The streambed substratum was assessed using a grid-based survey identifying dominant particle size classes, as well as areal coverage of green algae, benthic organic material, and coarse woody debris. Hydraulic conditions were quantified using a calibrated 2D model, and hyporheic exchange was assessed using a scaling relationship based on sediment and hydraulic characteristics. Point-metabolism rates were inferred from measured sediment dissolved oxygen profiles using an effective diffusion model and compared to traditional whole-stream measurements of metabolism. The 185 m study reach had contrasting geomorphologic and hydraulic characteristics in the upstream and downstream portions of an otherwise relatively straight run of a meandering river. The upstream portion contained a large central gravel bar (50 m in length) flanked by riffle-run segments, and the downstream portion contained a deeper, fairly uniform channel cross-section. While relatively high flow velocities and gravel sediments were characteristic of the study river, the upstream island bar separated channels that differed, with sandy gravels on one side and cobbly gravels on the other. Additionally, green algae were found almost exclusively in riffle portions of the cobbly gravel channel sediments, while fine benthic organic material was concentrated at channel margins, regardless of the underlying sediments. A high degree of spatial variability in hyporheic exchange potential was the result of the complex 2D nature of topography and hydraulics. However, sediment texture classifications did a reasonable job of characterizing variability in hyporheic exchange potential, because sediment texture mapping incorporates qualitative aspects of bed shear stress and hydraulic conductivity that control hyporheic exchange. Together these variables greatly influenced point-metabolism measurements in different sediment texture habitats separated by only 1 to 2 m. Results from this study suggest that spatial variability and complex interactions between geomorphology, hydraulics, and biological communities generate eco-hydraulic habitats that control variability in biogeochemical processes. The processes controlling variability are highly two-dimensional in nature and are not often accounted for in traditional one-dimensional approaches to analyzing biogeochemical processes.

  15. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations: High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  16. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  17. Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data.

    PubMed

    Sariyar, Murat; Hoffmann, Isabell; Binder, Harald

    2014-02-26

    Molecular data, e.g. arising from microarray technology, are often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests, and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real-world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
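
    As a rough illustration of the random-forest building block for pre-selecting two-way interactions, the sketch below ranks explicit product features by forest importance. It is a hedged toy (continuous response instead of censored survival times, synthetic data, illustrative settings), not the authors' implementation:

    ```python
    # Toy sketch: screen candidate two-way interactions by building
    # explicit product features and ranking them with a random forest.
    import numpy as np
    from itertools import combinations
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n, p = 200, 50
    X = rng.standard_normal((n, p))
    # True model: two main effects plus one interaction (x3 * x7).
    y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 2.0 * X[:, 3] * X[:, 7] \
        + rng.standard_normal(n)

    # Product features for all candidate pairs; in truly high-dimensional
    # settings one would first restrict to a screened subset of variables.
    pairs = list(combinations(range(p), 2))
    Z = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])

    forest = RandomForestRegressor(n_estimators=200, random_state=0)
    forest.fit(np.hstack([X, Z]), y)

    # Rank interaction candidates by the importance of their product feature.
    imp = forest.feature_importances_[p:]
    for k in np.argsort(imp)[::-1][:5]:
        i, j = pairs[k]
        print(f"candidate interaction x{i}*x{j}: importance {imp[k]:.4f}")
    ```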

  18. Study of flexural rigidity of weavable powder-coated towpreg

    NASA Technical Reports Server (NTRS)

    Hirt, Douglas E.; Marchello, Joseph M.; Baucom, Robert M.

    1990-01-01

    An effort has been made to weave powder-impregnated tow into a two-dimensional preform, controlling process variables to obtain high flexural rigidity in the warp direction and greater flexibility in the fill direction. The resulting prepregs have been consolidated into laminates with LaRC-TPI matrices. Complementary SEM and DSC studies have been performed to deepen understanding of the relationship between tow flexibility and heat treatment. Attention is also given to the effects of the oven temperature and residence time variables on powder/fiber fusion.

  19. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    PubMed Central

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
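
    A minimal sketch of the underlying idea (extract more than three latent dimensions, then inspect 2-d projections of the latent space) is given below; the data, dimensions, and projection choices are synthetic placeholders, not DataHigh's actual interface:

    ```python
    # Sketch: fit a latent-variable model to population activity, then
    # view different 2-d projections of the >3-dimensional latent space.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n_trials, n_neurons, n_latent = 300, 60, 8

    # Fake spike-count data driven by a low-dimensional latent state.
    latents = rng.standard_normal((n_trials, n_latent))
    loading = rng.standard_normal((n_latent, n_neurons))
    counts = latents @ loading + 0.5 * rng.standard_normal((n_trials, n_neurons))

    fa = FactorAnalysis(n_components=n_latent)
    z = fa.fit_transform(counts)          # latent variables, > 3 dims

    def random_plane(dim, seed):
        """Random orthonormal 2-d projection plane in latent space."""
        q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((dim, 2)))
        return q

    # DataHigh lets the user sweep smoothly through such planes; here we
    # just sample a few and report how much variance each view captures.
    for seed in range(3):
        view = z @ random_plane(n_latent, seed)   # n_trials x 2 projection
        frac = view.var(axis=0).sum() / z.var(axis=0).sum()
        print(f"plane {seed}: captures {frac:.1%} of latent variance")
    ```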

  20. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    NASA Astrophysics Data System (ADS)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  1. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    PubMed

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  2. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1989-01-01

    The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.

  3. DENSITY-DEPENDENT FLOW IN ONE-DIMENSIONAL VARIABLY-SATURATED MEDIA

    EPA Science Inventory

    A one-dimensional finite element is developed to simulate density-dependent flow of saltwater in variably saturated media. The flow and solute equations were solved in a coupled mode (iterative), in a partially coupled mode (non-iterative), and in a completely decoupled mode. P...

  4. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated to compute the feedback gains directly and efficiently, rather than through solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
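
    For concreteness, a plain Newton-Kleinman iteration (the building block that the hybrid method accelerates with Chandrasekhar systems and Smith-type Lyapunov solvers) can be sketched as follows; this dense-solver version is illustrative only:

    ```python
    # Newton-Kleinman iteration for the LQR feedback gain: each step
    # solves a Lyapunov equation instead of the full Riccati equation.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def newton_kleinman(A, B, Q, R, K0, iters=20):
        """Iterate K_{k+1} = R^{-1} B^T P_k, where P_k solves
        (A - B K_k)^T P + P (A - B K_k) = -(Q + K_k^T R K_k).
        K0 must be stabilizing."""
        K = K0
        for _ in range(iters):
            Ak = A - B @ K
            rhs = -(Q + K.T @ R @ K)
            # solve_continuous_lyapunov(a, q) solves a X + X a^H = q
            P = solve_continuous_lyapunov(Ak.T, rhs)
            K = np.linalg.solve(R, B.T @ P)
        return K, P

    # Tiny example: double integrator with a stabilizing initial gain.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.eye(1)
    K0 = np.array([[1.0, 1.0]])
    K, P = newton_kleinman(A, B, Q, R, K0)
    print("feedback gain K =", K)
    ```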

  5. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to compute the feedback gains directly and efficiently, rather than through solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.

  6. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. Taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  7. Variable Selection for Support Vector Machines in Moderately High Dimensions

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    Summary The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties, such as variable selection consistency, are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that in ultra-high dimensions, there exists one local minimizer of the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on the initial estimator is verified to be automatically valid as long as the dimensionality is moderately high. Numerical examples provide supportive evidence. PMID:26778916

  8. Effect of Dimensional Salience and Salience of Variability on Problem Solving: A Developmental Study

    ERIC Educational Resources Information Center

    Zelniker, Tamar; And Others

    1975-01-01

    A matching task was presented to 120 subjects from 6 to 20 years of age to investigate the relative influence of dimensional salience and salience of variability on problem solving. The task included four dimensions: form, color, number, and position. (LLK)

  9. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    PubMed Central

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e., when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increase. We propose the following workflow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the workflow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175

  10. Impact of interannual variability (1979-1986) of transport and temperature on ozone as computed using a two-dimensional photochemical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackman, C.H.; Douglass, A.R.; Chandra, S.; Stolarski, R.S.

    1991-03-20

    Eight years of NMC (National Meteorological Center) temperature and SBUV (solar backscattered ultraviolet) ozone data were used to calculate the monthly mean heating rates and residual circulation for use in a two-dimensional photochemical model in order to examine the interannual variability of modeled ozone. Fairly good correlations were found in the interannual behavior of modeled and measured SBUV ozone in the upper stratosphere at middle to low latitudes, where temperature dependent photochemistry is thought to dominate ozone behavior. The calculated total ozone is found to be more sensitive to the interannual residual circulation changes than to the interannual temperature changes. The magnitude of the modeled ozone variability is similar to the observed variability, but the observed and modeled year to year deviations are mostly uncorrelated. The large component of the observed total ozone variability at low latitudes due to the quasi-biennial oscillation (QBO) is not seen in the modeled total ozone, as only a small QBO signal is present in the heating rates, temperatures, and monthly mean residual circulation. Large interannual changes in tropospheric dynamics are believed to influence the interannual variability in the total ozone, especially at middle and high latitudes. Since these tropospheric changes and most of the QBO forcing are not included in the model formulation, it is not surprising that the interannual variability in total ozone is not well represented in the model computations.

  11. Large Eddy Simulation of Spatially Developing Turbulent Reacting Shear Layers with the One-Dimensional Turbulence Model

    NASA Astrophysics Data System (ADS)

    Hoffie, Andreas Frank

    Large eddy simulation (LES) combined with the one-dimensional turbulence (ODT) model is used to simulate spatially developing turbulent reacting shear layers with high heat release and high Reynolds numbers. The LES-ODT results are compared to results from direct numerical simulations (DNS), for model development and validation purposes. The LES-ODT approach is based on LES solutions for momentum and pressure on a coarse grid and solutions for momentum and reactive scalars on a fine, one-dimensional, but three-dimensionally coupled ODT subgrid, which is embedded into the LES computational domain. Although one-dimensional, all three velocity components are transported along the ODT domain. The low-dimensional spatial and temporal resolution of the subgrid scales describes a new modeling paradigm, referred to as autonomous microstructure evolution (AME) models, which resolve the multiscale nature of turbulence down to the Kolmogorov scales. While this new concept aims to mimic the turbulent cascade and to reduce the number of input parameters, AME also enables regime-independent combustion modeling, capable of simulating multiphysics problems simultaneously. The LES as well as the one-dimensional transport equations are solved using an incompressible, low Mach number approximation; however, the effects of heat release are accounted for through a variable density computed by the ideal gas equation of state, based on temperature variations. The computations are carried out on a three-dimensional structured mesh, which is stretched in the transverse direction. While the LES momentum equation is integrated with a third-order Runge-Kutta time integration, the time integration at the ODT level is accomplished with an explicit forward-Euler method. Spatial finite-difference schemes of third (LES) and first (ODT) order are utilized, and a fully consistent fractional-step method at the LES level is used. Turbulence closure at the LES level is achieved by utilizing the Smagorinsky model. The chemical reaction is simulated with a global single-step, second-order equilibrium reaction with an Arrhenius reaction rate. The two benchmark cases of constant density reacting and variable density non-reacting shear layers used to determine ODT parameters yield perfect agreement with regard to first- and second-order flow statistics as well as shear layer growth rate. The variable density non-reacting shear layer also serves as a test case for the LES-ODT model to simulate passive scalar mixing. The variable density, reacting shear layer cases agree only reasonably well and indicate that more work is necessary to improve the variable density coupling of ODT and LES. The disagreement is attributed to the fact that the ODT filtered density is kept constant across the Runge-Kutta steps. Furthermore, a more in-depth knowledge of large scale and subgrid turbulent kinetic energy (TKE) spectra at several downstream locations, as well as TKE budgets, needs to be studied to obtain a better understanding of the model as well as of the flow under investigation. The local Reynolds number based on the one-percent thickness at the exit is Re_δ ≈ 5300 for the constant density reacting and the variable density non-reacting cases. For the variable density reacting shear layer, the Reynolds number based on the one-percent thickness is Re_δ ≈ 2370. The variable density reacting shear layers show suppressed growth rates due to density variations caused by heat release, as has also been reported in the literature.
    A Lewis number parameter study is performed to extract non-unity Lewis number effects. An increase in the Lewis number leads to a further suppression of the growth rate, but to an increased spread of the second-order flow statistics. The major focus and challenge of this work is to improve and advance the three-dimensional coupling of the one-dimensional ODT domains while keeping the solution correct. This entails major restructuring of the model. The turbulent reacting shear layer poses a physical challenge to the model because it is a statistically stationary, non-decaying, inhomogeneous, and anisotropic turbulent flow. This challenge also requires additions to the eddy sampling procedure. Besides these physical advancements, the LES-ODT code is also improved with regard to its ability to use general cuboid geometries, an array structure that allows boundary conditions to be applied via ghost cells, and non-uniform structured meshes. The use of transverse grid stretching requires the implementation of the ODT triplet map on a stretched grid. Further improvements include a restructured subroutine hierarchy and handling of global variables that enable serial code speed-up and parallelization with OpenMP. Porting the code to a higher-level, object-oriented, finite-volume-based CFD platform such as OpenFOAM, which allows more advanced array and parallelization features with graphics processing units (GPUs) as well as parallelization with the message passing interface (MPI) to simulate complex geometries, is recommended for future work.
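
    For reference, the Smagorinsky closure used at the LES level models the subgrid stresses through an eddy viscosity of the standard form (constants and filter width are case-dependent):

    $$\nu_{t} = (C_{s}\,\Delta)^{2}\,\lvert \bar{S} \rvert, \qquad \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \bar{S}_{ij} = \tfrac{1}{2}\left(\partial_{j}\bar{u}_{i} + \partial_{i}\bar{u}_{j}\right).$$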

  12. Retention modelling of polychlorinated biphenyls in comprehensive two-dimensional gas chromatography.

    PubMed

    D'Archivio, Angelo Antonio; Incani, Angela; Ruggieri, Fabrizio

    2011-01-01

    In this paper, we use a quantitative structure-retention relationship (QSRR) method to predict the retention times of polychlorinated biphenyls (PCBs) in comprehensive two-dimensional gas chromatography (GC×GC). We analyse the GC×GC retention data taken from the literature by comparing the predictive capability of different regression methods. The various models are generated using 70 out of 209 PCB congeners in the calibration stage, while their predictive performance is evaluated on the remaining 139 compounds. The two-dimensional chromatogram is initially estimated by separately modelling the retention times of PCBs in the first and in the second column ((1)tR and (2)tR, respectively). In particular, multilinear regression (MLR) combined with genetic algorithm (GA) variable selection is performed to extract two small subsets of predictors for (1)tR and (2)tR from a large set of theoretical molecular descriptors provided by the popular software Dragon, which after removal of highly correlated or almost constant variables consists of 237 structure-related quantities. Based on GA-MLR analysis, a four-dimensional and a five-dimensional relationship modelling (1)tR and (2)tR, respectively, are identified. Single-response partial least squares (PLS-1) regression is alternatively applied to independently model (1)tR and (2)tR without the need for preliminary GA variable selection. Further, we explore the possibility of predicting the two-dimensional chromatogram of PCBs in a single calibration procedure by using a two-response PLS (PLS-2) model or a feed-forward artificial neural network (ANN) with two output neurons. In the first case, regression is carried out on the full set of 237 descriptors, while the variables previously selected by GA-MLR are initially considered as ANN inputs and subjected to a sensitivity analysis to remove the redundant ones. Results show that PLS-1 regression exhibits a noticeably better descriptive and predictive performance than the other investigated approaches. The observed values of the determination coefficients for (1)tR and (2)tR in calibration (0.9999 and 0.9993, respectively) and prediction (0.9987 and 0.9793, respectively) provided by PLS-1 demonstrate that the GC×GC behaviour of PCBs is properly modelled. In particular, the predicted two-dimensional GC×GC chromatogram of the 139 PCBs not involved in the calibration stage closely resembles the experimental one. Based on the above lines of evidence, the proposed approach ensures accurate simulation of the whole GC×GC chromatogram of PCBs from experimental retention data for only one-third of the congeners.
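
    The two calibration strategies compared above (independent PLS-1 models per retention time versus a single two-response PLS-2 model) can be sketched with scikit-learn as follows; the descriptor matrix and retention times below are random placeholders, not the Dragon descriptors or measured GC×GC data:

    ```python
    # Sketch: PLS-1 (one model per retention time) vs PLS-2 (one model
    # with a two-column response) on a synthetic calibration set.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    n_cal, n_desc = 70, 237
    X = rng.standard_normal((n_cal, n_desc))
    t1 = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(n_cal)
    t2 = X[:, 5:9] @ rng.standard_normal(4) + 0.1 * rng.standard_normal(n_cal)

    # PLS-1: independent models for first- and second-dimension retention.
    pls1_a = PLSRegression(n_components=10).fit(X, t1)
    pls1_b = PLSRegression(n_components=10).fit(X, t2)

    # PLS-2: one model with both retention times as the response matrix.
    T = np.column_stack([t1, t2])
    pls2 = PLSRegression(n_components=10).fit(X, T)

    print("PLS-1 R^2 (1tR):", pls1_a.score(X, t1))
    print("PLS-1 R^2 (2tR):", pls1_b.score(X, t2))
    print("PLS-2 R^2 (both):", pls2.score(X, T))
    ```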

  13. High-Rate Field Demonstration of Large-Alphabet Quantum Key Distribution

    DTIC Science & Technology

    2016-12-13

    [Snippet, Figure 4 caption: comparison of the P&M DO-QKD results to previously published QKD system records, including the secure-throughput record for continuous-variable QKD, the secure-throughput record for two-dimensional entanglement-based QKD (BBM92), and the distance record for QKD (COW, 2015).]

  14. High-Rate Field Demonstration of Large-Alphabet Quantum Key Distribution

    DTIC Science & Technology

    2016-10-12

    [Snippet, Figure 4 caption: comparison of the P&M DO-QKD results to previously published QKD system records, including the secure-throughput record for continuous-variable QKD, the secure-throughput record for two-dimensional entanglement-based QKD (BBM92, 2009), and the distance record for QKD (COW, 2015).]

  15. High-Rate Field Demonstration of Large-Alphabet Quantum Key Distribution

    DTIC Science & Technology

    2016-12-08

    [Snippet, Figure 4 caption: comparison of the P&M DO-QKD results to previously published QKD system records, including the secure-throughput record for continuous-variable QKD, the secure-throughput record for two-dimensional entanglement-based QKD (BBM92), and the distance record for QKD (COW, 2015).]

  16. Anonymous voting for multi-dimensional CV quantum system

    NASA Astrophysics Data System (ADS)

    Rong-Hua, Shi; Yi, Xiao; Jin-Jing, Shi; Ying, Guo; Moon-Ho, Lee

    2016-06-01

    We investigate the design of two anonymous voting protocols, a CV-based binary-valued ballot and a CV-based multi-valued ballot, using continuous variables (CV) in a multi-dimensional quantum cryptosystem to ensure the security of the voting procedure and data privacy. Quantum entangled states are employed in the continuous-variable quantum system to carry the voting information and assist information transmission, taking advantage of GHZ-like states to improve the utilization of quantum states by decreasing the number of required states. This provides a potential approach to achieving efficient quantum anonymous voting with high transmission security, especially in large-scale votes. Project supported by the National Natural Science Foundation of China (Grant Nos. 61272495, 61379153, and 61401519), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130162110012), and the MEST-NRF of Korea (Grant No. 2012-002521).

  17. Dimensionality reduction in epidemic spreading models

    NASA Astrophysics Data System (ADS)

    Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.

    2015-09-01

    Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric feature mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
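
    A minimal sketch of the embedding step is shown below, applying ISOMAP to simulated outbreak curves; the simple SIR generator and parameter ranges are stand-ins for the synthetic mobile-population data described above:

    ```python
    # Sketch: embed high-dimensional outbreak trajectories (full infection
    # curves) into a 3-d manifold with ISOMAP.
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(3)

    def sir_curve(beta, gamma, steps=200, dt=0.1):
        """Infected fraction over time for a simple SIR model."""
        s, i = 0.99, 0.01
        out = []
        for _ in range(steps):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            s, i = s + dt * ds, i + dt * di
            out.append(i)
        return np.array(out)

    # Each outbreak is one high-dimensional point: its infection curve.
    curves = np.array([sir_curve(beta=rng.uniform(1.5, 3.0),
                                 gamma=rng.uniform(0.3, 0.7))
                       for _ in range(150)])

    embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(curves)
    print("embedded shape:", embedding.shape)   # (150, 3) low-dim manifold
    ```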

  18. Modelling three-dimensional radiation fields in the early universe (Modellierung dreidimensionaler Strahlungsfelder im frühen Universum)

    NASA Astrophysics Data System (ADS)

    Meinköhn, Erik

    2002-11-01

    The present work aims at the modelling of three-dimensional radiation fields in gas clouds from the early universe, in particular as to the influence of varying distributions of density and velocity. In observations of high-redshift gas clouds, the Lyα transition from the first excited energy level to the ground state of the hydrogen atom is usually found to be the only prominent emission line in the entire spectrum. High-redshift hydrogen clouds are widely assumed to be the precursors of present-day galaxies; the investigation of the Lyα line is therefore of paramount importance for the theory of galaxy formation and evolution. The observed Lyα line, or more precisely its profile, reveals both the complexity of the spatial distribution and of the kinematics of the interstellar gas, and also the nature of the photon source. In this thesis we have developed a code which is capable of solving the three-dimensional frequency-dependent radiative transfer equation for arbitrary, nonrelativistically moving media. The numerical treatment of the associated partial integro-differential equation is an extremely challenging task, since the radiation intensity depends on 6 variables, namely 3 space variables, 2 variables describing the direction of photon propagation, and the frequency. With the goal of a quantitative comparison with observational data in mind, the implementation of very efficient methods for a sufficiently accurate solution of the complex radiative transfer problems turned out to be a necessity. The size of the resulting linear system of equations makes the use of parallelization techniques and grid refinement strategies indispensable.
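
    Schematically, the equation in question is the frequency-dependent radiative transfer equation for the specific intensity I_ν, with opacity χ_ν and emissivity η_ν (standard form; the thesis additionally couples it to the velocity field of the moving medium):

    $$\frac{1}{c}\,\frac{\partial I_{\nu}}{\partial t} + \mathbf{n}\cdot\nabla I_{\nu} = -\chi_{\nu}\, I_{\nu} + \eta_{\nu},$$

    which makes explicit the six variables mentioned above: three space coordinates, two propagation angles in n, and the frequency ν.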

  19. Two-dimensional numerical model for the high electron mobility transistor

    NASA Astrophysics Data System (ADS)

    Loret, Dany

    1987-11-01

    A two-dimensional numerical drift-diffusion model for the High Electron Mobility Transistor (HEMT) is presented. Special attention is paid to the modeling of the current flow over the heterojunction. A finite difference scheme is used to solve the equations, and a variable mesh spacing was implemented to cope with the strong variations of functions near the heterojunction. Simulation results are compared to experimental data for a 0.7 μm gate length device. Small-signal transconductances and cut-off frequency obtained from the 2-D model agree well with the experimental values from S-parameter measurements. It is shown that the numerical models give good insight into device behaviour, including important parasitic effects such as electron injection into the bulk GaAs.
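
    For orientation, a drift-diffusion model of this kind solves Poisson's equation coupled to carrier continuity equations (electron equations shown; standard textbook form, with the heterojunction current treatment being the device-specific part):

    $$\nabla\cdot(\varepsilon\nabla\phi) = -q\,(p - n + N_{D}^{+} - N_{A}^{-}), \qquad \mathbf{J}_{n} = q\,\mu_{n} n\,\mathbf{E} + q\,D_{n}\nabla n, \qquad \frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla\cdot\mathbf{J}_{n} + G - R.$$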

  20. Nonparametric regression applied to quantitative structure-activity relationships

    PubMed

    Constans; Hirst

    2000-03-01

    Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
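
    The Nadaraya-Watson estimator, the simplest of the regressors compared above, is easy to state and implement; the sketch below uses a Gaussian kernel and a single bandwidth h on synthetic data (illustrative only; the paper's multivariate setting tunes things more carefully):

    ```python
    # Nadaraya-Watson kernel regression:
    #   y_hat(x) = sum_i K(||x - x_i|| / h) y_i / sum_i K(||x - x_i|| / h)
    import numpy as np

    def nadaraya_watson(X_train, y_train, X_query, h=1.0):
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / h**2)          # Gaussian kernel weights
        return (w @ y_train) / w.sum(axis=1)  # locally weighted average

    rng = np.random.default_rng(4)
    X = rng.uniform(-3, 3, size=(100, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(100)
    Xq = rng.uniform(-3, 3, size=(5, 2))
    print(nadaraya_watson(X, y, Xq, h=0.8))
    ```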

  1. One hundred years of Arctic ice cover variations as simulated by a one-dimensional, ice-ocean model

    NASA Astrophysics Data System (ADS)

    Hakkinen, S.; Mellor, G. L.

    1990-09-01

    A one-dimensional ice-ocean model consisting of a second moment, turbulent closure, mixed layer model and a three-layer snow-ice model has been applied to the simulation of Arctic ice mass and mixed layer properties. The results for the climatological seasonal cycle are discussed first and include the salt and heat balance in the upper ocean. The coupled model is then applied to the period 1880-1985, using the surface air temperature fluctuations from Hansen et al. (1983) and from Wigley et al. (1981). The analysis of the simulated large variations of the Arctic ice mass during this period (with similar changes in the mixed layer salinity) shows that the variability in the summer melt determines to a high degree the variability in the average ice thickness. The annual oceanic heat flux from the deep ocean and the maximum freezing rate and associated nearly constant minimum surface salinity flux did not vary significantly interannually. This also implies that the oceanic influence on the Arctic ice mass is minimal for the range of atmospheric variability tested.

  2. Variable-Range Hopping through Marginally Localized Phonons

    NASA Astrophysics Data System (ADS)

    Banerjee, Sumilan; Altman, Ehud

    2016-03-01

    We investigate the effect of coupling Anderson localized particles in one dimension to a system of marginally localized phonons having a symmetry-protected delocalized mode at zero frequency. This situation is naturally realized for electrons coupled to phonons in a disordered nanowire, as well as for ultracold fermions coupled to phonons of a superfluid in a one-dimensional disordered trap. To determine if the coupled system can be many-body localized, we analyze the phonon-mediated hopping transport in both the weak and strong coupling regimes. We show that the usual variable-range hopping mechanism involving a low-order phonon process is ineffective at low temperature due to the discreteness of the bath at the required energy. Instead, the system thermalizes through a many-body process involving the exchange of a diverging number n ∝ -log T of phonons in the low temperature limit. This effect leads to a highly singular prefactor to Mott's well-known formula and strongly suppresses the variable-range hopping rate. Finally, we comment on possible implications of this physics in higher dimensional electron-phonon coupled systems.
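
    For context, Mott's variable-range-hopping law in d dimensions has the standard form below; the effect described above enters as a strongly temperature-dependent (singular) prefactor multiplying this exponential:

    $$\sigma(T) \sim \sigma_{0}\,\exp\!\left[-\left(\frac{T_{0}}{T}\right)^{1/(d+1)}\right].$$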

  3. Exact traveling-wave and spatiotemporal soliton solutions to the generalized (3+1)-dimensional Schrödinger equation with polynomial nonlinearity of arbitrary order.

    PubMed

    Petrović, Nikola Z; Belić, Milivoj; Zhong, Wei-Ping

    2011-02-01

    We obtain exact traveling wave and spatiotemporal soliton solutions to the generalized (3+1)-dimensional nonlinear Schrödinger equation with variable coefficients and polynomial Kerr nonlinearity of an arbitrarily high order. Exact solutions, given in terms of Jacobi elliptic functions, are presented for the special cases of cubic-quintic and septic models. We demonstrate that the widely used method for finding exact solutions in terms of Jacobi elliptic functions is not applicable to the nonlinear Schrödinger equation with saturable nonlinearity.

  4. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
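
    The moment-to-quadrature construction at the heart of such moment-based methods can be sketched as follows (the classical Hankel-Cholesky / Golub-Welsch steps; variable names are ours, and this is not the paper's SAMBA implementation):

    ```python
    # Gaussian quadrature directly from raw moments: Hankel moment matrix
    # -> Cholesky factor -> three-term recurrence -> Jacobi matrix -> rule.
    import numpy as np

    def gauss_from_moments(m):
        """m = [m_0, ..., m_{2n}] raw moments; returns n nodes and weights."""
        n = (len(m) - 1) // 2
        H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
        R = np.linalg.cholesky(H).T            # upper-triangular factor of H
        # Recurrence coefficients of the orthogonal polynomials from R.
        alpha = np.zeros(n)
        beta = np.zeros(max(n - 1, 0))
        alpha[0] = R[0, 1] / R[0, 0]
        for k in range(1, n):
            alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
            beta[k - 1] = R[k, k] / R[k - 1, k - 1]
        J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        nodes, vecs = np.linalg.eigh(J)        # Golub-Welsch step
        weights = m[0] * vecs[0, :] ** 2
        return nodes, weights

    # Check against the standard normal: moments 1, 0, 1, 0, 3, 0, 15, ...
    m = np.array([1, 0, 1, 0, 3, 0, 15, 0, 105], float)   # m_0..m_8 -> 4 points
    x, w = gauss_from_moments(m)
    print(np.round(x, 6), np.round(w, 6))  # probabilists' Gauss-Hermite rule
    ```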

  5. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  6. Using sketch-map coordinates to analyze and bias molecular dynamics simulations

    PubMed Central

    Tribello, Gareth A.; Ceriotti, Michele; Parrinello, Michele

    2012-01-01

    When examining complex problems, such as the folding of proteins, coarse grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, that can be used to build a low-dimensional map of a phase space of high-dimensionality. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation. PMID:22427357
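
    A toy sketch of the history-dependent bias idea appears below: as in metadynamics, repulsive Gaussians are deposited at previously visited values of a low-dimensional coordinate, pushing the walker toward unexplored regions. (The paper's field-overlap bias on sketch-map distributions is more elaborate; this only illustrates the accumulation mechanism, with illustrative parameters.)

    ```python
    # Metadynamics-style history-dependent bias on a 1-d coordinate s,
    # driving a walker out of the wells of a double-well potential.
    import numpy as np

    rng = np.random.default_rng(5)
    w, sigma = 0.5, 0.3          # Gaussian height and width (illustrative)
    centers = []                  # history of visited coordinate values

    def bias(s):
        return sum(w * np.exp(-(s - c) ** 2 / (2 * sigma ** 2)) for c in centers)

    def force(s, eps=1e-4):
        # Underlying double-well potential U(s) = (s^2 - 1)^2 plus bias.
        dU = 4 * s * (s ** 2 - 1)
        dV = (bias(s + eps) - bias(s - eps)) / (2 * eps)
        return -(dU + dV)

    s = -1.0                      # start in the left well
    for step in range(2000):
        s += 0.01 * force(s) + 0.1 * rng.standard_normal()  # crude Langevin step
        if step % 50 == 0:
            centers.append(s)     # deposit a new repulsive Gaussian
    print(f"visited range: [{min(centers):.2f}, {max(centers):.2f}]")
    ```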

  7. Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling

    NASA Astrophysics Data System (ADS)

    Awasthi, Shalini; Nair, Nisanth N.

    2017-03-01

    Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.

  8. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots' (generated with the high-fidelity model), to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, as each basis function is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
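
    The comparison described above can be sketched with scikit-learn: ordinary PCA versus sparse PCA on the same snapshot matrix, where the sparse components load on only a few state variables and are therefore easier to interpret physically. The data here are synthetic snapshots, not DYRESM-CAEDYM output:

    ```python
    # Sketch: PCA vs SparsePCA on a snapshot matrix whose latent drivers
    # each touch only a small block of state variables.
    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    rng = np.random.default_rng(6)
    n_snapshots, n_states = 200, 30
    t = rng.standard_normal((n_snapshots, 2))
    W = np.zeros((2, n_states))
    W[0, :5] = rng.standard_normal(5)       # driver 1 -> states 0..4
    W[1, 10:14] = rng.standard_normal(4)    # driver 2 -> states 10..13
    snapshots = t @ W + 0.1 * rng.standard_normal((n_snapshots, n_states))

    pca = PCA(n_components=2).fit(snapshots)
    spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(snapshots)

    print("nonzero loadings per PCA component:",
          (np.abs(pca.components_) > 1e-3).sum(axis=1))
    print("nonzero loadings per SPCA component:",
          (np.abs(spca.components_) > 1e-3).sum(axis=1))
    ```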

  9. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    PubMed

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  10. Liquid Jets in Crossflow at Elevated Temperatures and Pressures

    NASA Astrophysics Data System (ADS)

    Amighi, Amirreza

    An experimental study on the characterization of liquid jets injected into subsonic air crossflows is conducted. The aim of the study is to relate the droplet size and other attributes of the spray, such as breakup length, position, plume width, and breakup time, to flow parameters, including jet and air velocities, pressure, and temperature, as well as to non-dimensional variables. Furthermore, multiple expressions are defined that summarize the general behavior of the spray. For this purpose, an experimental setup is developed which can withstand high temperatures and pressures to simulate conditions close to those experienced inside gas turbine engines. Images are captured using a laser-based shadowgraphy system similar to a 2D PIV system. Image processing is extensively used to measure droplet size and the boundaries of the spray. In total, 209 different conditions are tested and over 72,000 images are captured and processed. The crossflow air temperatures are 25°C, 200°C, and 300°C; absolute crossflow air pressures are 2.1, 3.8, and 5.2 bars. Various liquid and gas velocities are tested for each given temperature and pressure in order to study the breakup mechanisms and regimes. Effects of dimensional and non-dimensional variables on droplet size are presented in detail. Several correlations for the mean droplet size, which are generated in this process, are presented. In addition, the influence of non-dimensional variables on the breakup length, time, plume area, angle, width, and mean jet surface thickness is discussed, and individual correlations are provided for each parameter. The influence of each individual parameter on the droplet sizes is discussed for a better understanding of the fragmentation process. Finally, new correlations for the centerline, windward, and leeward trajectories are presented and compared to the previously reported correlations.

  11. Investigation of Acoustic Vector Sensor Data Processing in the Presence of Highly Variable Bathymetry

    DTIC Science & Technology

    2014-06-01

    shelf region to the north of the canyon. The impact of this 3-dimensional (3D) variable bathymetry, which may be combined with the effects of... weaker arrivals at large negative angles, consistent with the earliest bottom reflections on the left. The impact of the bottom-path reflections from... [remainder of snippet is MATLAB post-processing code residue and is omitted]

  12. New horizons for study of the cardiopulmonary and circulatory systems. [image reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Wood, E. H.

    1976-01-01

    The paper discusses the development of computer-controlled three-dimensional reconstruction techniques designed to determine the dynamic changes in the true shape and dimensions of the epi- and endocardial surfaces of the heart, along with variable time base (stop-action to real-time) displays of the transmural distribution of the coronary microcirculation and the three-dimensional anatomy of the macrovasculature in all regions of the body throughout individual cardiac and/or respiratory cycles. A technique for reconstructing a cross section of the heart from multiplanar videoroentgenograms is outlined. The capability of high spatial and high temporal resolution scanning videodensitometry makes possible measurement of the appearance, mean transit and clearance of roentgen opaque substances in three-dimensional space through the myocardium with a degree of simultaneous anatomic and temporal resolution not obtainable by current isotope techniques. The distribution of a variety of selected chemical elements or biologic materials within a body portion can also be determined.

  13. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable-speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat-plate flow and two- and three-dimensional heat transfer predictions on a turbine blade were performed and are reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade were computed for a range of incidence angles in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  14. Three-Dimensional Analysis of the Fundus of the Human Internal Acoustic Canal.

    PubMed

    Schart-Morén, Nadine; Larsson, Sune; Rask-Andersen, Helge; Li, Hao

    Documentation of the nerve components in the internal acoustic canal is essential before cochlear implantation surgery. Interpretations may be challenged by wide anatomical variations of the VIIIth nerve and its ramifications. Malformations may further defy proper nerve identification. Using microcomputed tomography, we analyzed the fundus bone channels in an archival collection of 113 macerated human temporal bones and 325 plastic inner molds. Data were subsequently processed by volume-rendering software using a bony tissue algorithm. Three-dimensional reconstructions were made, and through orthogonal sections, the topographic anatomy was established. The technique provided additional information regarding the anatomy of the nerve foramina/channels of the human fundus region, including their variations and destinations. Channel anastomoses were found beyond the level of the fundus. A foramen of the transverse crest was identified. Three-dimensional reconstructions and cropping outlined the bone canals and demonstrated the highly variable VIIIth nerve anatomy at the fundus of the human inner acoustic canal. Myriad channel interconnections suggested an intricate system of neural interactive pathways in humans. Particularly striking was the variable anatomy of the saccule nerve channels. The results may assist in the preoperative interpretation of the VIIIth nerve anatomy.

  15. Comparison of Two- and Three-Dimensional Methods for Analysis of Trunk Kinematic Variables in the Golf Swing.

    PubMed

    Smith, Aimée C; Roberts, Jonathan R; Wallace, Eric S; Kong, Pui; Forrester, Stephanie E

    2016-02-01

    Two-dimensional methods have been used to compute trunk kinematic variables (flexion/extension, lateral bend, axial rotation) and X-factor (difference in axial rotation between trunk and pelvis) during the golf swing. Recent X-factor studies advocated three-dimensional (3D) analysis due to the errors associated with two-dimensional (2D) methods, but this has not been investigated for all trunk kinematic variables. The purpose of this study was to compare trunk kinematic variables and X-factor calculated by 2D and 3D methods to examine how different approaches influenced their profiles during the swing. Trunk kinematic variables and X-factor were calculated for golfers from vectors projected onto the global laboratory planes and from 3D segment angles. Trunk kinematic variable profiles were similar in shape; however, there were statistically significant differences in trunk flexion (-6.5 ± 3.6°) at top of backswing and trunk right-side lateral bend (8.7 ± 2.9°) at impact. Differences between 2D and 3D X-factor (approximately 16°) could largely be explained by projection errors introduced to the 2D analysis through flexion and lateral bend of the trunk and pelvis segments. The results support the need to use a 3D method for kinematic data calculation to accurately analyze the golf swing.

  16. Predictors of Absenteeism Severity in Truant Youth: A Dimensional and Categorical Analysis

    ERIC Educational Resources Information Center

    Skedgell, Kyleigh; Kearney, Christopher A.

    2016-01-01

    The present study examined the relationship between school absenteeism severity and specific clinical and family variables in middle and high school youth aged 11-19 years recruited from two truancy settings. School absenteeism severity was defined as a percentage of full school days missed from the current academic year at the time of assessment…

  17. Probabilistic Gait Classification in Children with Cerebral Palsy: A Bayesian Approach

    ERIC Educational Resources Information Center

    Van Gestel, Leen; De Laet, Tinne; Di Lello, Enrico; Bruyninckx, Herman; Molenaers, Guy; Van Campenhout, Anja; Aertbelien, Erwin; Schwartz, Mike; Wambacq, Hans; De Cock, Paul; Desloovere, Kaat

    2011-01-01

    Three-dimensional gait analysis (3DGA) generates a wealth of highly variable data. Gait classifications help to reduce, simplify and interpret this vast amount of 3DGA data and thereby assist and facilitate clinical decision making in the treatment of CP. CP gait is often a mix of several clinically accepted distinct gait patterns. Therefore,…

  18. The development of a three-dimensional partially elliptic flow computer program for combustor research

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.

    1978-01-01

    A three-dimensional, partially elliptic computer program was developed. Without requiring three-dimensional computer storage locations for all flow variables, the partially elliptic program is capable of predicting three-dimensional combustor flow fields with large downstream effects. The program requires only a slight increase in computer storage over the parabolic flow program from which it was developed. A finite-difference formulation for a three-dimensional, fully elliptic, turbulent, reacting flow field was derived. Because of the negligible diffusion effects in the main flow direction in a supersonic combustor, the set of finite-difference equations can be reduced to a partially elliptic form. Only the pressure field is governed by an elliptic equation and requires three-dimensional storage; all other dependent variables are governed by parabolic equations. A numerical procedure that combines a marching integration scheme with an iterative scheme for solving the elliptic pressure equation was adopted.

  19. Groundwater-fed irrigation impacts spatially distributed temporal scaling behavior of the natural system: a spatio-temporal framework for understanding water management impacts

    NASA Astrophysics Data System (ADS)

    Condon, Laura E.; Maxwell, Reed M.

    2014-03-01

    Regional scale water management analysis increasingly relies on integrated modeling tools. Much recent work has focused on groundwater-surface water interactions and feedbacks. However, to our knowledge, no study has explicitly considered impacts of management operations on the temporal dynamics of the natural system. Here, we simulate twenty years of hourly, moisture-dependent, groundwater-fed irrigation using a three-dimensional, fully integrated, hydrologic model (ParFlow-CLM). Results highlight interconnections between irrigation demand, groundwater oscillation frequency, and latent heat flux variability not previously demonstrated. Additionally, the three-dimensional model used allows for novel consideration of spatial patterns in temporal dynamics. Latent heat flux and water table depth both display spatial organization in temporal scaling, an important finding given the spatial homogeneity and weak scaling observed in atmospheric forcings. Pumping and irrigation amplify high frequency (sub-annual) variability while attenuating low frequency (inter-annual) variability. Irrigation also intensifies scaling within irrigated areas, essentially increasing temporal memory in both the surface and the subsurface. These findings demonstrate management impacts that extend beyond traditional water balance considerations to the fundamental behavior of the system itself. This is an important step to better understanding groundwater’s role as a buffer for natural variability and the impact that water management has on this capacity.
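
    Temporal scaling of the kind described above is commonly summarized by the spectral slope, the log-log slope of the power spectrum, with steeper slopes indicating more temporal memory. A minimal sketch follows (synthetic data and invented names, not the study's ParFlow-CLM workflow):

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic 20-year hourly series: a random walk, whose power spectrum
    # follows S(f) ~ f^(-beta) with beta near 2.
    x = np.cumsum(rng.normal(size=20 * 365 * 24))

    freq = np.fft.rfftfreq(x.size, d=1.0)[1:]           # drop zero frequency
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    slope, _ = np.polyfit(np.log(freq), np.log(power), 1)
    print(f"spectral slope beta ~ {-slope:.2f}")        # ~2 for a random walk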

  20. Dimensional analysis of flame angles versus wind speed

    Treesearch

    Robert E. Martin; Mark A. Finney; Domingo M. Molina; David B. Sapsis; Scott L. Stephens; Joe H. Scott; David R. Weise

    1991-01-01

    Dimensional analysis has potential to help explain and predict physical phenomena, but has been used very little in studies of wildland fire behavior. By combining variables into dimensionless groups, the number of variables to be handled and the experiments to be run is greatly reduced. A low velocity wind tunnel was constructed, and methyl, ethyl, and isopropyl...

  1. Separation of the atmospheric variability into non-Gaussian multidimensional sources by projection pursuit techniques

    NASA Astrophysics Data System (ADS)

    Pires, Carlos A. L.; Ribeiro, Andreia F. S.

    2017-02-01

    We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low dimension, exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (projection pursuit rationale). The method relies upon a generalization of principal component analysis (optimal for mixtures of Gaussian signals) and of independent component analysis (ICA, optimized to split non-Gaussian scalar sources). The proposed method, supported by information theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Essentially, we optimize rotated variables by maximizing certain nonlinear correlations (contrast functions) arising from the non-Gaussianity of the joint distribution. As a by-product, the method provides nonlinear variable changes `unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first one comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second one is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemispheric winter. We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as concerns the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also present in the QG model but with weaker expression than dyads, due to the imposed shape and dimension. The study emphasizes the existence of dyadic and triadic nonlinear teleconnections.
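
    For readers unfamiliar with the ICA building block that ISA generalizes, the sketch below uses scikit-learn's FastICA on synthetic mixtures (illustrative only; ISA itself splits multi-dimensional subspaces such as dyads and triads, which FastICA cannot):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n = 5000
    # Two non-Gaussian scalar sources: Laplace-distributed and uniform.
    s = np.column_stack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
    A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix
    x = s @ A.T                              # observed mixed signals

    ica = FastICA(n_components=2, random_state=0)
    s_hat = ica.fit_transform(x)             # estimated sources
    # Up to sign and order, s_hat should correlate strongly with s.
    print(np.round(np.corrcoef(s.T, s_hat.T)[:2, 2:], 2))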

  2. A numerical study of the 2- and 3-dimensional unsteady Navier-Stokes equations in velocity-vorticity variables using compact difference schemes

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Grosch, C. E.

    1984-01-01

    A compact finite-difference approximation to the unsteady Navier-Stokes equations in velocity-vorticity variables is used to numerically simulate a number of flows. These include the two-dimensional laminar flow of a vortex evolving over a flat plate with an embedded cavity, the unsteady flow over an elliptic cylinder, and aspects of the transient dynamics of the flow over a rearward-facing step. The methodology required to extend the two-dimensional formulation to three dimensions is presented.

  3. Dynamically Intuitive and Potentially Predictable Three-Dimensional Structures in the Low Frequency Flow Variability of the Extratropical Northern Hemisphere

    NASA Astrophysics Data System (ADS)

    Wettstein, J. J.; Li, C.; Bradshaw, S.

    2016-12-01

    Canonical tropospheric climate variability patterns and their corresponding indices are ubiquitous, yet a firm dynamical interpretation has remained elusive for many of even the leading extratropical patterns. Part of the lingering difficulty in understanding and predicting atmospheric low frequency variability is that the identification itself of the different patterns is indistinct. This study characterizes three-dimensional structures in the low frequency variability of the extratropical zonal wind field within the entire period of record of the ERA-Interim reanalysis and suggests the foundations for a new paradigm in identifying and predicting extratropical atmospheric low-frequency variability. In concert with previous results, there is a surprisingly rich three-dimensional structure to the variance of the zonal wind field that is not (and cannot be) captured by traditional identification protocols that explore covariance of pressure in the lower troposphere, flow variability in the zonal mean, or, for that matter, variability in any variable on any planar surface. Correspondingly, many of the pressure-based canonical indices of low frequency atmospheric variability exhibit inconsistent relationships to physically intuitive reorganizations of the subtropical and polar front jets and to other forcing mechanisms. Different patterns exhibit these inconsistencies to a greater or lesser extent. The three-dimensional variance of the zonal wind field is, by contrast, naturally organized around dynamically intuitive atmospheric redistributions that carry a surprisingly large amount of physically intuitive information in the vertical. These conclusions are robust across a variety of seasons and in intra-seasonal and inter-annual explorations. Similar results and conclusions are also derived using detrended data, other reanalyses, and state-of-the-art coupled climate model output. In addition to providing a clearer perspective on the distinct three-dimensional patterns of atmospheric low frequency variability, the time evolution and potential predictability of the resultant patterns can be explored with much greater clarity because of an intrinsic link between the patterns and the requisite conservation of momentum (i.e., to the primitive equations and candidate forcing mechanisms).

  4. Low-frequency high-definition power Doppler in visualizing and defining fetal pulmonary venous connections.

    PubMed

    Liu, Lin; He, Yihua; Li, Zhian; Gu, Xiaoyan; Zhang, Ye; Zhang, Lianzhong

    2014-07-01

    The use of low-frequency high-definition power Doppler in assessing and defining pulmonary venous connections was investigated. Study A included 260 fetuses at gestational ages ranging from 18 to 36 weeks. Pulmonary veins were assessed by performing two-dimensional B-mode imaging, color Doppler flow imaging (CDFI), and low-frequency high-definition power Doppler. A score of 1 was assigned if one pulmonary vein was visualized, 2 if two pulmonary veins were visualized, 3 if three pulmonary veins were visualized, and 4 if four pulmonary veins were visualized. The detection rate between Exam-1 and Exam-2 (intra-observer variability) and between Exam-1 and Exam-3 (inter-observer variability) was compared. In study B, five cases with abnormal pulmonary venous connections were diagnosed and compared with their anatomical examinations. In study A, there was a significant difference between CDFI and low-frequency high-definition power Doppler for the four pulmonary veins observed (P < 0.05). The detection rate of each pulmonary vein when employing low-frequency high-definition power Doppler was higher than that when employing two-dimensional B-mode imaging or CDFI. There was no significant difference between the intra- and inter-observer variabilities using low-frequency high-definition power Doppler display of pulmonary veins (P > 0.05). The correlation coefficient between Exam-1 and Exam-2 was 0.844, and the correlation coefficient between Exam-1 and Exam-3 was 0.821. In study B, one case of total anomalous pulmonary venous return and four cases of partial anomalous pulmonary venous return were diagnosed by low-frequency high-definition power Doppler and confirmed by autopsy. The assessment of pulmonary venous connections by low-frequency high-definition power Doppler is advantageous. Pulmonary venous anatomy can and should be monitored during fetal heart examination.

  5. Evaluation of training nurses to perform semi-automated three-dimensional left ventricular ejection fraction using a customised workstation-based training protocol.

    PubMed

    Guppy-Coles, Kristyan B; Prasad, Sandhir B; Smith, Kym C; Hillier, Samuel; Lo, Ada; Atherton, John J

    2015-06-01

    We aimed to determine the feasibility of training cardiac nurses to evaluate left ventricular function utilising a semi-automated, workstation-based protocol on three dimensional echocardiography images. Assessment of left ventricular function by nurses is an attractive concept. Recent developments in three dimensional echocardiography coupled with border detection assistance have reduced inter- and intra-observer variability and analysis time. This could allow abbreviated training of nurses to assess cardiac function. A comparative, diagnostic accuracy study evaluating left ventricular ejection fraction assessment utilising a semi-automated, workstation-based protocol performed by echocardiography-naïve nurses on previously acquired three dimensional echocardiography images. Nine cardiac nurses underwent two brief lectures about cardiac anatomy, physiology and three dimensional left ventricular ejection fraction assessment, before a hands-on demonstration in 20 cases. We then selected 50 cases from our three dimensional echocardiography library based on optimal image quality with a broad range of left ventricular ejection fractions, which was quantified by two experienced sonographers and the average used as the comparator for the nurses. Nurses independently measured three dimensional left ventricular ejection fraction using the Auto lvq package with semi-automated border detection. The left ventricular ejection fraction range was 25-72% (70% with a left ventricular ejection fraction <55%). All nurses showed excellent agreement with the sonographers. Minimal intra-observer variability was noted on both short-term (same day) and long-term (>2 weeks later) retest. It is feasible to train nurses to measure left ventricular ejection fraction utilising a semi-automated, workstation-based protocol on previously acquired three dimensional echocardiography images. Further study is needed to determine the feasibility of training nurses to acquire three dimensional echocardiography images on real-world patients to measure left ventricular ejection fraction. Nurse-performed evaluation of left ventricular function could facilitate the broader application of echocardiography to allow cost-effective screening and monitoring for left ventricular dysfunction in high-risk populations. © 2014 John Wiley & Sons Ltd.

  6. Quantity-activity relationship of denitrifying bacteria and environmental scaling in streams of a forested watershed

    USGS Publications Warehouse

    O'Connor, B.L.; Hondzo, Miki; Dobraca, D.; LaPara, T.M.; Finlay, J.A.; Brezonik, P.L.

    2006-01-01

    The spatial variability of subreach denitrification rates in streams was evaluated with respect to controlling environmental conditions, molecular examination of denitrifying bacteria, and dimensional analysis. Denitrification activities ranged from 0 to 800 ng-N gsed⁻¹ d⁻¹, with large variations observed within short distances (<50 m) along stream reaches. A log-normal probability distribution described the range in denitrification activities and was used to define low (16% of the probability distribution), medium (68%), and high (16%) denitrification potential groups. Denitrifying bacteria were quantified using a competitive polymerase chain reaction (cPCR) technique that amplified the nirK gene, which encodes for nitrite reductase. Results showed a range of nirK quantities from 10³ to 10⁷ gene copy numbers gsed⁻¹. A nonparametric statistical test showed no significant difference in nirK quantities among stream reaches, but revealed that samples with a high denitrification potential had significantly higher nirK quantities. Denitrification activity was positively correlated with nirK quantities, with scatter in the data that can be attributed to varying environmental conditions along stream reaches. Dimensional analysis was used to evaluate denitrification activities according to environmental variables that describe fluid-flow properties, nitrate and organic material quantities, and dissolved oxygen flux. Buckingham's pi theorem was used to generate dimensionless groupings, and field data were used to determine scaling parameters. The resulting expressions between dimensionless NO3⁻ flux and dimensionless groupings of environmental variables showed consistent scaling, which indicates that the subreach variability in denitrification rates can be predicted from the controlling physical, chemical, and microbiological conditions. Copyright 2006 by the American Geophysical Union.
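
    The Buckingham pi construction can be mechanized: each dimensionless group corresponds to a null-space vector of the matrix of dimensional exponents. A minimal sketch with SymPy follows; the three variables and their dimensions are illustrative stand-ins, not the study's actual scaling set:

    import sympy as sp

    # Columns: velocity U [L T^-1], depth H [L], diffusivity D [L^2 T^-1].
    # Rows: exponents of length L and time T.
    dim_matrix = sp.Matrix([[1, 1, 2],
                            [-1, 0, -1]])

    # Each null-space vector holds the exponents of one dimensionless group.
    for vec in dim_matrix.nullspace():
        print(vec.T)  # [-1, -1, 1]: Pi = D/(U*H), an inverse Peclet number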

  7. A priori and a posteriori analyses of the flamelet/progress variable approach for supersonic combustion

    NASA Astrophysics Data System (ADS)

    Saghafian, Amirreza; Pitsch, Heinz

    2012-11-01

    A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions, and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of a reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on presumed subgrid PDFs of the mixture fraction and progress variable (a beta function and a delta function, respectively), which are assessed using the DNS databases. The flamelet equation budget is also computed to verify the validity of the CFPV method for high-speed flows.

  8. Parametric analysis of diffuser requirements for high expansion ratio space engine

    NASA Technical Reports Server (NTRS)

    Wojciechowski, C. J.; Anderson, P. G.

    1981-01-01

    A supersonic diffuser ejector design computer program was developed. Using empirically modified one-dimensional flow methods, the code specifies the diffuser ejector geometry. The design code results were verified for calculations up to the end of the diffuser second throat. Diffuser requirements for sea-level testing of high expansion ratio space engines were defined. The feasibility of an ejector system using two commonly available turbojet engines feeding two variable area ratio ejectors was demonstrated.

  9. Differences in wrist mechanics during the golf swing based on golf handicap.

    PubMed

    Fedorcik, Gregory G; Queen, Robin M; Abbey, Alicia N; Moorman, Claude T; Ruch, David S

    2012-05-01

    Variation in swing mechanics between golfers of different skill levels has been previously reported. The purpose of this study was to investigate whether three-dimensional wrist kinematics and the angle of golf club descent differ between low and high handicap golfers. A descriptive laboratory study was performed with twenty-eight male golfers divided into two groups: low handicap golfers (handicap = 0-5, n = 15) and high handicap golfers (handicap ≥ 10, n = 13). Bilateral peak three-dimensional wrist mechanics, bilateral wrist mechanics at ball contact (BC), peak angle of descent from the end of the backswing to ball contact, and the angle of descent when the forearm was parallel to the ground (DEC-PAR) were determined using an 8-camera motion capture system. Independent t-tests were completed for each study variable (α = 0.05). Pearson correlation coefficients were determined between golf handicap and each of the study variables. The peak lead arm radial deviation (5.7 degrees, p = 0.008), lead arm radial deviation at ball contact (7.1 degrees, p = 0.001), and DEC-PAR (15.8 degrees, p = 0.002) were significantly greater in the high handicap group. In comparison with golfers with a low handicap, golfers with a high handicap have increased radial deviation during the golf swing and at ball contact. Copyright © 2011 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  10. Visualization of Potential Energy Function Using an Isoenergy Approach and 3D Prototyping

    ERIC Educational Resources Information Center

    Teplukhin, Alexander; Babikov, Dmitri

    2015-01-01

    In our three-dimensional world, one can plot, see, and comprehend a function of two variables at most, V(x,y). One cannot plot a function of three or more variables. For this reason, visualization of the potential energy function in its full dimensionality is impossible even for the smallest polyatomic molecules, such as triatomics. This creates…

  11. Using Dynamic Mathematics Software to Teach One-Variable Inequalities by the View of Semiotic Registers

    ERIC Educational Resources Information Center

    Kabaca, Tolga

    2013-01-01

    The solution set of any one-variable inequality or compound inequality lies on the real line, which is one-dimensional. A difficulty therefore appears when computer-assisted graphical representation is to be used for teaching these topics. Sketching a one-dimensional graph using computer software is not straightforward work. In this…

  12. Forms of null Lagrangians in field theories of continuum mechanics

    NASA Astrophysics Data System (ADS)

    Kovalev, V. A.; Radaev, Yu. N.

    2012-02-01

    The divergence representation of a null Lagrangian that is regular in a star-shaped domain is used to obtain its general expression containing field gradients of order ≤ 1 in the case of spacetime of arbitrary dimension. It is shown that for a static three-component field in three-dimensional space, a null Lagrangian can contain up to 15 independent elements in total. The general form of a null Lagrangian in four-dimensional Minkowski spacetime is obtained (the number of physical field variables is assumed arbitrary). A complete theory of the null Lagrangian for the n-dimensional spacetime manifold (including four-dimensional Minkowski spacetime as a special case) is given. Null Lagrangians are then used as a basis for solving an important variational problem of an integrating factor. This problem involves searching for factors that depend on the spacetime variables, the field variables, and their gradients and that, for a given system of partial differential equations, ensure equality between the scalar product of a vector multiplier with the system vector and some divergence expression for arbitrary field variables, hence allowing one to formulate a divergence conservation law on solutions to the system.

  13. Development and external validation of new ultrasound-based mathematical models for preoperative prediction of high-risk endometrial cancer.

    PubMed

    Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E

    2014-05-01

    To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%), respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy, and do so better than previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
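
    As a rough illustration of the 'objective' two-variable strategy, the sketch below fits a logistic regression on simulated data (invented coefficients and units, not the published model) and reads off sensitivity and specificity at the 0.50 cut-off used above:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 300
    grade = rng.integers(1, 4, size=n)            # preoperative grade 1-3
    tfm = rng.uniform(0, 15, size=n)              # tumor-free myometrium (mm)
    logit = -1.0 + 1.2 * (grade - 2) - 0.25 * tfm
    y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated high-risk label

    X = np.column_stack([grade, tfm])
    model = LogisticRegression().fit(X, y)
    pred = model.predict_proba(X)[:, 1] >= 0.50   # second-step cut-off

    tp = np.sum(pred & y); fn = np.sum(~pred & y)
    tn = np.sum(~pred & ~y); fp = np.sum(pred & ~y)
    print(f"sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")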

  14. Starspot detection and properties

    NASA Astrophysics Data System (ADS)

    Savanov, I. S.

    2013-07-01

    I review the currently available techniques for starspot detection, including one-dimensional spot modelling of photometric light curves. Special attention is paid to the modelling of photospheric activity based on the high-precision light curves obtained with the space missions MOST, CoRoT, and Kepler. Physical spot parameters (temperature, sizes, and variability time scales, including short-term activity cycles) are discussed.

  15. Comparisons of interlaboratory swellometer testing of two water-repellent preservative formulations for millwork

    Treesearch

    Elmer L. Schmidt; Timothy P. Murphy; Charles N. Cheeks; Alan S. Ross; T. S. (Eugene) Chiu; R. Sam Williams

    2002-01-01

    Water-repellency of preservative formulations used in the millwork industry has long been evaluated by measurement of the dimensional changes in wood treated and then submerged in water according to guidelines published by the millwork industry. Perceptions that this swellometer test was highly variable led to a round-robin test of one solvent-borne and one waterborne...

  16. Detection of Answer Copying Based on the Structure of a High-Stakes Test

    ERIC Educational Resources Information Center

    Belov, Dmitry I.

    2011-01-01

    This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…

  17. Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2D2) bias correction

    NASA Astrophysics Data System (ADS)

    Vrac, Mathieu

    2018-06-01

    Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied to temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model's multivariate properties, 2d-R2D2 is only satisfactory in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives for improvement are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal properties adjustment while including more physics in the adjustment procedures.
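
    The core rank-resampling idea admits a compact sketch (illustrative only, not the authors' R2D2 code): after a univariate correction, the values in each statistical dimension are reordered so that their rank sequence follows a chosen reference, restoring the dependence structure:

    import numpy as np
    from scipy.stats import rankdata

    def rank_shuffle(corrected, reference):
        """Reorder each column of `corrected` to follow the rank order of
        the matching column of `reference` (one column per site/variable)."""
        out = np.empty_like(corrected)
        for j in range(corrected.shape[1]):
            ranks = rankdata(reference[:, j], method="ordinal").astype(int) - 1
            out[:, j] = np.sort(corrected[:, j])[ranks]
        return out

    rng = np.random.default_rng(0)
    ref = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=1000)
    sim = rng.normal(size=(1000, 2))       # bias-corrected but dependence-free
    shuffled = rank_shuffle(sim, ref)
    print(np.corrcoef(shuffled.T)[0, 1])   # ~0.8, matching the reference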

  18. Computational modeling of unsteady third-grade fluid flow over a vertical cylinder: A study of heat transfer visualization

    NASA Astrophysics Data System (ADS)

    Reddy, G. Janardhana; Hiremath, Ashwini; Kumar, Mahesh

    2018-03-01

    The present paper aims to investigate the effect of Prandtl number for unsteady third-grade fluid flow over a uniformly heated vertical cylinder using Bejan's heat function concept. The mathematical model of this problem is given by highly non-linear, coupled, time-dependent equations, which are solved by an efficient, unconditionally stable implicit scheme. The time histories of the average momentum and heat transport coefficients, as well as the steady-state flow variables, are displayed graphically for distinct values of the non-dimensional control parameters arising in the system. As the non-dimensional parameter values are amplified, the time taken for the fluid flow variables to attain the time-independent state decreases. The dimensionless heat function values are closely associated with the overall rate of heat transfer. Thermal energy transfer visualization implies that the heat function contours are compact in the neighborhood of the leading edge of the hot cylindrical wall. It is noticed that the deviations of the flow-field variables from the hot wall for a non-Newtonian third-grade fluid flow are significant compared to the usual Newtonian fluid flow.

  19. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
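
    The metamodel concept lends itself to a short sketch: sample the parameter space, run one expensive simulation per sample, then interpolate the response surface. The code below uses a hypothetical toy response and SciPy's RBFInterpolator (a stand-in, not the CONVERGE/Titan workflow or its sparse-grid machinery):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def engine_response(x):
        """Toy stand-in for one simulated cycle: heat release vs. EGR, spark."""
        egr, spark = x[:, 0], x[:, 1]
        return np.exp(-3.0 * egr) * np.cos(2.0 * spark)

    rng = np.random.default_rng(0)
    samples = rng.uniform(0, 1, size=(200, 2))    # sampled (EGR, spark) points
    surrogate = RBFInterpolator(samples, engine_response(samples))

    query = np.array([[0.3, 0.5]])                # unsampled operating point
    print(surrogate(query), engine_response(query))   # surrogate vs. "truth"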

  20. Network-based regularization for matched case-control analysis of high-dimensional DNA methylation data.

    PubMed

    Sun, Hokeun; Wang, Shuang

    2013-05-30

    The matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed the penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the popularly applied matched designs in epigenetic studies that compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring matching is known to introduce serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC), in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients using the Illumina Infinium HumanMethylation27 BeadChip. Several new CpG sites and genes known to be related to HCC were identified but were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
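
    The matching issue has a convenient 1:1 special case: the conditional logistic likelihood for a matched pair reduces to an intercept-free logistic regression on within-pair covariate differences, to which a sparsity penalty can be attached. The sketch below uses simulated data and a plain lasso penalty (not the paper's network-based penalty):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_pairs, p = 200, 50
    beta = np.zeros(p); beta[:3] = 1.0        # three truly associated CpG sites
    x_case = rng.normal(size=(n_pairs, p)) + 0.8 * beta
    x_ctrl = rng.normal(size=(n_pairs, p))

    diff = x_case - x_ctrl                    # within-pair differences
    # Symmetrize so the intercept-free model fits with a stock solver.
    X = np.vstack([diff, -diff])
    y = np.r_[np.ones(n_pairs), np.zeros(n_pairs)]
    clm = LogisticRegression(penalty="l1", solver="liblinear",
                             fit_intercept=False, C=0.1).fit(X, y)
    print(np.flatnonzero(clm.coef_))          # indices of selected CpG sites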

  1. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for the random variables through linear mappings such that the representation of the quantity of interest is sparser in the basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  2. A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for high Reynolds number laminar flows

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook

    1988-01-01

    A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for the Navier-Stokes equations is presented. In the method, the velocity variables are interpolated using complete quadratic shape functions and the pressure is interpolated using linear shape functions. For the two-dimensional case, the pressure is defined on a triangular element which is contained inside the complete biquadratic element for the velocity variables; for the three-dimensional case, the pressure is defined on a tetrahedral element which is again contained inside the complete tri-quadratic element. Thus the pressure is discontinuous across the element boundaries. Example problems considered include: a cavity flow for Reynolds numbers of 400 through 10,000; a laminar backward-facing step flow; and a laminar flow in a square duct of strong curvature. The computational results compared favorably with those of finite difference methods as well as available experimental data. A finite element computer program for incompressible, laminar flows is presented.

  3. A non-local computational boundary condition for duct acoustics

    NASA Technical Reports Server (NTRS)

    Zorumski, William E.; Watson, Willie R.; Hodge, Steve L.

    1994-01-01

    A non-local boundary condition is formulated for acoustic waves in ducts without flow. The ducts are two dimensional with constant area, but with variable impedance wall lining. Extension of the formulation to three dimensional and variable area ducts is straightforward in principle, but requires significantly more computation. The boundary condition simulates a nonreflecting wave field in an infinite duct. It is implemented by a constant matrix operator which is applied at the boundary of the computational domain. An efficient computational solution scheme is developed which allows calculations for high frequencies and long duct lengths. This computational solution utilizes the boundary condition to limit the computational space while preserving the radiation boundary condition. The boundary condition is tested for several sources. It is demonstrated that the boundary condition can be applied close to the sound sources, rendering the computational domain small. Computational solutions with the new non-local boundary condition are shown to be consistent with the known solutions for nonreflecting wavefields in an infinite uniform duct.

  4. Review of literature on the finite-element solution of the equations of two-dimensional surface-water flow in the horizontal plane

    USGS Publications Warehouse

    Lee, Jonathan K.; Froehlich, David C.

    1987-01-01

    Published literature on the application of the finite-element method to solving the equations of two-dimensional surface-water flow in the horizontal plane is reviewed in this report. The finite-element method is ideally suited to modeling two-dimensional flow over complex topography with spatially variable resistance. A two-dimensional finite-element surface-water flow model with depth and vertically averaged velocity components as dependent variables allows the user great flexibility in defining geometric features such as the boundaries of a water body, channels, islands, dikes, and embankments. The following topics are reviewed in this report: alternative formulations of the equations of two-dimensional surface-water flow in the horizontal plane; basic concepts of the finite-element method; discretization of the flow domain and representation of the dependent flow variables; treatment of boundary conditions; discretization of the time domain; methods for modeling bottom, surface, and lateral stresses; approaches to solving systems of nonlinear equations; techniques for solving systems of linear equations; finite-element alternatives to Galerkin's method of weighted residuals; techniques of model validation; and preparation of model input data. References are listed in the final chapter.

  5. Consistency of clinical biomechanical measures between three different institutions: implications for multi-center biomechanical and epidemiological research.

    PubMed

    Myer, Gregory D; Wordeman, Samuel C; Sugimoto, Dai; Bates, Nathaniel A; Roewer, Benjamin D; Medina McKeon, Jennifer M; DiCesare, Christopher A; Di Stasi, Stephanie L; Barber Foss, Kim D; Thomas, Staci M; Hewett, Timothy E

    2014-05-01

    Multi-center collaborations provide a powerful alternative to overcome the inherent limitations of single-center investigations. Specifically, multi-center projects can support large-scale prospective, longitudinal studies that investigate relatively uncommon outcomes, such as anterior cruciate ligament injury. This project was conceived to assess within- and between-center reliability of an affordable, clinical nomogram utilizing two-dimensional video methods to screen for risk of knee injury. The authors hypothesized that the two-dimensional screening methods would provide good-to-excellent reliability within and between institutions for assessment of frontal and sagittal plane biomechanics. Nineteen female high school athletes participated. Two-dimensional video kinematics of the lower extremity during a drop vertical jump task were collected on all 19 study participants at each of the three facilities. Within-center and between-center reliability were assessed with intra- and inter-class correlation coefficients. Within-center reliability of the clinical nomogram variables was consistently excellent, but between-center reliability was fair-to-good. The within-center intra-class correlation coefficient for all nomogram variables combined was 0.98, while the combined between-center inter-class correlation coefficient was 0.63. Injury risk screening protocols were reliable within and repeatable between centers. These results demonstrate the feasibility of multi-site biomechanical studies and establish a framework for further dissemination of injury risk screening algorithms. Specifically, multi-center studies may allow for further validation and optimization of two-dimensional video screening tools. Level of evidence: 2b.

  6. Reconstruction of three-dimensional porous media using generative adversarial neural networks

    NASA Astrophysics Data System (ADS)

    Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.

    2017-10-01

    To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
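
    The fully convolutional property is easy to demonstrate: a generator built only from transposed convolutions accepts latent grids of any spatial size, so one trained model can emit arbitrarily large volumes. A hedged PyTorch sketch follows (invented layer sizes, no training loop, i.e., the adversarial part is omitted):

    import torch
    import torch.nn as nn

    G = nn.Sequential(                 # latent grid -> synthetic volume
        nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1), nn.Tanh(),
    )

    z_small = torch.randn(1, 64, 4, 4, 4)      # training-size latent grid
    z_large = torch.randn(1, 64, 12, 12, 12)   # larger grid, same weights
    print(G(z_small).shape)    # torch.Size([1, 1, 32, 32, 32])
    print(G(z_large).shape)    # torch.Size([1, 1, 96, 96, 96])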

  7. Band gaps in grid structure with periodic local resonator subsystems

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaoqin; Wang, Jun; Wang, Rongqi; Lin, Jieqiong

    2017-09-01

    The grid structure is widely used in architectural and mechanical fields for its high strength and material savings. This paper presents a study of an acoustic metamaterial beam (AMB) based on a normal square grid structure with local resonators, which possesses both flexible band gaps and high static stiffness and therefore has high application potential in vibration control. First, the AMB with a variable cross-section frame is analytically modeled by a beam-spring-mass model derived using the extended Hamilton's principle and Bloch's theorem. This model is used to compute the dispersion relation of the designed AMB in terms of the design parameters, and the influences of the relevant parameters on the band gaps are discussed. Then a two-dimensional finite element model of the AMB is built and analyzed in COMSOL Multiphysics; both the dispersion properties of the unit cell and the wave attenuation in a finite AMB show close agreement with the derived model. The effects of the design parameters of the two-dimensional model on the band gaps are further examined, and the obtained results corroborate the analytical model. Finally, the wave attenuation performances in three-dimensional AMBs with equal and unequal thickness are presented and discussed.
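
    A one-dimensional analogue conveys the mechanics of the Bloch-based dispersion calculation: for a hypothetical chain of host masses with attached spring-mass resonators (a sketch of the method, not the paper's variable cross-section beam model), Bloch's theorem reduces the infinite chain to a 2x2 eigenproblem per wavenumber, and the band gap opens between the two branches:

    import numpy as np

    m, k = 1.0, 1.0      # host mass and spring (illustrative units)
    mr, kr = 0.5, 0.3    # attached local resonator

    qs = np.linspace(1e-3, np.pi, 200)   # Bloch wavenumber, lattice constant 1
    bands = []
    for q in qs:
        # Dynamic stiffness of host + resonator under Bloch periodicity.
        K = np.array([[2 * k * (1 - np.cos(q)) + kr, -kr],
                      [-kr, kr]], dtype=complex)
        M = np.diag([m, mr])
        w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
        bands.append(np.sort(np.sqrt(w2.real)))
    bands = np.array(bands)
    # The gap lies between the acoustic-branch top and optical-branch bottom.
    print(f"band gap: {bands[:, 0].max():.3f} to {bands[:, 1].min():.3f}")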

  8. Analysis and Design of High-Order Parallel Resonant Converters

    NASA Astrophysics Data System (ADS)

    Batarseh, Issa Eid

    1990-01-01

    In this thesis, a special state variable transformation technique has been derived for the analysis of high-order dc-to-dc resonant converters. Converters comprising high-order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high-order converters. Such a method has been successfully used for the analysis of the conventional parallel resonant converter (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady-state response of third- and fourth-order PRCs when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRCs is presented, from which various converter design parameters are obtained. Various design curves for component value selection and device ratings are given. This analysis of high-order resonant converters shows that the addition of reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second-order PRC. A complete design procedure along with design examples for 2nd-, 3rd- and 4th-order converters is presented. Practical power supply units, normally used for computer applications, were built and tested using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.

  9. Three dimensional simulation of spatial and temporal variability of stratospheric hydrogen chloride

    NASA Technical Reports Server (NTRS)

    Kaye, Jack A.; Rood, Richard B.; Jackman, Charles H.; Allen, Dale J.; Larson, Edmund M.

    1989-01-01

    Spatial and temporal variability of atmospheric HCl columns is calculated for January 1979 using a three-dimensional chemistry-transport model designed to provide the best possible representation of stratospheric transport. Large spatial and temporal variability of the HCl columns is shown to be correlated with lower stratospheric potential vorticity and thus to be of dynamical origin. Systematic longitudinal structure is correlated with planetary wave structure. These results can help place spatially and temporally isolated column and profile measurements in a regional and/or global perspective.

  10. VALIDITY OF A TWO-DIMENSIONAL MODEL FOR VARIABLE-DENSITY HYDRODYNAMIC CIRCULATION

    EPA Science Inventory

    A three-dimensional model of temperatures and currents has been formulated to assist in the analysis and interpretation of the dynamics of stratified lakes. In this model, nonlinear eddy coefficients for viscosity and conductivities are included. A two-dimensional model (one vert...

  11. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    PubMed

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems in which decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there is interrelationship among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the results of decision making.
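
    The scalar partitioned Bonferroni mean underlying these operators can be sketched directly (illustrative exponents and partition; the paper's TF2DLPBM applies the same structure to trapezoidal fuzzy two-dimensional linguistic values rather than crisp numbers):

    from itertools import permutations

    def pbm(values, partition, p=1.0, q=1.0):
        """Average over partition blocks of each block's Bonferroni mean."""
        total = 0.0
        for block in partition:
            pairs = [(values[i], values[j]) for i, j in permutations(block, 2)]
            inner = sum(a**p * b**q for a, b in pairs) / len(pairs)
            total += inner ** (1.0 / (p + q))
        return total / len(partition)

    # Attributes 0 and 1 are interrelated, as are 2 and 3; the two blocks
    # are treated as independent of each other.
    scores = [0.6, 0.8, 0.3, 0.9]
    print(pbm(scores, partition=[(0, 1), (2, 3)]))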

  13. Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators

    PubMed Central

    Bodenmiller, Bernd; Zunder, Eli R.; Finck, Rachel; Chen, Tiffany J.; Savig, Erica S.; Bruggner, Robert V.; Simonds, Erin F.; Bendall, Sean C.; Sachs, Karen; Krutzik, Peter O.; Nolan, Garry P.

    2013-01-01

    The ability to comprehensively explore the impact of bio-active molecules on human samples at the single-cell level can provide great insight for biomedical research. Mass cytometry enables quantitative single-cell analysis with deep dimensionality, but currently lacks high-throughput capability. Here we report a method termed mass-tag cellular barcoding (MCB) that increases mass cytometry throughput by sample multiplexing. 96-well format MCB was used to characterize human peripheral blood mononuclear cell (PBMC) signaling dynamics, cell-to-cell communication, the signaling variability between 8 donors, and to define the impact of 27 inhibitors on this system. For each compound, 14 phosphorylation sites were measured in 14 PBMC types, resulting in 18,816 quantified phosphorylation levels from each multiplexed sample. This high-dimensional systems-level inquiry allowed analysis across cell-type and signaling space, reclassified inhibitors, and revealed off-target effects. MCB enables high-content, high-throughput screening, with potential applications for drug discovery, pre-clinical testing, and mechanistic investigation of human disease. PMID:22902532

  14. Photographic investigation into the mechanism of combustion in irregular detonation waves

    NASA Astrophysics Data System (ADS)

    Kiyanda, C. B.; Higgins, A. J.

    2013-03-01

    Irregular detonations are supersonic combustion waves in which the inherent multi-dimensional structure is highly variable. In such waves, it is questionable whether auto-ignition induced by shock compression is the only combustion mechanism present. Through the use of high-speed schlieren and self-emitted light photography, the velocity of the different components of detonation waves in a CH₄ + 2O₂ mixture is analyzed. The observed burn-out of unreacted pockets is hypothesized to be due to turbulent combustion.

  15. Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.

    PubMed

    Galvelis, Raimondas; Sugita, Yuji

    2017-06-13

    The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies a bias (i.e., importance sampling) along a set of collective variables (CVs), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations, accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating the free energy of polypeptides with up to an 8-dimensional bias potential.
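
    NN2B couples a nearest-neighbor density estimate with an ANN approximation of the bias. The sketch below shows that two-step core in its simplest, single-pass form, assuming CV samples are already available (the placeholder data, k, network size, and temperature are illustrative; the paper's iterative biased-MD loop is not reproduced):

    ```python
    import numpy as np
    from scipy.special import gamma
    from sklearn.neighbors import NearestNeighbors
    from sklearn.neural_network import MLPRegressor

    def nn_density(samples, k=10):
        """Nearest-neighbor density estimate in d dimensions:
        rho_i ~ k / (N * volume of the k-NN ball around sample i)."""
        n, d = samples.shape
        nn = NearestNeighbors(n_neighbors=k + 1).fit(samples)
        dist, _ = nn.kneighbors(samples)
        r = dist[:, -1]                              # distance to the k-th neighbor
        v_unit = np.pi ** (d / 2) / gamma(d / 2 + 1) # unit-ball volume in d dims
        return k / (n * v_unit * r ** d)

    kT = 2.494                                       # kJ/mol at 300 K (assumed units)
    rng = np.random.default_rng(0)
    cv_samples = rng.standard_normal((2000, 4))      # stand-in for CVs from biased MD
    free_energy = -kT * np.log(nn_density(cv_samples))

    # ANN approximation of the bias potential (negative of the free energy estimate)
    bias_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000)
    bias_model.fit(cv_samples, -free_energy)
    ```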

  16. Three-dimensional benchmark for variable-density flow and transport simulation: matching semi-analytic stability modes for steady unstable convection in an inclined porous box

    USGS Publications Warehouse

    Voss, Clifford I.; Simmons, Craig T.; Robinson, Neville I.

    2010-01-01

    This benchmark for three-dimensional (3D) numerical simulators of variable-density groundwater flow and solute or energy transport consists of matching simulation results with the semi-analytical solution for the transition from one steady-state convective mode to another in a porous box. Previous experimental and analytical studies of natural convective flow in an inclined porous layer have shown that there are a variety of convective modes possible depending on system parameters, geometry and inclination. In particular, there is a well-defined transition from the helicoidal mode consisting of downslope longitudinal rolls superimposed upon an upslope unicellular roll to a mode consisting of purely an upslope unicellular roll. Three-dimensional benchmarks for variable-density simulators are currently (2009) lacking and comparison of simulation results with this transition locus provides an unambiguous means to test the ability of such simulators to represent steady-state unstable 3D variable-density physics.

  17. Rapid in vitro labeling procedures for two-dimensional gel fingerprinting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y.F.; Fowlks, E.R.

    1982-01-15

    Improvements of existing in vitro procedures for labeling RNA radioactively, and modifications of the two-dimensional polyacrylamide gel electrophoresis system for making RNA fingerprints, are described. These improvements are (a) inactivation of phosphatase with nitric acid at pH 2.0, which eliminates the phenol-chloroform extraction step during 5'-end labeling with polynucleotide kinase and (gamma-32P)ATP; (b) ZnSO4 inactivation of RNase T1, which results in a highly efficient procedure for 3'-end labeling with T4 ligase and (5'-32P)pCp; and (c) a rapid 4-min procedure for a variable quantity range of 125I and RNA, which yields a qualitative and quantitative sample for high-molecular-weight RNA fingerprinting. Thus, these in vitro procedures become rapid and reproducible when combined with two-dimensional gel electrophoresis, which simultaneously eliminates labeled impurities. Each labeling procedure is compared using tobacco mosaic virus, Brome mosaic virus, and polio RNA. A series of Ap-rich oligonucleotides was discovered in the inner genome of Brome mosaic virus RNA-3.

  18. Predicting Viral Infection From High-Dimensional Biomarker Trajectories

    PubMed Central

    Chen, Minhua; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S.; Lucas, Joseph; Dunson, David; Carin, Lawrence

    2013-01-01

    There is often interest in predicting an individual’s latent health status based on high-dimensional biomarkers that vary over time. Motivated by time-course gene expression array data that we have collected in two influenza challenge studies performed with healthy human volunteers, we develop a novel time-aligned Bayesian dynamic factor analysis methodology. The time course trajectories in the gene expressions are related to a relatively low-dimensional vector of latent factors, which vary dynamically starting at the latent initiation time of infection. Using a nonparametric cure rate model for the latent initiation times, we allow selection of the genes in the viral response pathway, variability among individuals in infection times, and a subset of individuals who are not infected. As we demonstrate using held-out data, this statistical framework allows accurate predictions of infected individuals in advance of the development of clinical symptoms, without labeled data and even when the number of biomarkers vastly exceeds the number of individuals under study. Biological interpretation of several of the inferred pathways (factors) is provided. PMID:23704802

  19. Wavepacket dynamics and the multi-configurational time-dependent Hartree approach

    NASA Astrophysics Data System (ADS)

    Manthe, Uwe

    2017-06-01

    Multi-configurational time-dependent Hartree (MCTDH) based approaches are efficient, accurate, and versatile methods for high-dimensional quantum dynamics simulations. Applications range from detailed investigations of polyatomic reaction processes in the gas phase to high-dimensional simulations studying the dynamics of condensed phase systems described by typical solid state physics model Hamiltonians. The present article presents an overview of the different areas of application and provides a comprehensive review of the underlying theory. The concepts and guiding ideas underlying the MCTDH approach and its multi-mode and multi-layer extensions are discussed in detail. The general structure of the equations of motion is highlighted. The representation of the Hamiltonian and the correlated discrete variable representation (CDVR), which provides an efficient multi-dimensional quadrature in MCTDH calculations, are discussed. Methods which facilitate the calculation of eigenstates, the evaluation of correlation functions, and the efficient representation of thermal ensembles in MCTDH calculations are described. Different schemes for the treatment of indistinguishable particles in MCTDH calculations and recent developments towards a unified multi-layer MCTDH theory for systems including bosons and fermions are discussed.

  20. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  1. High- and low-level hierarchical classification algorithm based on source separation process

    NASA Astrophysics Data System (ADS)

    Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber

    2016-10-01

    High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis in high-dimensional spaces such as hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use independent component analysis as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most hierarchical algorithms associate leaves with individual clusters and start from a large number of individual classes equal to the number of pixels; in our approach, however, leaves are associated with the most relevant sources, represented along mutually independent axes, so that specific land covers are captured by a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost, since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results, since the low-level agglomerative clustering is guided by the most relevant independent sources. At each new step we thus obtain a finer partition that participates in the clustering process to enhance semantic capabilities and give good identification rates.

  2. Improved high-dimensional prediction with Random Forests by the use of co-data.

    PubMed

    Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A

    2017-12-28

    Prediction in high dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities that are used to draw candidate variables with co-data-moderated sampling probabilities. Co-data are defined here as any type of information that is available on the variables of the primary data but does not use its response labels. These moderated sampling probabilities are learned from the data at hand, in a manner inspired by empirical Bayes. We demonstrate the co-data-moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis with gene expression data. We demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve the predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer with methylation data can be improved by including the location of the probe relative to the known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
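
    The mechanical change CoRF makes inside the forest is small: the mtry candidate variables at each split are drawn with non-uniform probabilities derived from co-data. A minimal sketch of that sampling step (the score-to-probability mapping below is a naive placeholder; CoRF learns the moderation from the data in an empirical-Bayes fashion):

    ```python
    import numpy as np

    def codata_sampling_probs(codata_scores, floor=1e-4):
        """Turn co-data scores (e.g. -log10 of external p-values) into
        candidate-variable sampling probabilities for a Random Forest."""
        w = np.maximum(np.asarray(codata_scores, dtype=float), 0.0) + floor
        return w / w.sum()

    def draw_candidate_variables(probs, mtry, rng):
        """Replace the uniform mtry draw at each split by a weighted draw."""
        return rng.choice(len(probs), size=mtry, replace=False, p=probs)

    rng = np.random.default_rng(0)
    p_values = rng.uniform(size=5000)            # hypothetical external p-values
    probs = codata_sampling_probs(-np.log10(p_values))
    print(draw_candidate_variables(probs, mtry=70, rng=rng))
    ```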

  3. Genome Data Exploration Using Correspondence Analysis

    PubMed Central

    Tekaia, Fredj

    2016-01-01

    Recent developments of sequencing technologies that allow the production of massive amounts of genomic and genotyping data have highlighted the need for synthetic data representation and pattern recognition methods that can mine and help discovering biologically meaningful knowledge included in such large data sets. Correspondence analysis (CA) is an exploratory descriptive method designed to analyze two-way data tables, including some measure of association between rows and columns. It constructs linear combinations of variables, known as factors. CA has been used for decades to study high-dimensional data, and remarkable inferences from large data tables were obtained by reducing the dimensionality to a few orthogonal factors that correspond to the largest amount of variability in the data. Herein, I review CA and highlight its use by considering examples in handling high-dimensional data that can be constructed from genomic and genetic studies. Examples in amino acid compositions of large sets of species (viruses, phages, yeast, and fungi) as well as an example related to pairwise shared orthologs in a set of yeast and fungal species, as obtained from their proteome comparisons, are considered. For the first time, results show striking segregations between yeasts and fungi as well as between viruses and phages. Distributions obtained from shared orthologs show clusters of yeast and fungal species corresponding to their phylogenetic relationships. A direct comparison with the principal component analysis method is discussed using a recently published example of genotyping data related to newly discovered traces of an ancient hominid that was compared to modern human populations in the search for ancestral similarities. CA offers more detailed results highlighting links between modern humans and the ancient hominid and their characterizations. Compared to the popular principal component analysis method, CA allows easier and more effective interpretation of results, particularly by the ability of relating individual patterns with their corresponding characteristic variables. PMID:27279736
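
    The CA computation itself is compact: form the correspondence matrix, take the SVD of the standardized residuals, and scale the singular vectors by the row and column masses. A sketch on a hypothetical species-by-amino-acid count table (this is the textbook CA recipe, not the author's specific pipeline):

    ```python
    import numpy as np

    def correspondence_analysis(table, n_factors=2):
        """Basic CA of a two-way contingency table via SVD of the
        matrix of standardized residuals."""
        P = table / table.sum()
        r = P.sum(axis=1)                   # row masses
        c = P.sum(axis=0)                   # column masses
        S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
        U, sv, Vt = np.linalg.svd(S, full_matrices=False)
        # principal coordinates of rows and columns
        rows = (U[:, :n_factors] * sv[:n_factors]) / np.sqrt(r)[:, None]
        cols = (Vt.T[:, :n_factors] * sv[:n_factors]) / np.sqrt(c)[:, None]
        inertia = sv ** 2                   # variance carried by each factor
        return rows, cols, inertia

    # Hypothetical table: amino-acid counts (columns) per species (rows)
    table = np.random.default_rng(1).integers(1, 200, size=(12, 20)).astype(float)
    rows, cols, inertia = correspondence_analysis(table)
    print(inertia[:2] / inertia.sum())      # share of inertia on the first 2 factors
    ```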

  4. High-Rate Field Demonstration of Large-Alphabet Quantum Key Distribution

    DTIC Science & Technology

    2017-05-22

    [Figure 4: Comparison of our P&M DO-QKD results to previously published QKD system...device-independent QKD (39). CV/GMCS: distance record for continuous-variable QKD (40). BBM92: secure throughput record for two-dimensional entanglement-based QKD (41). COW: distance record for QKD (20).]

  5. Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide

    NASA Astrophysics Data System (ADS)

    Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.

    2011-04-01

    Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
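
    For orientation, the unbiased diffusion map that the paper extends kernelizes pairwise distances, normalizes away the sampling density, and takes the leading non-trivial eigenvectors of the resulting Markov operator. A minimal sketch (random placeholder snapshots; the umbrella-sampling reweighting that is the paper's actual contribution is not included):

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.linalg import eigh

    def diffusion_map(X, eps, n_coords=2):
        """Vanilla diffusion map: Gaussian kernel, density normalization,
        then the top non-trivial eigenvectors of the Markov matrix."""
        K = np.exp(-squareform(pdist(X)) ** 2 / eps)
        q = K.sum(axis=1)
        K = K / np.outer(q, q)                # remove sampling-density effects
        d = K.sum(axis=1)
        # symmetric conjugate of the Markov matrix for a stable eigensolve
        A = K / np.sqrt(np.outer(d, d))
        vals, vecs = eigh(A)
        order = np.argsort(vals)[::-1]
        vecs = vecs[:, order] / np.sqrt(d)[:, None]
        # skip the trivial constant eigenvector
        return vecs[:, 1:n_coords + 1] * vals[order][1:n_coords + 1]

    X = np.random.default_rng(0).standard_normal((500, 30))  # stand-in snapshots
    coords = diffusion_map(X, eps=np.median(pdist(X)) ** 2)  # heuristic bandwidth
    ```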

  6. New generic indexing technology

    NASA Technical Reports Server (NTRS)

    Freeston, Michael

    1996-01-01

    There has been no fundamental change in the dynamic indexing methods supporting database systems since the invention of the B-tree twenty-five years ago. And yet the whole classical approach to dynamic database indexing has long since become inappropriate and increasingly inadequate. We are moving rapidly from the conventional one-dimensional world of fixed-structure text and numbers to a multi-dimensional world of variable structures, objects and images, in space and time. But, even before leaving the confines of conventional database indexing, the situation is highly unsatisfactory. In fact, our research has led us to question the basic assumptions of conventional database indexing. We have spent the past ten years studying the properties of multi-dimensional indexing methods, and in this paper we draw the strands of a number of developments together - some quite old, some very new - to show how we now have the basis for a new generic indexing technology for the next generation of database systems.

  7. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
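
    The MMS mechanics are easy to demonstrate: choose an analytic solution, apply the governing operator to it symbolically, and feed the residual back as a source term so the chosen solution becomes exact. A sketch for a 1D advection-diffusion model equation (an illustrative stand-in for the low-Mach combustion system; the solution and coefficients are arbitrary choices) using sympy:

    ```python
    import sympy as sp

    x, t = sp.symbols("x t")
    u_m = sp.sin(sp.pi * x) * sp.exp(-t)          # manufactured solution
    a, nu = sp.Rational(1), sp.Rational(1, 100)   # advection speed, viscosity

    # residual of u_t + a*u_x - nu*u_xx = S defines the source term S
    S = sp.diff(u_m, t) + a * sp.diff(u_m, x) - nu * sp.diff(u_m, x, 2)
    S_func = sp.lambdify((x, t), sp.simplify(S))

    # S_func(x, t) is added to the solver's right-hand side; the code is then
    # verified by checking that the discrete solution converges to u_m at the
    # expected order under mesh refinement.
    print(sp.simplify(S), S_func(0.25, 0.1))
    ```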

  8. Surface-Sensitive Microwear Texture Analysis of Attrition and Erosion.

    PubMed

    Ranjitkar, S; Turan, A; Mann, C; Gully, G A; Marsman, M; Edwards, S; Kaidonis, J A; Hall, C; Lekkas, D; Wetselaar, P; Brook, A H; Lobbezoo, F; Townsend, G C

    2017-03-01

    Scale-sensitive fractal analysis of high-resolution 3-dimensional surface reconstructions of wear patterns has advanced our knowledge in evolutionary biology, and has opened up opportunities for translatory applications in clinical practice. To elucidate the microwear characteristics of attrition and erosion in worn natural teeth, we scanned 50 extracted human teeth using a confocal profiler at a high optical resolution (X-Y, 0.17 µm; Z < 3 nm). Our hypothesis was that microwear complexity would be greater in erosion and that anisotropy would be greater in attrition. The teeth were divided into 4 groups, including 2 wear types (attrition and erosion) and 2 locations (anterior and posterior teeth; n = 12 for each anterior group, n = 13 for each posterior group) for 2 tissue types (enamel and dentine). The raw 3-dimensional data cloud was subjected to a newly developed rigorous standardization technique to reduce interscanner variability as well as to filter anomalous scanning data. Linear mixed effects (regression) analyses conducted separately for the dependent variables, complexity and anisotropy, showed the following effects of the independent variables: significant interactions between wear type and tissue type (P = 0.0157 and P = 0.0003, respectively) and significant effects of location (P < 0.0001 and P = 0.0035, respectively). There were significant associations between complexity and anisotropy when the dependent variable was either complexity (P = 0.0003) or anisotropy (P = 0.0014). Our findings of greater complexity in erosion and greater anisotropy in attrition confirm our hypothesis. The greatest geometric means were noted in dentine erosion for complexity and dentine attrition for anisotropy. Dentine also exhibited microwear characteristics that were more consistent with wear types than enamel. Overall, our findings could complement macrowear assessment in dental clinical practice and research and could assist in the early detection and management of pathologic tooth wear.

  9. Sampling saddle points on a free energy surface

    NASA Astrophysics Data System (ADS)

    Samanta, Amit; Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark; E, Weinan

    2014-04-01

    Many problems in biology, chemistry, and materials science require knowledge of saddle points on free energy surfaces. These saddle points act as transition states and are the bottlenecks for transitions of the system between different metastable states. For simple systems in which the free energy depends on a few variables, the free energy surface can be precomputed, and saddle points can then be found using existing techniques. For complex systems, where the free energy depends on many degrees of freedom, this is not feasible. In this paper, we develop an algorithm for finding the saddle points on a high-dimensional free energy surface "on-the-fly" without requiring a priori knowledge of the free energy function itself. This is done by using the general strategy of the heterogeneous multi-scale method by applying a macro-scale solver, here the gentlest ascent dynamics algorithm, with the needed force and Hessian values computed on-the-fly using a micro-scale model such as molecular dynamics. The algorithm is capable of dealing with problems involving many coarse-grained variables. The utility of the algorithm is illustrated by studying the saddle points associated with (a) the isomerization transition of the alanine dipeptide using two coarse-grained variables, specifically the Ramachandran dihedral angles, and (b) the beta-hairpin structure of the alanine decamer using 20 coarse-grained variables, specifically the full set of Ramachandran angle pairs associated with each residue. For the alanine decamer, we obtain a detailed network showing the connectivity of the minima obtained and the saddle-point structures that connect them, which provides a way to visualize the gross features of the high-dimensional surface.
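
    In gentlest ascent dynamics the force is reflected along a direction v that simultaneously relaxes toward the softest Hessian mode, so the coupled (x, v) flow is attracted to saddle points rather than minima. A sketch on an analytic two-dimensional double well (finite-difference derivatives and explicit Euler; in the paper the force and Hessian on the free energy surface are instead estimated on-the-fly from molecular dynamics):

    ```python
    import numpy as np

    def grad(V, x, h=1e-5):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (V(x + e) - V(x - e)) / (2 * h)
        return g

    def hess(V, x, h=1e-4):
        n = len(x); H = np.zeros((n, n))
        for i in range(n):
            e = np.zeros(n); e[i] = h
            H[:, i] = (grad(V, x + e) - grad(V, x - e)) / (2 * h)
        return 0.5 * (H + H.T)

    def gad_step(V, x, v, dt=1e-2):
        """One explicit-Euler step of gentlest ascent dynamics: the force is
        reflected along v, and v relaxes toward the softest Hessian mode."""
        g = grad(V, x)
        x_new = x + dt * (-g + 2 * np.dot(g, v) * v)
        Hv = hess(V, x) @ v
        v_new = v + dt * (-Hv + np.dot(v, Hv) * v)
        return x_new, v_new / np.linalg.norm(v_new)

    V = lambda x: (x[0] ** 2 - 1) ** 2 + 2.0 * x[1] ** 2   # double well
    x, v = np.array([0.6, 0.3]), np.array([1.0, 0.0])
    for _ in range(2000):
        x, v = gad_step(V, x, v)
    print(x)   # converges to the saddle near (0, 0)
    ```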

  10. Dimensionality Assessment of Ordered Polytomous Items with Parallel Analysis

    ERIC Educational Resources Information Center

    Timmerman, Marieke E.; Lorenzo-Seva, Urbano

    2011-01-01

    Parallel analysis (PA) is an often-recommended approach for assessment of the dimensionality of a variable set. PA is known in different variants, which may yield different dimensionality indications. In this article, the authors considered the most appropriate PA procedure to assess the number of common factors underlying ordered polytomously…
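
    In its basic Horn form, PA retains as many factors as there are observed eigenvalues exceeding a chosen percentile of eigenvalues from random data of the same size; the variants the authors compare differ in details such as using polychoric rather than Pearson correlations for ordered polytomous items. A sketch of the basic version (the percentile and simulation count are conventional but arbitrary choices):

    ```python
    import numpy as np

    def parallel_analysis(X, n_sims=200, quantile=95, rng=None):
        """Horn-style parallel analysis: retain components whose observed
        eigenvalues exceed the chosen percentile of eigenvalues obtained
        from random data of the same size."""
        rng = rng or np.random.default_rng(0)
        n, p = X.shape
        obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        sims = np.empty((n_sims, p))
        for s in range(n_sims):
            R = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
            sims[s] = np.linalg.eigvalsh(R)[::-1]
        thresh = np.percentile(sims, quantile, axis=0)
        return int(np.sum(obs > thresh))

    X = np.random.default_rng(1).standard_normal((300, 10))
    # inject one common factor into the first three variables
    X[:, :3] += np.outer(np.random.default_rng(2).standard_normal(300), [1, 1, 1])
    print(parallel_analysis(X))   # expect 1 retained factor
    ```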

  11. Data re-arranging techniques leading to proper variable selections in high energy physics

    NASA Astrophysics Data System (ADS)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique, called 'data re-arranging', enables variable selection performed by means of classical statistical homogeneity tests such as the Kolmogorov-Smirnov, Anderson-Darling, or Pearson's chi-square divergence test. P-values of our variants of the homogeneity tests are investigated, and empirical verification on 46-dimensional high energy particle physics data sets is accomplished under the newly proposed (equiprobable) quantile binning. In particular, the homogeneity testing procedure is applied to re-arranged Monte Carlo samples and real DATA sets measured at the Tevatron particle accelerator at Fermilab in the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.

  12. Model-Free Conditional Independence Feature Screening For Ultrahigh Dimensional Data.

    PubMed

    Wang, Luheng; Liu, Jingyuan; Li, Yong; Li, Runze

    2017-03-01

    Feature screening plays an important role in ultrahigh dimensional data analysis. This paper is concerned with conditional feature screening when one is interested in detecting the association between the response and ultrahigh dimensional predictors (e.g., genetic markers) given a low-dimensional exposure variable (such as clinical variables or environmental variables). To this end, we first propose a new index to measure conditional independence, and further develop a conditional screening procedure based on the newly proposed index. We systematically study the theoretical property of the proposed procedure and establish the sure screening and ranking consistency properties under some very mild conditions. The newly proposed screening procedure enjoys several appealing properties: (a) it is model-free, in that its implementation does not require a specification of the model structure; (b) it is robust to heavy-tailed distributions or outliers in both directions of response and predictors; and (c) it can deal with both feature screening and conditional screening in a unified way. We study the finite sample performance of the proposed procedure by Monte Carlo simulations and further illustrate the proposed method through two real data examples.

  13. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Comparisons are made between correction factors computed with constant properties and with variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant.

  14. Three-Dimensional Magnetotelluric Imaging of the Cascadia Subduction Zone with an Amphibious Array

    NASA Astrophysics Data System (ADS)

    Egbert, G. D.; Yang, B.; Bedrosian, P.; Kelbert, A.; Key, K.; Livelybrooks, D.; Parris, B. A.; Schultz, A.

    2017-12-01

    We present results from three-dimensional inversion of an amphibious magnetotelluric (MT) array consisting of 71 offshore and 75 onshore sites in the central part of Cascadia, to image down-dip and along strike variations of electrical conductivity, and to constrain the 3D distribution of fluids and melt in the subduction zone. The array is augmented by EarthScope TA MT data and legacy 2D profiles providing sparser coverage of western WA, OR, and northern CA. The prior model for the inversion includes ocean bathymetry, conductive marine sediments, and a resistive subducting plate, with geometry derived from the model of McCrory et al. (2012) and seismic tomography. Highly conductive features appear just above the interface with the a priori resistive plate in three zones. (1) In the area with marine MT data a conductive layer, which we associate with fluid-rich decollement and subduction channel sediments, extends eastward from the trench to underthrust the seaward edge of Siletzia, which is clearly seen as a thick crustal resistor. The downdip extent of the underthrust conductive layer is a remarkably uniform 35 km. (2) High conductivities, consistent with metamorphic fluids associated with eclogitization, occur near the forearc mantle corner. Conductivity is highly variable along strike, organized in a series of E-W to diagonal elongated conductive/resistive structures, whose significance remains enigmatic. (3) High conductivities associated with fluids and melts are found in the backarc, again exhibiting substantial along strike variability.

  15. Some applications of the multi-dimensional fractional order for the Riemann-Liouville derivative

    NASA Astrophysics Data System (ADS)

    Ahmood, Wasan Ajeel; Kiliçman, Adem

    2017-01-01

    In this paper, the aim is to study a theorem for the one-dimensional space-time fractional derivative and to generalize, via a table of the fractional Laplace transforms of some elementary functions, results for the one-dimensional fractional Laplace transform so that they remain valid in the multi-dimensional case, for which the definition of the multi-dimensional fractional Laplace transform is given. This study dedicates the one-dimensional fractional Laplace transform to functions of a single independent variable and develops it into the multi-dimensional fractional Laplace transform based on the modified Riemann-Liouville derivative.
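
    As background for the definitions involved (stated here for reference, not as the paper's exact formulation), the classical Riemann-Liouville fractional derivative of order α, with n - 1 < α < n, is

    ```latex
    D^{\alpha}_{t} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,
      \frac{d^{n}}{dt^{n}} \int_{0}^{t} (t-\tau)^{\,n-\alpha-1}\, f(\tau)\, d\tau
    ```

    The modified (Jumarie-type) Riemann-Liouville derivative commonly used in this line of work replaces f(τ) by f(τ) - f(0) in the integrand (for 0 < α < 1), so that the derivative of a constant vanishes.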

  16. Assimilating bio-optical glider data during a phytoplankton bloom in the southern Ross Sea

    NASA Astrophysics Data System (ADS)

    Kaufman, Daniel E.; Friedrichs, Marjorie A. M.; Hemmings, John C. P.; Smith, Walker O., Jr.

    2018-01-01

    The Ross Sea is a region characterized by high primary productivity in comparison to other Antarctic coastal regions, and its productivity is marked by considerable variability both spatially (1-50 km) and temporally (days to weeks). This variability presents a challenge for inferring phytoplankton dynamics from observations that are limited in time or space, which is often the case due to logistical limitations of sampling. To better understand the spatiotemporal variability in Ross Sea phytoplankton dynamics and to determine how restricted sampling may skew dynamical interpretations, high-resolution bio-optical glider measurements were assimilated into a one-dimensional biogeochemical model adapted for the Ross Sea. The assimilation of data from the entire glider track using the micro-genetic and local search algorithms in the Marine Model Optimization Testbed improves the model-data fit by ~50 %, generating rates of integrated primary production of 104 g C m-2 yr-1 and export at 200 m of 27 g C m-2 yr-1. Assimilating glider data from three different latitudinal bands and three different longitudinal bands results in minimal changes to the simulations, improves the model-data fit with respect to unassimilated data by ~35 %, and confirms that analyzing these glider observations as a time series via a one-dimensional model is reasonable on these scales. Whereas assimilating the full glider data set produces well-constrained simulations, assimilating subsampled glider data at a frequency consistent with cruise-based sampling results in a wide range of primary production and export estimates. These estimates depend strongly on the timing of the assimilated observations, due to the presence of high mesoscale variability in this region. Assimilating surface glider data subsampled at a frequency consistent with available satellite-derived data results in 40 % lower carbon export, primarily resulting from optimized rates generating more slowly sinking diatoms. This analysis highlights the need for the strategic consideration of the impacts of data frequency, duration, and coverage when combining observations with biogeochemical modeling in regions with strong mesoscale variability.

  17. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
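
    The computational point of the dual form is that everything is expressed through n x n Gram matrices, so the cost scales with the number of samples rather than the number of variables. Below is a sketch of one standard regularized dual (kernel) CCA formulation with linear kernels, stated as a generalized eigenproblem (synthetic data; the ridge scaling and regularization value are illustrative choices, and the paper's exact estimator and cross-validation are not reproduced):

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def regularized_dual_cca(X, Y, reg=0.5, n_pairs=2):
        """Regularized CCA in dual form: all computations use the n x n
        Gram matrices, so the number of variables never enters directly."""
        n = X.shape[0]
        Kx, Ky = X @ X.T, Y @ Y.T            # linear-kernel Gram matrices
        kx = reg * np.trace(Kx) / n          # ridge scaled to the kernel's size
        ky = reg * np.trace(Ky) / n
        I, Z = np.eye(n), np.zeros((n, n))
        A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
        B = np.block([[(Kx + kx * I) @ (Kx + kx * I), Z],
                      [Z, (Ky + ky * I) @ (Ky + ky * I)]])
        vals, vecs = eigh(A, B)              # generalized symmetric eigenproblem
        order = np.argsort(vals)[::-1][:n_pairs]
        alpha, beta = vecs[:n, order], vecs[n:, order]
        return Kx @ alpha, Ky @ beta         # canonical variates of both sets

    rng = np.random.default_rng(0)
    Z = rng.standard_normal((60, 1))                     # shared latent signal
    X = Z @ rng.standard_normal((1, 5000)) + 0.5 * rng.standard_normal((60, 5000))
    Y = Z @ rng.standard_normal((1, 8000)) + 0.5 * rng.standard_normal((60, 8000))
    u, v = regularized_dual_cca(X, Y)
    print(np.corrcoef(u[:, 0], v[:, 0])[0, 1])           # close to 1
    ```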

  18. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast and small scales, ranked by layers, which interact with the macroscopic (observed) variables of large and slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016]. In particular, new results from DAH-MSM modeling and prediction of Arctic sea ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.

  19. PRIM versus CART in subgroup discovery: when patience is harmful.

    PubMed

    Abu-Hanna, Ameen; Nannings, Barry; Dongelmans, Dave; Hasman, Arie

    2010-10-01

    We systematically compare the established algorithms CART (Classification and Regression Trees) and PRIM (Patient Rule Induction Method) in a subgroup discovery task on a large real-world high-dimensional clinical database. Contrary to current conjectures, PRIM's performance was generally inferior to CART's. PRIM often considered "peeling off" a large chunk of data at a value of a relevant discrete ordinal variable unattractive, ultimately missing an important subgroup. This finding has considerable significance in clinical medicine, where ordinal scores are ubiquitous. PRIM's utility in clinical databases would increase when global information about (ordinal) variables is better put to use and when the search algorithm keeps track of alternative solutions.
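
    PRIM's "patience" is its peeling step: each iteration shaves only a thin alpha-slice from one end of one variable, keeping the shave that most increases the in-box response mean; the paper's point is that exactly this thin-slice behavior can be harmful at discrete ordinal variables. A sketch of the top-down peeling loop on continuous synthetic data (real PRIM adds pasting and validated box selection):

    ```python
    import numpy as np

    def prim_peel(X, y, alpha=0.05, min_support=0.1):
        """Top-down PRIM peeling: repeatedly shave the alpha-fraction slice
        (from the low or high end of one variable) that maximizes the mean
        of the response inside the remaining box."""
        inside = np.ones(len(y), dtype=bool)
        box = [(-np.inf, np.inf)] * X.shape[1]
        while inside.mean() > min_support:
            best = None
            for j in range(X.shape[1]):
                xj = X[inside, j]
                for side, cut in (("lo", np.quantile(xj, alpha)),
                                  ("hi", np.quantile(xj, 1 - alpha))):
                    keep = inside & ((X[:, j] > cut) if side == "lo" else (X[:, j] < cut))
                    if keep.sum() and (best is None or y[keep].mean() > best[0]):
                        best = (y[keep].mean(), j, side, cut, keep)
            if best is None or best[0] <= y[inside].mean():
                break                        # no peel improves the box mean
            _, j, side, cut, inside = best
            box[j] = (cut, box[j][1]) if side == "lo" else (box[j][0], cut)
        return box, inside

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(2000, 5))
    y = ((X[:, 0] > 0.7) & (X[:, 1] > 0.6)).astype(float) + 0.1 * rng.standard_normal(2000)
    box, members = prim_peel(X, y)
    print(box[0], box[1], members.mean())    # box edges near 0.7 and 0.6
    ```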

  20. A Selective Review of Group Selection in High-Dimensional Models

    PubMed Central

    Huang, Jian; Breheny, Patrick; Ma, Shuangge

    2013-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study. PMID:24174707
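
    As a concrete instance of the penalties reviewed, the group LASSO penalizes whole groups of coefficients through their Euclidean norms, so its proximal operator shrinks or zeroes out entire groups at once; the concave group penalties change only the shrinkage rule. A sketch of proximal-gradient (ISTA) group LASSO on synthetic data (group layout, lambda, and iteration count are arbitrary choices):

    ```python
    import numpy as np

    def group_soft_threshold(beta, groups, lam):
        """Proximal operator of lam * sum_g sqrt(|g|) * ||beta_g||_2:
        each group is shrunk toward zero or zeroed out as a whole."""
        out = beta.copy()
        for g in groups:
            norm = np.linalg.norm(beta[g])
            t = lam * np.sqrt(len(g))
            out[g] = 0.0 if norm <= t else (1 - t / norm) * beta[g]
        return out

    def group_lasso_ista(X, y, groups, lam, n_iter=500):
        """Proximal gradient (ISTA) for least squares + group-LASSO penalty."""
        L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            grad = X.T @ (X @ beta - y)
            beta = group_soft_threshold(beta - grad / L, groups, lam / L)
        return beta

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 12))
    groups = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
    y = X[:, :4] @ np.array([2., -1., 1.5, 0.5]) + 0.1 * rng.standard_normal(100)
    print(np.round(group_lasso_ista(X, y, groups, lam=5.0), 2))  # groups 2, 3 -> 0
    ```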

  1. Low frequency acoustic properties of Posidonia oceanica seagrass leaf blades

    PubMed Central

    Johnson, Jay R.; Venegas, Gabriel R.; Wilson, Preston S.; Hermand, Jean-Pierre

    2017-01-01

    The acoustics of seagrass meadows impacts naval and oceanographic sonar applications. To study this environment, a one-dimensional resonator was used to assess the low-frequency (1–5 kHz) acoustic response of the leaf blades of the Mediterranean seagrass Posidonia oceanica in water. Three separate collections of plants from Crete, Greece, and Sicily, Italy were investigated. A high consistency in effective sound speed was observed within each collection while a strong variability was observed between different collections. Average size, mass, and epiphytic coverage within each collection were quantified, and discoloration and stiffness are discussed qualitatively with respect to the observed acoustic variability. PMID:28618796

  2. A particle swarm optimization variant with an inner variable learning strategy.

    PubMed

    Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin

    2014-01-01

    Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscapes. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping out strategy is adaptive in its nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.

  3. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis is employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537

  4. Three-Dimensional Navier-Stokes Calculations Using the Modified Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-lyan

    2007-01-01

    The space-time conservation element solution element (CESE) method is modified to address the robustness issues of high-aspect-ratio, viscous, near-wall meshes. In this new approach, the dependent variable gradients are evaluated using element edges and the corresponding neighboring solution elements while keeping the original flux integration procedure intact. As such, the excellent flux conservation property is retained and the new edge-based gradients evaluation significantly improves the robustness for high-aspect ratio meshes frequently encountered in three-dimensional, Navier-Stokes calculations. The order of accuracy of the proposed method is demonstrated for oblique acoustic wave propagation, shock-wave interaction, and hypersonic flows over a blunt body. The confirmed second-order convergence along with the enhanced robustness in handling hypersonic blunt body flow calculations makes the proposed approach a very competitive CFD framework for 3D Navier-Stokes simulations.

  5. OBSERVATIONAL SIGNATURES OF CONVECTIVELY DRIVEN WAVES IN MASSIVE STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aerts, C.; Rogers, T. M.

    We demonstrate observational evidence for the occurrence of convectively driven internal gravity waves (IGWs) in young massive O-type stars observed with high-precision CoRoT space photometry. This evidence results from a comparison between velocity spectra based on two-dimensional hydrodynamical simulations of IGWs in a differentially rotating massive star and the observed spectra. We also show that the velocity spectra caused by IGWs may lead to detectable line-profile variability and explain the occurrence of macroturbulence in the observed line profiles of OB stars. Our findings provide predictions that can readily be tested by including a sample of bright, slowly and rapidly rotating OB-type stars in the scientific program of the K2 mission, accompanied by high-precision spectroscopy, and their confrontation with multi-dimensional hydrodynamic simulations of IGWs for various masses and ages.

  6. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in space, but also imposes spatial and temporal regularization to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice, such as the Tikhonov zero-order, Tikhonov first-order, and L1 first-order regularization methods.
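
    For orientation on the baselines named above: zero-order Tikhonov penalizes the size of the solution, first-order Tikhonov penalizes its gradient, and each amounts to a single regularized linear solve. A sketch on a hypothetical ill-conditioned smoothing operator (the forward matrix, noise level, and lambda are illustrative, not the torso-heart geometry of the paper):

    ```python
    import numpy as np

    def tikhonov(A, b, lam, order=0):
        """Solve min ||A x - b||^2 + lam^2 ||L x||^2 for the inverse problem
        b = A x; L = identity (order 0) or a first-difference matrix (order 1)."""
        n = A.shape[1]
        L = np.eye(n) if order == 0 else np.diff(np.eye(n), axis=0)
        return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ b)

    # Hypothetical ill-conditioned forward operator (Gaussian smoothing kernel)
    rng = np.random.default_rng(0)
    n = 80
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    b = A @ x_true + 0.01 * rng.standard_normal(n)
    for order in (0, 1):
        x_hat = tikhonov(A, b, lam=0.1, order=order)
        print(order, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```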

  7. Can a four-dimensional model of occupational commitment help to explain intent to leave the emergency medical service occupation?

    PubMed

    Blau, Gary; Chapman, Susan; Pred, Robert S; Lopez, Andrea

    2009-01-01

    Using a sample of 854 emergency medical service (EMS) respondents, this study supported a four-dimensional model of occupational commitment comprising affective, normative, accumulated-costs, and limited-alternatives dimensions. When personal and job-related variables were controlled, general job satisfaction emerged as a negative correlate of intent to leave. Controlling for personal, job-related, and job satisfaction variables, affective and limited-alternatives commitment were each significant negative correlates. There were small but significant interactive effects among the commitment dimensions in accounting for additional intent-to-leave variance, including a four-way interaction. "High" versus "low" cumulative commitment subgroups were created by selecting respondents who were equal to or above ("high") versus below ("low") the median on each of the four occupational commitment dimensions. A t-test indicated that low cumulative commitment EMS respondents were more likely to intend to leave than high cumulative commitment EMS respondents.

  8. Color visualization for fluid flow prediction

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Speray, D. E.

    1982-01-01

    High-resolution raster scan color graphics allow variables to be presented as a continuum, in a color-coded picture that is referenced to a geometry such as a flow field grid or a boundary surface. Software is used to map a scalar variable such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color-coded. When all pixels for the field of view are color-defined, a picture is played back from a memory device onto a television screen.

  9. High Sensitivity Gas Detection Using a Macroscopic Three-Dimensional Graphene Foam Network

    PubMed Central

    Yavari, Fazel; Chen, Zongping; Thomas, Abhay V.; Ren, Wencai; Cheng, Hui-Ming; Koratkar, Nikhil

    2011-01-01

    Nanostructures are known to be exquisitely sensitive to the chemical environment and offer ultra-high sensitivity for gas-sensing. However, the fabrication and operation of devices that use individual nanostructures for sensing is complex, expensive and suffers from poor reliability due to contamination and large sample-to-sample variability. By contrast, conventional solid-state and conducting-polymer sensors offer excellent reliability but suffer from reduced sensitivity at room temperature. Here we report a macroscopic graphene foam-like three-dimensional network which combines the best of both worlds. The walls of the foam are comprised of few-layer graphene sheets, resulting in high sensitivity; we demonstrate parts-per-million level detection of NH3 and NO2 in air at room temperature. Further, the foam is a mechanically robust and flexible macro-scale network that is easy to contact (without lithography) and can rival the durability and affordability of traditional sensors. Moreover, Joule heating expels chemisorbed molecules from the foam's surface, leading to fully reversible and low-power operation. PMID:22355681

  10. Dynamical properties and extremes of Northern Hemisphere climate fields over the past 60 years

    NASA Astrophysics Data System (ADS)

    Faranda, Davide; Messori, Gabriele; Alvarez-Castro, M. Carmen; Yiou, Pascal

    2017-12-01

    Atmospheric dynamics are described by a set of partial differential equations yielding an infinite-dimensional phase space. However, the actual trajectories followed by the system appear to be constrained to a finite-dimensional phase space, i.e. a strange attractor. The dynamical properties of this attractor are difficult to determine due to the complex nature of atmospheric motions. A first step to simplify the problem is to focus on observables which affect - or are linked to phenomena which affect - human welfare and activities, such as sea-level pressure, 2 m temperature, and precipitation frequency. We make use of recent advances in dynamical systems theory to estimate two instantaneous dynamical properties of the above fields for the Northern Hemisphere: local dimension and persistence. We then use these metrics to characterize the seasonality of the different fields and their interplay. We further analyse the large-scale anomaly patterns corresponding to phase-space extremes - namely time steps at which the fields display extremes in their instantaneous dynamical properties. The analysis is based on the NCEP/NCAR reanalysis data, over the period 1948-2013. The results show that (i) despite the high dimensionality of atmospheric dynamics, the Northern Hemisphere sea-level pressure and temperature fields can on average be described by roughly 20 degrees of freedom; (ii) the precipitation field has a higher dimensionality; and (iii) the seasonal forcing modulates the variability of the dynamical indicators and affects the occurrence of phase-space extremes. We further identify a number of robust correlations between the dynamical properties of the different variables.
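
    The local (instantaneous) dimension used here follows from an extreme-value result: exceedances of the negative log-distance between a reference state and the rest of the trajectory are approximately exponential, with scale equal to the inverse local dimension. A sketch of that estimator (the 0.98 threshold is a typical but arbitrary choice; the sanity check uses synthetic white noise rather than reanalysis fields):

    ```python
    import numpy as np

    def local_dimension(trajectory, ref, q=0.98):
        """Estimate the instantaneous (local) dimension around a reference
        state: exceedances of -log(distance) above a high threshold are
        approximately exponential with scale 1/d."""
        dist = np.linalg.norm(trajectory - ref, axis=1)
        dist = dist[dist > 0]                 # drop the reference state itself
        g = -np.log(dist)
        u = np.quantile(g, q)
        exceed = g[g > u] - u
        return 1.0 / exceed.mean()            # MLE of the exponential scale

    # Sanity check on white noise in d dimensions: the estimate is ~ d
    rng = np.random.default_rng(0)
    traj = rng.standard_normal((20000, 5))
    print(local_dimension(traj, traj[0]))
    ```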

  11. Bedload and Total Load Sediment Transport Equations for Rough Open-Channel Flow

    NASA Astrophysics Data System (ADS)

    Abrahams, A. D.; Gao, P.

    2001-12-01

    The total sediment load transported by an open-channel flow may be divided into bedload and suspended load. Bedload transport occurs by saltation at low shear stress and by sheetflow at high shear stress. Dimensional analysis is used to identify the dimensionless variables that control the transport rate of noncohesive sediments over a plane bed, and regression analysis is employed to isolate the significant variables and determine the values of the coefficients. In the general bedload transport equation (i.e. for saltation and sheetflow) the dimensionless bedload transport rate is a function of the dimensionless shear stress, the friction factor, and an efficiency coefficient. For sheetflow the last term approaches 1, so that the bedload transport rate becomes a function of just the dimensionless shear stress and the friction factor. The dimensional analysis indicates that the dimensionless total load transport rate is a function of the dimensionless bedload transport rate and the dimensionless settling velocity of the sediment. Predicted values of the transport rates are graphed against the computed values of these variables for 505 flume experiments reported in the literature. These graphs indicate that the equations developed in this study give good unbiased predictions of both the bedload transport rate and total load transport rate over a wide range of conditions.

  12. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the "Concurrent Adaptive Sampling (CAS) algorithm," has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.

  13. A data-driven approach for modeling post-fire debris-flow volumes and their uncertainty

    USGS Publications Warehouse

    Friedel, Michael J.

    2011-01-01

    This study demonstrates the novel application of genetic programming to evolve nonlinear post-fire debris-flow volume equations from variables associated with a data-driven conceptual model of the western United States. The search space is constrained using a multi-component objective function that simultaneously minimizes root-mean squared and unit errors for the evolution of fittest equations. An optimization technique is then used to estimate the limits of nonlinear prediction uncertainty associated with the debris-flow equations. In contrast to a published multiple linear regression three-variable equation, linking basin area with slopes greater or equal to 30 percent, burn severity characterized as area burned moderate plus high, and total storm rainfall, the data-driven approach discovers many nonlinear and several dimensionally consistent equations that are unbiased and have less prediction uncertainty. Of the nonlinear equations, the best performance (lowest prediction uncertainty) is achieved when using three variables: average basin slope, total burned area, and total storm rainfall. Further reduction in uncertainty is possible for the nonlinear equations when dimensional consistency is not a priority and by subsequently applying a gradient solver to the fittest solutions. The data-driven modeling approach can be applied to nonlinear multivariate problems in all fields of study.

  14. Geomorphic control of landscape carbon accumulation

    USGS Publications Warehouse

    Rosenbloom, N.A.; Harden, J.W.; Neff, J.C.; Schimel, D.S.

    2006-01-01

    We use the CREEP process-response model to simulate soil organic carbon accumulation in an undisturbed prairie site in Iowa. Our primary objectives are to identify spatial patterns of carbon accumulation, and explore the effect of erosion on basin-scale C accumulation. Our results point to two general findings. First, redistribution of soil carbon by erosion results in a net increase in basin-wide carbon storage relative to a noneroding environment. Landscape-average mean residence times are increased in an eroding landscape owing to the burial/preservation of otherwise labile C. Second, field observations taken along a slope transect may overlook significant intraslope variations in carbon accumulation. Spatial patterns of modeled deep C accumulation are complex. While surface carbon with its relatively short equilibration time is predictable from surface properties, deep carbon is strongly influenced by the landscape's geomorphic and climatic history, resulting in wide spatial variability. Convergence and divergence associated with upland swales and interfluves result in bimodal carbon distributions in upper and mid slopes; variability in carbon storage within modeled mid slopes was as high as simulated differences between erosional shoulders and depositional valley bottoms. The bimodality of mid-slope C variability in the model suggests that a three-dimensional sampling strategy is preferable to the traditional two-dimensional analog or "catena" approach. Copyright 2006 by the American Geophysical Union.

  15. Turbulent mass flux closure modeling for variable density turbulence in the wake of an air-entraining transom stern

    NASA Astrophysics Data System (ADS)

    Hendrickson, Kelli; Yue, Dick

    2016-11-01

    This work presents the development and a priori testing of closure models for the incompressible highly-variable density turbulent (IHVDT) flow in the near wake region of a transom stern. This complex, three-dimensional flow includes three regions with distinctly different flow behavior: (i) the convergent corner waves that originate from the body and collide on the ship center plane; (ii) the "rooster tail" that forms from the collision; and (iii) the diverging wave train. The characteristics of these regions involve violent free-surface flows and breaking waves with significant turbulent mass flux (TMF) at Atwood number At = (ρ2 - ρ1)/(ρ2 + ρ1) ≈ 1, for which there is little guidance in turbulence closure modeling for the momentum and scalar transport along the wake. Utilizing datasets from high-resolution simulations of the near wake of a canonical three-dimensional transom stern using conservative Volume-of-Fluid (cVOF), implicit Large Eddy Simulation (iLES), and Boundary Data Immersion Method (BDIM), we develop explicit algebraic turbulent mass flux closure models that incorporate the most relevant physical processes. Performance of these models in predicting the turbulent mass flux in all three regions of the wake will be presented. Sponsored by the Office of Naval Research.
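
    As an illustration of the a priori idea, the sketch below computes a turbulent mass flux correlation <ρ'u'> and an Atwood number from synthetic density and velocity fields; the fields and the averaging direction are stand-ins, not the cVOF/iLES data.

    ```python
    import numpy as np

    # A priori computation of the turbulent mass flux (TMF) from resolved
    # density and velocity fields, as one might extract from simulation data.
    rng = np.random.default_rng(2)
    rho = 1.0 + 0.8 * rng.random((64, 64))   # mixed-fluid density field
    u = rng.normal(size=(64, 64))            # streamwise velocity field

    rho_bar = rho.mean(axis=0)               # average over a homogeneous direction
    u_bar = u.mean(axis=0)
    tmf = ((rho - rho_bar) * (u - u_bar)).mean(axis=0)   # <rho' u'>

    atwood = (rho.max() - rho.min()) / (rho.max() + rho.min())
    print(f"Atwood number ~ {atwood:.2f}; TMF range [{tmf.min():.3f}, {tmf.max():.3f}]")
    ```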

  16. Modeling variable density turbulence in the wake of an air-entraining transom stern

    NASA Astrophysics Data System (ADS)

    Hendrickson, Kelli; Yue, Dick

    2015-11-01

    This work presents a priori testing of closure models for the incompressible highly-variable density turbulent (IHVDT) flows in the near wake region of a transom stern. This three-dimensional flow comprises convergent corner waves that originate from the body and collide on the ship center plane, forming the ``rooster tail'' that then widens to form the divergent wave train. These violent free-surface flows and breaking waves are characterized by significant turbulent mass flux (TMF) at Atwood number At = (ρ2 - ρ1)/(ρ2 + ρ1) ≈ 1, for which there is little guidance in turbulence closure modeling for the momentum and scalar transport along the wake. To this end, this work utilizes high-resolution simulations of the near wake of a canonical three-dimensional transom stern using conservative Volume-of-Fluid (cVOF), implicit Large Eddy Simulation (iLES), and Boundary Data Immersion Method (BDIM) to capture the turbulence and large scale air entrainment. Analysis of the simulation results across and along the wake for the TMF budget and turbulent anisotropy provides the physical basis for the development of multiphase turbulence closure models. Performance of isotropic and anisotropic turbulent mass flux closure models will be presented. Sponsored by the Office of Naval Research.

  17. Application of 3-D Urbanization Index to Assess Impact of Urbanization on Air Temperature

    NASA Astrophysics Data System (ADS)

    Wu, Chih-Da; Lung, Shih-Chun Candice

    2016-04-01

    The lack of appropriate methodologies and indicators to quantify three-dimensional (3-D) building constructions poses challenges to authorities and urban planners when formulating policies to reduce health risks due to heat stress. This study evaluated the applicability of an innovative three-dimensional Urbanization Index (3DUI), based on a remote sensing database with a 5 m spatial resolution of 3-D man-made constructions, in representing intra-urban variability of air temperature by assessing the correlation of 3DUI with air temperature from a 3-D perspective. The results showed robust high correlation coefficients, ranging from 0.83 to 0.85, obtained within the 1,000 m circular buffer around weather stations regardless of season, year, or spatial location. Our findings demonstrated not only the strength of 3DUI in representing intra-urban air-temperature variability, but also its great potential for heat stress assessment within cities. In view of the maximum correlation between building volumes within the 1,000 m circular buffer and ambient air temperature, urban planning should consider setting ceilings for man-made construction volume in each 2 × 2 km2 residential community for thermal environment regulation, especially in Asian metropolises with high population density in city centers.
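
    A sketch of the buffer analysis described above, assuming a synthetic 5 m raster of construction volume and synthetic station temperatures: sum the volume within a 1,000 m circular buffer around each station and compute the Pearson correlation.

    ```python
    import numpy as np

    # Synthetic stand-ins for the 5 m construction-volume raster, station
    # locations (grid indices), and station-mean air temperatures.
    rng = np.random.default_rng(3)
    res = 5.0                                    # raster resolution, m
    volume = rng.random((400, 400)) * 50.0       # building volume per cell
    stations = rng.integers(50, 350, size=(20, 2))
    temperature = rng.normal(25.0, 1.0, size=20)

    yy, xx = np.mgrid[0:400, 0:400]
    buffered = []
    for sy, sx in stations:
        # circular buffer of radius 1,000 m around the station
        mask = (yy - sy) ** 2 + (xx - sx) ** 2 <= (1000.0 / res) ** 2
        buffered.append(volume[mask].sum())

    r = np.corrcoef(buffered, temperature)[0, 1]
    print(f"Pearson r between buffered volume and temperature: {r:.2f}")
    ```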

  18. Predicting Achievable Fundamental Frequency Ranges in Vocalization Across Species

    PubMed Central

    Titze, Ingo; Riede, Tobias; Mau, Ted

    2016-01-01

    Vocal folds are used as sound sources in various species, but it is unknown how vocal fold morphologies are optimized for different acoustic objectives. Here we identify two main variables affecting range of vocal fold vibration frequency, namely vocal fold elongation and tissue fiber stress. A simple vibrating string model is used to predict fundamental frequency ranges across species of different vocal fold sizes. While average fundamental frequency is predominantly determined by vocal fold length (larynx size), range of fundamental frequency is facilitated by (1) laryngeal muscles that control elongation and by (2) nonlinearity in tissue fiber tension. One adaptation that would increase fundamental frequency range is greater freedom in joint rotation or gliding of two cartilages (thyroid and cricoid), so that vocal fold length change is maximized. Alternatively, tissue layers can develop to bear a disproportionate fiber tension (i.e., a ligament with high density collagen fibers), increasing the fundamental frequency range and thereby vocal versatility. The range of fundamental frequency across species is thus not simply one-dimensional, but can be conceptualized as the dependent variable in a multi-dimensional morphospace. In humans, this could allow for variations that could be clinically important for voice therapy and vocal fold repair. Alternative solutions could also have importance in vocal training for singing and other highly-skilled vocalizations. PMID:27309543
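
    The vibrating-string estimate mentioned above can be written in a few lines: for a string of length L under fiber stress sigma and tissue density rho, F0 = sqrt(sigma/rho)/(2L). The lengths and stress range below are illustrative assumptions, not the paper's measured values.

    ```python
    import numpy as np

    # Fundamental frequency of an ideal string, F0 = sqrt(sigma / rho) / (2 L).
    rho = 1040.0                       # tissue density, kg/m^3
    lengths = [0.006, 0.016]           # vocal fold lengths, m (small vs. large larynx)
    sigma = np.array([1e3, 1e5])       # fiber stress range, Pa (slack vs. stretched)

    for L in lengths:
        f_lo, f_hi = np.sqrt(sigma / rho) / (2 * L)
        print(f"L = {L*1000:.0f} mm: F0 range {f_lo:.0f}-{f_hi:.0f} Hz")
    ```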

  19. Application of 3-D Urbanization Index to Assess Impact of Urbanization on Air Temperature

    PubMed Central

    Wu, Chih-Da; Lung, Shih-Chun Candice

    2016-01-01

    The lack of appropriate methodologies and indicators to quantify three-dimensional (3-D) building constructions poses challenges to authorities and urban planners when formulating policies to reduce health risks due to heat stress. This study evaluated the applicability of an innovative three-dimensional Urbanization Index (3DUI), based on a remote sensing database with a 5 m spatial resolution of 3-D man-made constructions, in representing intra-urban variability of air temperature by assessing the correlation of 3DUI with air temperature from a 3-D perspective. The results showed robust high correlation coefficients, ranging from 0.83 to 0.85, obtained within the 1,000 m circular buffer around weather stations regardless of season, year, or spatial location. Our findings demonstrated not only the strength of 3DUI in representing intra-urban air-temperature variability, but also its great potential for heat stress assessment within cities. In view of the maximum correlation between building volumes within the 1,000 m circular buffer and ambient air temperature, urban planning should consider setting ceilings for man-made construction volume in each 2 × 2 km2 residential community for thermal environment regulation, especially in Asian metropolises with high population density in city centers. PMID:27079537

  20. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
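
    A minimal sketch of the combined idea, assuming a toy model: augment the original inputs with principal components, split runs into behavioral and non-behavioral sets by an output threshold, and rank variables by a two-sample Kolmogorov-Smirnov statistic, as in Monte Carlo Filtering. The model and threshold here are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp
    from sklearn.decomposition import PCA

    # Inputs whose distributions differ most between "behavioral" runs
    # (output in the target subset) and the rest are flagged as influential.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(2000, 6))                 # controlled input variables
    y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

    Z = np.hstack([X, PCA(n_components=3).fit_transform(X)])  # originals + PCs
    behavioral = y > np.quantile(y, 0.9)           # target output subset

    for j in range(Z.shape[1]):
        stat, _ = ks_2samp(Z[behavioral, j], Z[~behavioral, j])
        print(f"variable {j}: KS = {stat:.2f}")
    ```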

  1. Optimal dimensionality reduction of complex dynamics: the chess game as diffusion on a free-energy landscape.

    PubMed

    Krivov, Sergei V

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game--the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  2. Three dimensional elements with Lagrange multipliers for the modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Rok; Lee, Byung-Chai

    2018-07-01

    Three dimensional mixed elements for the modified couple stress theory are proposed. The C1 continuity for the displacement field, which is required because of the curvature term in the variational form of the theory, is satisfied weakly by introducing a supplementary rotation as an independent variable and constraining the relation between the rotation and the displacement with a Lagrange multiplier vector. An additional constraint on the deviatoric curvature is also considered for three dimensional problems. Weak forms with one constraint and two constraints are derived, and four elements satisfying convergence criteria are developed by applying different approximations to each field of independent variables. The elements pass a patch test for three dimensional problems. Numerical examples show that the additional constraint could be considered essential for the three dimensional elements, and one of the elements is recommended for practical applications via a comparison of the performances of the elements. In addition, all the proposed elements can represent the size effect well.

  3. Optimal dimensionality reduction of complex dynamics: The chess game as diffusion on a free-energy landscape

    NASA Astrophysics Data System (ADS)

    Krivov, Sergei V.

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game—the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  4. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. Although these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. This is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis study of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  5. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
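
    A sketch of the tuning idea under a substitution: scikit-learn does not implement the MCP penalty, so an l1-penalized logistic regression stands in here; the CV-AUC selection over a penalty grid is the same in spirit, not the paper's coordinate descent solver.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Select the penalty level that maximizes cross-validated AUC.
    X, y = make_classification(n_samples=300, n_features=500, n_informative=10,
                               random_state=0)
    grid = np.logspace(-2, 1, 10)        # inverse penalty strengths
    scores = [cross_val_score(
                  LogisticRegression(penalty="l1", solver="liblinear", C=C),
                  X, y, cv=5, scoring="roc_auc").mean()
              for C in grid]
    best = grid[int(np.argmax(scores))]
    print(f"selected C = {best:.3g}, CV-AUC = {max(scores):.3f}")
    ```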

  6. Characterizing Arctic Sea Ice Topography Using High-Resolution IceBridge Data

    NASA Technical Reports Server (NTRS)

    Petty, Alek; Tsamados, Michel; Kurtz, Nathan; Farrell, Sinead; Newman, Thomas; Harbeck, Jeremy; Feltham, Daniel; Richter-Menge, Jackie

    2016-01-01

    We present an analysis of Arctic sea ice topography using high resolution, three-dimensional, surface elevation data from the Airborne Topographic Mapper, flown as part of NASA's Operation IceBridge mission. Surface features in the sea ice cover are detected using a newly developed surface feature picking algorithm. We derive information regarding the height, volume and geometry of surface features from 2009-2014 within the Beaufort/Chukchi and Central Arctic regions. The results are delineated by ice type to estimate the topographic variability across first-year and multi-year ice regimes.

  7. A MacCormack-TVD finite difference method to simulate the mass flow in mountainous terrain with variable computational domain

    NASA Astrophysics Data System (ADS)

    Ouyang, Chaojun; He, Siming; Xu, Qiang; Luo, Yu; Zhang, Wencheng

    2013-03-01

    A two-dimensional mountainous mass flow dynamic procedure solver (Massflow-2D) using the MacCormack-TVD finite difference scheme is proposed. The solver is implemented in Matlab on structured meshes with variable computational domain. To verify the model, a variety of numerical test scenarios, namely, the classical one-dimensional and two-dimensional dam break, the landslide in Hong Kong in 1993 and the Nora debris flow in the Italian Alps in 2000, are executed, and the model outputs are compared with published results. It is established that the model predictions agree well with both the analytical solution as well as the field observations.
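
    A minimal MacCormack predictor-corrector sketch for the classical one-dimensional dam break over the shallow-water equations; the TVD correction used by the published solver is omitted here for brevity, so this illustrates the base scheme only.

    ```python
    import numpy as np

    # 1-D shallow water: U = [h, hu], flux F = [hu, hu^2 + g h^2 / 2].
    g, nx, dx, dt = 9.81, 400, 1.0, 0.02
    h = np.where(np.arange(nx) < nx // 2, 10.0, 1.0)   # dam-break initial depth
    hu = np.zeros(nx)

    def flux(h, hu):
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h * h])

    U = np.array([h, hu])
    for _ in range(500):
        F = flux(*U)
        Up = U.copy()
        Up[:, :-1] = U[:, :-1] - dt / dx * (F[:, 1:] - F[:, :-1])    # predictor
        Fp = flux(*Up)
        U[:, 1:-1] = 0.5 * (U[:, 1:-1] + Up[:, 1:-1]
                            - dt / dx * (Fp[:, 1:-1] - Fp[:, :-2]))  # corrector
    ```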

  8. Some theorems and properties of multi-dimensional fractional Laplace transforms

    NASA Astrophysics Data System (ADS)

    Ahmood, Wasan Ajeel; Kiliçman, Adem

    2016-06-01

    The aim of this work is to study theorems and properties of the one-dimensional fractional Laplace transform, to generalize some of its properties so that they are valid for the multi-dimensional fractional Laplace transform, and to give the definition of the multi-dimensional fractional Laplace transform. This study treats the one-dimensional fractional Laplace transform for functions of a single independent variable, together with some important theorems and properties, and extends several of these properties to the multi-dimensional fractional Laplace transform. We also obtain a fractional Laplace inversion theorem after a short survey of fractional analysis based on the modified Riemann-Liouville derivative.

  9. Three Dimensional Variable-Wavelength X-Ray Bragg Coherent Diffraction Imaging

    DOE PAGES

    Cha, W.; Ulvestad, A.; Allain, M.; ...

    2016-11-23

    Here, we present and demonstrate a formalism by which three-dimensional (3D) Bragg x-ray coherent diffraction imaging (BCDI) can be implemented without moving the sample by scanning the energy of the incident x-ray beam. This capability is made possible by introducing a 3D Fourier transform that accounts for x-ray wavelength variability. We also demonstrate the approach by inverting coherent Bragg diffraction patterns from a gold nanocrystal measured with an x-ray energy scan. Furthermore, variable-wavelength BCDI will expand the breadth of feasible in situ 3D strain imaging experiments towards more diverse materials environments, especially where sample manipulation is difficult.

  10. Computing Shapes Of Cascade Diffuser Blades

    NASA Technical Reports Server (NTRS)

    Tran, Ken; Prueger, George H.

    1993-01-01

    Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series database of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.

  11. Three Dimensional Variable-Wavelength X-Ray Bragg Coherent Diffraction Imaging

    NASA Astrophysics Data System (ADS)

    Cha, W.; Ulvestad, A.; Allain, M.; Chamard, V.; Harder, R.; Leake, S. J.; Maser, J.; Fuoss, P. H.; Hruszkewycz, S. O.

    2016-11-01

    We present and demonstrate a formalism by which three-dimensional (3D) Bragg x-ray coherent diffraction imaging (BCDI) can be implemented without moving the sample by scanning the energy of the incident x-ray beam. This capability is made possible by introducing a 3D Fourier transform that accounts for x-ray wavelength variability. We demonstrate the approach by inverting coherent Bragg diffraction patterns from a gold nanocrystal measured with an x-ray energy scan. Variable-wavelength BCDI will expand the breadth of feasible in situ 3D strain imaging experiments towards more diverse materials environments, especially where sample manipulation is difficult.

  12. Three Dimensional Variable-Wavelength X-Ray Bragg Coherent Diffraction Imaging.

    PubMed

    Cha, W; Ulvestad, A; Allain, M; Chamard, V; Harder, R; Leake, S J; Maser, J; Fuoss, P H; Hruszkewycz, S O

    2016-11-25

    We present and demonstrate a formalism by which three-dimensional (3D) Bragg x-ray coherent diffraction imaging (BCDI) can be implemented without moving the sample by scanning the energy of the incident x-ray beam. This capability is made possible by introducing a 3D Fourier transform that accounts for x-ray wavelength variability. We demonstrate the approach by inverting coherent Bragg diffraction patterns from a gold nanocrystal measured with an x-ray energy scan. Variable-wavelength BCDI will expand the breadth of feasible in situ 3D strain imaging experiments towards more diverse materials environments, especially where sample manipulation is difficult.

  13. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    PubMed

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, kernel CCA, was proposed to describe nonlinear relationships between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional data feature selection problems, it also suffers from the so-called overfitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed, (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number, (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results imply the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
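
    For reference, the classical randomized Kaczmarz iteration for a consistent linear system Ax = b looks as follows; this is the linear building block only, not the paper's kernel CCA algorithm.

    ```python
    import numpy as np

    # Rows are sampled with probability proportional to their squared norm;
    # the iterate is orthogonally projected onto each sampled row's hyperplane.
    rng = np.random.default_rng(5)
    A = rng.normal(size=(200, 20))
    x_true = rng.normal(size=20)
    b = A @ x_true

    probs = (A ** 2).sum(axis=1) / (A ** 2).sum()
    x = np.zeros(20)
    for _ in range(5000):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.2e}")
    ```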

  14. Periodicity in Attachment Organelle Revealed by Electron Cryotomography Suggests Conformational Changes in Gliding Mechanism of Mycoplasma pneumoniae

    PubMed Central

    Kawamoto, Akihiro; Matsuo, Lisa; Kato, Takayuki; Yamamoto, Hiroki

    2016-01-01

    Mycoplasma pneumoniae, a pathogenic bacterium, glides on host surfaces using a unique mechanism. It forms an attachment organelle at a cell pole as a protrusion comprised of knoblike surface structures and an internal core. Here, we analyzed the three-dimensional structure of the organelle in detail by electron cryotomography. On the surface, knoblike particles formed a two-dimensional array, albeit with limited regularity. Analyses using a nonbinding mutant and an antibody showed that the knoblike particles correspond to a naplike structure that has been observed by negative-staining electron microscopy and is likely to be formed as a complex of P1 adhesin, the key protein for binding and gliding. The paired thin and thick plates feature a rigid hexagonal lattice and striations with highly variable repeat distances, respectively. The combination of variable and invariant structures in the internal core and the P1 adhesin array on the surface suggest a model in which axial extension and compression of the thick plate along a rigid thin plate is coupled with attachment to and detachment from the substrate during gliding. PMID:27073090

  15. Two-dimensional longitudinal strain assessment in the presence of myocardial contrast agents is only feasible with speckle-tracking after microbubble destruction.

    PubMed

    Cavalcante, João L; Collier, Patrick; Plana, Juan C; Agler, Deborah; Thomas, James D; Marwick, Thomas H

    2012-12-01

    Longitudinal strain (LS) imaging is an important tool for the quantification of left ventricular function and deformation, but its assessment is challenging in the presence of echocardiographic contrast agents (CAs). The aim of this study was to test the hypothesis that destruction of microbubbles using high mechanical index (MI) could allow the measurement of LS. LS was measured using speckle strain (speckle-tracking LS [STLS]) and Velocity Vector Imaging (VVI) before and after CA administration in 30 consecutive patients. Low MI was used for left ventricular opacification and three-dimensional high MI for microbubble destruction. Four different settings were tested over 60 sec: (1) baseline LS without contrast, (2) LS after CA administration with low MI (0.3), (3) LS after CA administration with high MI (0.9), and (4) LS after microbubble destruction with high MI and three-dimensional imaging. Baseline feasibility of LS assessment (99.3% and 98.2% with STLS and VVI, respectively) was reduced after CA administration using STLS at low (69%, P < .0001) and high (95.4%, P = .0002) MI as well as with VVI (93.8%, P = .004, and 84.7%, P < .0001, respectively). STLS assessment was feasible with high MI after microbubble destruction (1.7% of uninterpretable segments vs 0.7%, P = .26) but not using VVI (7.2% vs 1.8%, P < .001). Regardless of which microbubbles or image settings were used, VVI was associated with significant variability and overestimation of global LS (for low MI, +4.7%, P < .01; for high MI, +3.3%, P < .001; for high MI after microbubble destruction, +1.3%, P = .04). LS assessment is most feasible without contrast. If a CA is necessary, the calculation of LS is feasible using the speckle-tracking method, if three-dimensional imaging is used as a tool for microbubble destruction 1 min after CA administration. Copyright © 2012. Published by Mosby, Inc.

  16. Influence of rice straw cooking conditions in the soda-ethanol-water pulping on the mechanical properties of produced paper sheets.

    PubMed

    Navaee-Ardeh, S; Mohammadi-Rovshandeh, J; Pourjoozi, M

    2004-03-01

    A normalized design was used to examine the influence of independent variables (alcohol concentration, cooking time and temperature) in the catalytic soda-ethanol pulping of rice straw on various mechanical properties (breaking length, burst, tear index and folding endurance) of paper sheets obtained from each pulping process. An equation for each dependent variable as a function of the cooking variables (independent variables) was obtained by multiple non-linear regression using the least-squares method in MATLAB to develop empirical models. The ranges of alcohol concentration, cooking time and temperature were 40-65% (w/w), 150-180 min and 195-210 degrees C, respectively. Three-dimensional graphs of the dependent variables were also plotted versus the independent variables. The optimum values of breaking length, burst and tear index and folding endurance were 4683.7 (m), 30.99 (kN/g), 376.93 (mN m2/g) and 27.31, respectively. Short cooking time (150 min), high ethanol concentration (65%) and high temperature (210 degrees C) could be used to produce papers with suitable burst and tear index. However, for papers with the best breaking length and folding endurance, low temperature (195 degrees C) was desirable.

  17. A Comparative Distributed Evaluation of the NWS-RDHM using Shape Matching and Traditional Measures with In Situ and Remotely Sensed Information

    NASA Astrophysics Data System (ADS)

    KIM, J.; Bastidas, L. A.

    2011-12-01

    We evaluate, calibrate and diagnose the performance of the National Weather Service RDHM distributed model over the Durango River Basin in Colorado, simultaneously using in situ and remotely sensed information from different discharge gaging stations (USGS), information about snow cover (SCV) and snow water equivalent (SWE) in situ from several SNOTEL sites, and snow information distributed over the catchment from remotely sensed information (NOAA-NASA). In the process of evaluation we attempt to establish the optimal degree of parameter distribution over the catchment by calibration. A multi-criteria approach based on traditional measures (RMSE) and similarity-based pattern comparisons using the Hausdorff and Earth Mover's Distance (EMD) approaches is used for the overall evaluation of the model performance. These pattern-based approaches (shape matching) are found to be extremely relevant to account for the relatively large degree of inaccuracy in the remotely sensed SWE (judged inaccurate in terms of the value but reliable in terms of the distribution pattern) and the high reliability of the SCV (yes/no situation), while at the same time allowing for an evaluation that quantifies the accuracy of the model over the entire catchment considering the different types of observations. The Hausdorff norm, due to its intrinsically multi-dimensional nature, allows for the incorporation of variables such as the terrain elevation as one of the variables for evaluation. The EMD, because of its extremely high computational burden, requires the mapping of the set of evaluation variables into a two-dimensional matrix for computation.
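
    A small sketch of a Hausdorff comparison between observed and simulated patterns represented as point sets, with elevation included as a third coordinate to illustrate the multi-dimensional use mentioned above; the point sets are synthetic.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    # Each pattern is a set of (x, y, elevation) points; the symmetric
    # Hausdorff distance is the max of the two directed distances.
    rng = np.random.default_rng(6)
    observed = rng.random((500, 3)) * [100.0, 100.0, 3000.0]
    simulated = observed + rng.normal(0.0, 2.0, observed.shape)

    d = max(directed_hausdorff(observed, simulated)[0],
            directed_hausdorff(simulated, observed)[0])
    print(f"symmetric Hausdorff distance: {d:.1f}")
    ```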

  18. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid-boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.

  19. Megavoltage computed tomography image guidance with helical tomotherapy in patients with vertebral tumors: analysis of factors influencing interobserver variability.

    PubMed

    Levegrün, Sabine; Pöttgen, Christoph; Jawad, Jehad Abu; Berkovic, Katharina; Hepp, Rodrigo; Stuschke, Martin

    2013-02-01

    To evaluate megavoltage computed tomography (MVCT)-based image guidance with helical tomotherapy in patients with vertebral tumors by analyzing factors influencing interobserver variability, considered as quality criterion of image guidance. Five radiation oncologists retrospectively registered 103 MVCTs in 10 patients to planning kilovoltage CTs by rigid transformations in 4 df. Interobserver variabilities were quantified using the standard deviations (SDs) of the distributions of the correction vector components about the observers' fraction mean. To assess intraobserver variabilities, registrations were repeated after ≥4 weeks. Residual deviations after setup correction due to uncorrectable rotational errors and elastic deformations were determined at 3 craniocaudal target positions. To differentiate observer-related variations in minimizing these residual deviations across the 3-dimensional MVCT from image resolution effects, 2-dimensional registrations were performed in 30 single transverse and sagittal MVCT slices. Axial and longitudinal MVCT image resolutions were quantified. For comparison, image resolution of kilovoltage cone-beam CTs (CBCTs) and interobserver variability in registrations of 43 CBCTs were determined. Axial MVCT image resolution is 3.9 lp/cm. Longitudinal MVCT resolution amounts to 6.3 mm, assessed as full-width at half-maximum of thin objects in MVCTs with finest pitch. Longitudinal CBCT resolution is better (full-width at half-maximum, 2.5 mm for CBCTs with 1-mm slices). In MVCT registrations, interobserver variability in the craniocaudal direction (SD 1.23 mm) is significantly larger than in the lateral and ventrodorsal directions (SD 0.84 and 0.91 mm, respectively) and significantly larger compared with CBCT alignments (SD 1.04 mm). Intraobserver variabilities are significantly smaller than corresponding interobserver variabilities (variance ratio [VR] 1.8-3.1). Compared with 3-dimensional registrations, 2-dimensional registrations have significantly smaller interobserver variability in the lateral and ventrodorsal directions (VR 3.8 and 2.8, respectively) but not in the craniocaudal direction (VR 0.75). Tomotherapy image guidance precision is affected by image resolution and residual deviations after setup correction. Eliminating the effect of residual deviations yields small interobserver variabilities with submillimeter precision in the axial plane. In contrast, interobserver variability in the craniocaudal direction is dominated by the poorer longitudinal MVCT image resolution. Residual deviations after image guidance exist and need to be considered when dose gradients ultimately achievable with image guided radiation therapy techniques are analyzed. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Active Subspaces of Airfoil Shape Parameterizations

    NASA Astrophysics Data System (ADS)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
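
    A minimal sketch of active subspace discovery under an assumed test function (not an airfoil model): average the outer products of gradients of the quantity of interest and eigendecompose; the dominant eigenvectors span the active subspace.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def grad_f(x):
        # gradient of the assumed test function f(x) = (a . x)^2 + 0.005 ||x||^2
        a = np.array([1.0, 0.5, 0.0, 0.0])
        return 2 * (a @ x) * a + 0.01 * x

    X = rng.uniform(-1, 1, size=(1000, 4))    # sampled design variables
    G = np.array([grad_f(x) for x in X])
    C = G.T @ G / len(X)                      # gradient covariance matrix
    eigval, eigvec = np.linalg.eigh(C)
    print("eigenvalues (ascending):", np.round(eigval, 4))
    print("dominant direction:", np.round(eigvec[:, -1], 3))
    ```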

  1. Exploring the CAESAR database using dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Raymer, Michael L.

    2012-06-01

    The Civilian American and European Surface Anthropometry Resource (CAESAR) database containing over 40 anthropometric measurements on over 4000 humans has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.

  2. Dynamic-focusing microscope objective for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Murali, Supraja; Rolland, Jannick

    2007-01-01

    Optical Coherence Tomography (OCT) is a novel optical imaging technique that has assumed significant importance in bio-medical imaging in the last two decades because it is non-invasive and provides accurate, high resolution images of three dimensional cross-sections of body tissue, exceeding the capabilities of the current predominant imaging technique - ultrasound. In this paper, the application of high resolution OCT, known as optical coherence microscopy (OCM), is investigated for in vivo detection of abnormal skin pathology for the early diagnosis of cancer. A main challenge in OCM is maintaining invariant resolution throughout the sample. The technology presented is based on a dynamic focusing microscope imaging probe conceived for skin imaging and the detection of abnormalities in the epithelium. A novel method for dynamic focusing in the biological sample, using variable-focus lens technology to obtain three dimensional images with invariant resolution throughout the cross-section and depth of the sample, is presented and discussed. A low coherence broadband source centered at near IR wavelengths is used to illuminate the sample. The design, analysis and predicted performance of the dynamic focusing microscope objective designed for dynamic three dimensional imaging at 5 μm resolution for the chosen broadband spectrum are presented.

  3. A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles

    NASA Astrophysics Data System (ADS)

    Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.

    The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections, etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meet the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. These simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two- and three-dimensional analyses are performed in order to validate these simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.

  4. Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds

    NASA Astrophysics Data System (ADS)

    Abdo, Mohammad Gamal Mohammad Mostafa

    This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by upper bounds on the reduction errors over the envisaged range of ROM application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases where conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends the theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which one can assess the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high fidelity model or with other models of lower fidelity; this provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux.
The developed methods will be applied to representative assembly level calculations, where the size of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation.
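
    As a sketch of randomized-snapshot reduction with a probabilistic error bound of the kind discussed above (in the spirit of the a posteriori estimate of Halko, Martinsson, and Tropp, 2011, not the thesis's exact bound): build a basis from random sketches of the snapshot matrix and estimate the residual with a few Gaussian probes.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.normal(size=(500, 60)) @ rng.normal(size=(60, 200))  # rank-60 snapshots

    k, r = 40, 10                                  # subspace size, probe count
    Q, _ = np.linalg.qr(A @ rng.normal(size=(200, k)))   # randomized range finder

    # With probability >= 1 - 10**-r, the true error ||A - QQ^T A|| lies
    # below this estimate built from r Gaussian probe vectors.
    probes = A @ rng.normal(size=(200, r))
    resid = probes - Q @ (Q.T @ probes)
    est = 10 * np.sqrt(2 / np.pi) * np.linalg.norm(resid, axis=0).max()
    print(f"estimated ||A - QQ^T A|| <= {est:.3f}")
    ```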

  5. Variables in Color Perception of Young Children

    ERIC Educational Resources Information Center

    Gaines, Rosslyn

    1972-01-01

    Study investigated the effect of the stimulus variables of value, chroma, and hue in relation to sex, intelligence, and dimensional attention of kindergarten children using two reward conditions. (Author)

  6. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stetzel, KD; Aldrich, LL; Trimboli, MS

    2015-03-15

    This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
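
    A generic extended Kalman filter step of the kind referenced above, with a scalar toy model standing in for the reduced-order electrochemical model; all names and values are illustrative assumptions.

    ```python
    import numpy as np

    # One predict-update cycle: nonlinear state model f and measurement map h,
    # linearized about the current estimate via their Jacobians F and H.
    def ekf_step(x, P, z, f, h, F, H, Q, R):
        x_pred = f(x)                                   # predict state
        P_pred = F(x) @ P @ F(x).T + Q                  # predict covariance
        S = H(x_pred) @ P_pred @ H(x_pred).T + R        # innovation covariance
        K = P_pred @ H(x_pred).T @ np.linalg.inv(S)     # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))            # update with measurement z
        P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
        return x_new, P_new

    # toy usage: 1-D state (state of charge) observed through a voltage map
    f = lambda x: x
    F = lambda x: np.eye(1)
    h = lambda x: 3.0 + 0.8 * x          # hypothetical voltage vs. state of charge
    H = lambda x: np.array([[0.8]])
    x, P = np.array([0.5]), np.eye(1)
    x, P = ekf_step(x, P, np.array([3.7]), f, h, F, H,
                    1e-5 * np.eye(1), 1e-3 * np.eye(1))
    print(x, P)
    ```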

  7. Prediction model of sinoatrial node field potential using high order partial least squares.

    PubMed

    Feng, Yu; Cao, Hui; Zhang, Yanbin

    2015-01-01

    High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction models that have tensor input and output. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that, when predicting two-dimensional output variables, HOPLS had the same predictive ability as partial least squares (PLS) with a lower degree of dispersion.

  8. Periodic, complexiton solutions and stability for a (2+1)-dimensional variable-coefficient Gross-Pitaevskii equation in the Bose-Einstein condensation

    NASA Astrophysics Data System (ADS)

    Yin, Hui-Min; Tian, Bo; Zhao, Xin-Chao

    2018-06-01

    This paper presents an investigation of a (2 + 1)-dimensional variable-coefficient Gross-Pitaevskii equation in the Bose-Einstein condensation. Periodic and complexiton solutions are obtained. Soliton solutions are also obtained from the periodic solutions. Numerical solutions via the split-step method are stable. Effects of the weak and strong modulation instability on the solitons are shown: the weak modulation instability permits an observable soliton, while the strong one overwhelms its development.
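
    A minimal split-step Fourier sketch for the constant-coefficient one-dimensional cubic Schrodinger equation, a simplified relative of the variable-coefficient Gross-Pitaevskii equation above; a bright soliton should propagate stably under this scheme.

    ```python
    import numpy as np

    # Solve i u_t + 0.5 u_xx + |u|^2 u = 0 by alternating the exact nonlinear
    # phase rotation with the exact linear evolution in Fourier space.
    nx, L, dt, nt = 512, 40.0, 0.01, 2000
    x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)
    u = 1.0 / np.cosh(x)                 # bright soliton initial condition

    for _ in range(nt):
        u *= np.exp(1j * dt * np.abs(u) ** 2)                          # nonlinear step
        u = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(u))   # linear step

    mass = (np.abs(u) ** 2).sum() * (L / nx)
    print(f"mass conserved to {mass:.4f} (initial ~2.0)")
    ```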

  9. Continuous-variable gate decomposition for the Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Kalajdzievski, Timjan; Weedbrook, Christian; Rebentrost, Patrick

    2018-06-01

    In this work, we decompose the time evolution of the Bose-Hubbard model into a sequence of logic gates that can be implemented on a continuous-variable photonic quantum computer. We examine the structure of the circuit that represents this time evolution for one-dimensional and two-dimensional lattices. The elementary gates needed for the implementation are counted as a function of lattice size. We also include the contribution of the leading dipole interaction term which may be added to the Hamiltonian and its corresponding circuit.

  10. An advanced stochastic weather generator for simulating 2-D high-resolution climate variables

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2017-07-01

    A new stochastic weather generator, the Advanced WEather GENerator for a two-dimensional grid (AWE-GEN-2d), is presented. The model combines physical and stochastic approaches to simulate key meteorological variables at high spatial and temporal resolution: 2 km × 2 km and 5 min for precipitation and cloud cover and 100 m × 100 m and 1 h for near-surface air temperature, solar radiation, vapor pressure, atmospheric pressure, and near-surface wind. The model requires spatially distributed data for the calibration process, which can nowadays be obtained from remote sensing devices (weather radar and satellites), reanalysis data sets and ground stations. AWE-GEN-2d is parsimonious in terms of computational demand and therefore is particularly suitable for studies where exploring internal climatic variability at multiple spatial and temporal scales is fundamental. Applications of the model include models of environmental systems, such as hydrological and geomorphological models, where high-resolution spatial and temporal meteorological forcing is crucial. The weather generator was calibrated and validated for the Engelberg region, an area with complex topography in the Swiss Alps. Model tests show that the climate variables are generated by AWE-GEN-2d with a level of accuracy that is sufficient for many practical applications.

  11. Reinforcement Learning Trees

    PubMed Central

    Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.

    2015-01-01

    In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings. PMID:26903687

  12. Variables separation and superintegrability of the nine-dimensional MICZ-Kepler problem

    NASA Astrophysics Data System (ADS)

    Phan, Ngoc-Hung; Le, Dai-Nam; Thoi, Tuan-Quoc N.; Le, Van-Hoang

    2018-03-01

    The nine-dimensional MICZ-Kepler problem is of recent interest. This is a system describing a charged particle moving in the Coulomb field plus the field of a SO(8) monopole in a nine-dimensional space. Interestingly, this problem is equivalent to a 16-dimensional harmonic oscillator via the Hurwitz transformation. In the present paper, we report on the multiseparability, a common property of superintegrable systems, and the superintegrability of the problem. First, we show the solvability of the Schrödinger equation of the problem by the variables separation method in different coordinates. Second, based on the SO(10) symmetry algebra of the system, we construct explicitly a set of seventeen invariant operators, which are all in the second order of the momentum components, satisfying the condition of superintegrability. The number 17 coincides with the prediction of the (2n - 1) law of maximal superintegrability order in the case n = 9. Until now, this law has been accepted to apply only to scalar Hamiltonian eigenvalue equations in n-dimensional space; therefore, our results can be treated as evidence that this definition of superintegrability may also apply to some vector equations such as the Schrödinger equation for the nine-dimensional MICZ-Kepler problem.

  13. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-01

    Atomic motions in molecules are not linear. This infers that nonlinear dimensionality reduction methods can outperform linear ones in analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general purpose mapping for dimensionality reduction, beyond the context of molecular modeling.

  14. Individuals at high risk for suicide are categorically distinct from those at low risk.

    PubMed

    Witte, Tracy K; Holm-Denoma, Jill M; Zuromski, Kelly L; Gauthier, Jami M; Ruscio, John

    2017-04-01

    Although suicide risk is often thought of as existing on a graded continuum, its latent structure (i.e., whether it is categorical or dimensional) has not been empirically determined. Knowledge about the latent structure of suicide risk holds implications for suicide risk assessments, targeted suicide interventions, and suicide research. Our objectives were to determine whether suicide risk can best be understood as a categorical (i.e., taxonic) or dimensional entity, and to validate the nature of any obtained taxon. We conducted taxometric analyses of cross-sectional, baseline data from 16 independent studies funded by the Military Suicide Research Consortium. Participants (N = 1,773) primarily consisted of military personnel, and most had a history of suicidal behavior. The Comparison Curve Fit Index values for MAMBAC (.85), MAXEIG (.77), and L-Mode (.62) all strongly supported categorical (i.e., taxonic) structure for suicide risk. Follow-up analyses comparing the taxon and complement groups revealed substantially larger effect sizes for the variables most conceptually similar to suicide risk compared with variables indicating general distress. Pending replication and establishment of the predictive validity of the taxon, our results suggest the need for a fundamental shift in suicide risk assessment, treatment, and research. Specifically, suicide risk assessments could be shortened without sacrificing validity, the most potent suicide interventions could be allocated to individuals in the high-risk group, and research should generally be conducted on individuals in the high-risk group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Initial development of the two-dimensional ejector shear layer - Experimental results

    NASA Technical Reports Server (NTRS)

    Benjamin, M. A.; Dufflocq, M.; Roan, V. P.

    1993-01-01

    An experimental investigation designed to study the development of shear layers in a two-dimensional single-nozzle ejector has been completed. In this study, combinations of air/air, argon/air, helium/air, and air/helium were used as the supersonic primary and subsonic secondary, respectively. Mixing of the gases occurred in a constant-area tube 39.1 mm high by 25.4 mm wide, where the inlet static pressure was maintained at 35 kPa. The cases studied resulted in convective Mach numbers between 0.058 and 1.64, density ratios between 0.102 and 3.49, and velocity ratios between 0.065 and 0.811. The resulting data show the differences in shear-layer development for the various combinations of independent variables utilized in the investigation. The normalized growth rates in the near field were found to be similar to those of two-dimensional mixing layers. These results have enhanced the ability to analyze and design ejector systems and have provided a better understanding of the physics.
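
    The abstract reports convective Mach numbers without defining them; a commonly used definition for two-stream compressible shear layers (assuming equal specific-heat ratios in the two streams, which is an assumption here, not a statement from the report) is

        M_c = \frac{U_1 - U_2}{a_1 + a_2},

    where U_1 and U_2 are the primary and secondary stream velocities and a_1 and a_2 the corresponding speeds of sound.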

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levegruen, Sabine, E-mail: sabine.levegruen@uni-due.de; Poettgen, Christoph; Abu Jawad, Jehad

    Purpose: To evaluate megavoltage computed tomography (MVCT)-based image guidance with helical tomotherapy in patients with vertebral tumors by analyzing factors influencing interobserver variability, considered as a quality criterion of image guidance. Methods and Materials: Five radiation oncologists retrospectively registered 103 MVCTs in 10 patients to planning kilovoltage CTs by rigid transformations in 4 df. Interobserver variabilities were quantified using the standard deviations (SDs) of the distributions of the correction vector components about the observers' fraction mean. To assess intraobserver variabilities, registrations were repeated after ≥4 weeks. Residual deviations after setup correction due to uncorrectable rotational errors and elastic deformations were determined at 3 craniocaudal target positions. To differentiate observer-related variations in minimizing these residual deviations across the 3-dimensional MVCT from image resolution effects, 2-dimensional registrations were performed in 30 single transverse and sagittal MVCT slices. Axial and longitudinal MVCT image resolutions were quantified. For comparison, image resolution of kilovoltage cone-beam CTs (CBCTs) and interobserver variability in registrations of 43 CBCTs were determined. Results: Axial MVCT image resolution is 3.9 lp/cm. Longitudinal MVCT resolution amounts to 6.3 mm, assessed as full-width at half-maximum of thin objects in MVCTs with finest pitch. Longitudinal CBCT resolution is better (full-width at half-maximum, 2.5 mm for CBCTs with 1-mm slices). In MVCT registrations, interobserver variability in the craniocaudal direction (SD 1.23 mm) is significantly larger than in the lateral and ventrodorsal directions (SD 0.84 and 0.91 mm, respectively) and significantly larger compared with CBCT alignments (SD 1.04 mm). Intraobserver variabilities are significantly smaller than corresponding interobserver variabilities (variance ratio [VR] 1.8-3.1). Compared with 3-dimensional registrations, 2-dimensional registrations have significantly smaller interobserver variability in the lateral and ventrodorsal directions (VR 3.8 and 2.8, respectively) but not in the craniocaudal direction (VR 0.75). Conclusion: Tomotherapy image guidance precision is affected by image resolution and residual deviations after setup correction. Eliminating the effect of residual deviations yields small interobserver variabilities with submillimeter precision in the axial plane. In contrast, interobserver variability in the craniocaudal direction is dominated by the poorer longitudinal MVCT image resolution. Residual deviations after image guidance exist and need to be considered when dose gradients ultimately achievable with image guided radiation therapy techniques are analyzed.

  17. Boundary Conditions for Infinite Conservation Laws

    NASA Astrophysics Data System (ADS)

    Rosenhaus, V.; Bruzón, M. S.; Gandarias, M. L.

    2016-12-01

    Regular soliton equations (KdV, sine-Gordon, NLS) are known to possess infinite sets of local conservation laws. Some other classes of nonlinear PDE possess infinite-dimensional symmetries parametrized by arbitrary functions of independent or dependent variables; among them are the Zabolotskaya-Khokhlov, Kadomtsev-Petviashvili, and Davey-Stewartson equations and the Born-Infeld equation. Boundary conditions were shown to play an important role for the existence of local conservation laws associated with infinite-dimensional symmetries. In this paper, we analyze boundary conditions for the infinite conserved densities of regular soliton equations, namely KdV, potential KdV, the sine-Gordon equation, and the nonlinear Schrödinger equation, and compare them with boundary conditions for the conserved densities obtained from infinite-dimensional symmetries with arbitrary functions of independent and dependent variables.
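
    The abstract does not list the conserved densities themselves; as a classical illustration (standard results, not taken from this paper), the KdV equation in the normalization

        u_t + 6 u u_x + u_{xxx} = 0

    possesses an infinite hierarchy of local conservation laws \partial_t \rho_k + \partial_x J_k = 0, whose first three densities are

        \rho_1 = u, \qquad \rho_2 = u^2, \qquad \rho_3 = u^3 - \tfrac{1}{2} u_x^2.

    Each integral \int \rho_k \, dx is conserved only if the flux J_k vanishes at the ends of the domain, for instance when u and its derivatives decay as |x| \to \infty, which is precisely the role played by the boundary conditions analyzed above.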

  18. Thrust performance of a variable-geometry, divergent exhaust nozzle on a turbojet engine at altitude

    NASA Technical Reports Server (NTRS)

    Straight, D. M.; Collom, R. R.

    1983-01-01

    A variable-geometry, low-aspect-ratio, nonaxisymmetric, two-dimensional, convergent-divergent exhaust nozzle was tested at simulated altitude on a turbojet engine to obtain baseline axial, dry thrust performance over wide ranges of operating nozzle pressure ratios, throat areas, and internal expansion area ratios. The thrust data showed good agreement with theory and scale-model test results after the data were corrected for seal leakage and coolant losses. Wall static pressure profile data were also obtained and compared with one-dimensional theory and scale-model data. The pressure data indicate greater three-dimensional flow effects in the full-scale tests than with models. The leakage and coolant penalties were substantial, and the method used to determine them is described.

  19. Measuring monotony in two-dimensional samples

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida; Kachapov, Ilias

    2010-04-01

    This note introduces a monotony coefficient as a new measure of monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample is between |r| and 1, where r is Pearson's correlation coefficient for the sample, and that the monotony coefficient equals 1 for any monotone increasing sample and equals -1 for any monotone decreasing sample. This article contains a few examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence for a nonlinear relationship than the Pearson, Spearman, and Kendall correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful for finding pairs of dependent variables in a large dataset with many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
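
    The monotony coefficient itself is defined in the paper rather than reproduced here; the short Python sketch below only illustrates the motivating observation, using SciPy's standard estimators on a strictly monotone but nonlinear sample.

        import numpy as np
        from scipy import stats

        x = np.linspace(0.0, 4.0, 200)
        y = np.exp(x)  # monotone increasing, strongly nonlinear

        # Pearson's r falls well below 1 on nonlinear monotone data, while the
        # rank-based measures reach 1 exactly.
        print("Pearson :", stats.pearsonr(x, y)[0])    # noticeably < 1
        print("Spearman:", stats.spearmanr(x, y)[0])   # 1.0
        print("Kendall :", stats.kendalltau(x, y)[0])  # 1.0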

  20. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis

    PubMed Central

    Gong, Xiajing; Hu, Meng

    2018-01-01

    Additional value can potentially be created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featuring a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by the concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
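
    The concordance index used above to assess prediction performance has a standard reference form (Harrell's c); the following plain O(n^2) Python implementation is a sketch of that metric, not the authors' pipeline.

        import numpy as np

        def concordance_index(time, event, risk):
            """Harrell's concordance index: the fraction of comparable pairs
            in which the higher-risk subject experiences the event earlier.
            A pair (i, j) is comparable when time[i] < time[j] and subject i
            had the event; risk ties count as half-concordant."""
            time, event, risk = map(np.asarray, (time, event, risk))
            concordant, comparable = 0.0, 0
            n = len(time)
            for i in range(n):
                if not event[i]:
                    continue
                for j in range(n):
                    if time[i] < time[j]:
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1.0
                        elif risk[i] == risk[j]:
                            concordant += 0.5
            return concordant / comparable

        # Toy check: a risk score that perfectly orders the event times gives c = 1.
        t = [2.0, 5.0, 1.0, 8.0]
        e = [1, 1, 1, 0]
        r = [0.8, 0.4, 0.9, 0.1]
        print(concordance_index(t, e, r))  # 1.0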

  1. Knee joint kinetics in response to multiple three-dimensional printed, customised foot orthoses for the treatment of medial compartment knee osteoarthritis.

    PubMed

    Allan, Richard; Woodburn, James; Telfer, Scott; Abbott, Mandy; Steultjens, Martijn Pm

    2017-06-01

    The knee adduction moment is consistently used as a surrogate measure of medial compartment loading. Foot orthoses are designed to reduce the knee adduction moment via lateral wedging. The 'dose' of wedging required to optimally unload the affected compartment is unknown and variable between individuals. This study explores a personalised approach via three-dimensional printed foot orthoses to assess the biomechanical response when two design variables are altered: orthotic length and lateral wedging. Foot orthoses were created for 10 individuals with symptomatic medial knee osteoarthritis and 10 controls. Computer-aided design software was used to design four full and four three-quarter-length foot orthoses per participant, each with lateral posting of 0° 'neutral', 5° rearfoot, 10° rearfoot and 5° forefoot/10° rearfoot. Three-dimensional printers were used to manufacture all foot orthoses. Three-dimensional gait analyses were performed and selected knee kinetics were analysed: first peak knee adduction moment, second peak knee adduction moment, first knee flexion moment and knee adduction moment impulse. Full-length foot orthoses provided greater reductions in first peak knee adduction moment (p = 0.038), second peak knee adduction moment (p = 0.018) and knee adduction moment impulse (p = 0.022) compared to three-quarter-length foot orthoses. A dose effect of lateral wedging was found for first peak knee adduction moment (p < 0.001), second peak knee adduction moment (p < 0.001) and knee adduction moment impulse (p < 0.001), indicating greater unloading for higher wedging angles. Significant interaction effects were found for foot orthosis length and participant group in second peak knee adduction moment (p = 0.028) and knee adduction moment impulse (p = 0.036). Significant interaction effects were found between orthotic length and wedging condition for second peak knee adduction moment (p = 0.002). No significant changes in first knee flexion moment were found. Individual heterogeneous responses to foot orthosis conditions were observed for first peak knee adduction moment, second peak knee adduction moment and knee adduction moment impulse. Biomechanical response is highly variable with personalised foot orthoses. Findings indicate that the tailoring of a personalised intervention could provide an additional benefit over standard interventions and that a three-dimensional printing approach to foot orthosis manufacturing is a viable alternative to the standard methods.

  2. Multimodal, high-dimensional, model-based, Bayesian inverse problems with applications in biomechanics

    NASA Astrophysics Data System (ADS)

    Franck, I. M.; Koutsourelakis, P. S.

    2017-01-01

    This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
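
    A minimal sketch of the validation idea, under stated assumptions: a two-component Gaussian mixture stands in for a fitted approximate posterior, and self-normalized importance sampling corrects for (and diagnoses) the approximation bias. The toy 1D target below replaces the paper's expensive forward-model likelihood.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def log_target(x):  # unnormalized bimodal "posterior" (toy example)
            return np.logaddexp(stats.norm.logpdf(x, -2.0, 0.5),
                                stats.norm.logpdf(x, 3.0, 1.0))

        # Mixture approximation q(x), assumed already fitted by adaptive inference.
        weights = np.array([0.5, 0.5])
        means, sds = np.array([-2.0, 3.0]), np.array([0.6, 1.2])

        comp = rng.choice(2, size=20000, p=weights)
        x = rng.normal(means[comp], sds[comp])
        log_q = np.logaddexp.reduce(
            np.log(weights) + stats.norm.logpdf(x[:, None], means, sds), axis=1)

        log_w = log_target(x) - log_q      # importance weights correct the bias
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                       # self-normalization
        print("posterior mean estimate:", np.sum(w * x))
        print("effective sample size:", 1.0 / np.sum(w**2))  # bias diagnostic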

  3. Data-driven clustering of rain events: microphysics information derived from macro-scale observations

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    Rain time series records are generally studied using rainfall rate or accumulation parameters, which are estimated for a fixed duration (typically 1 min, 1 h or 1 day). In this study we use the concept of rain events. The aim of the first part of this paper is to establish a parsimonious characterization of rain events, using a minimal set of variables selected among those normally used for the characterization of these events. A methodology is proposed, based on the combined use of a genetic algorithm (GA) and self-organizing maps (SOMs). It can be advantageous to use an SOM, since it allows a high-dimensional data space to be mapped onto a two-dimensional space while preserving, in an unsupervised manner, most of the information contained in the initial space topology. The 2-D maps obtained in this way allow the relationships between variables to be determined and redundant variables to be removed, thus leading to a minimal subset of variables. We verify that such 2-D maps make it possible to determine the characteristics of all events, on the basis of only five features (the event duration, the peak rain rate, the rain event depth, the standard deviation of the rain rate event and the absolute rain rate variation of the order of 0.5). From this minimal subset of variables, hierarchical cluster analyses were carried out. We show that clustering into two classes allows the conventional convective and stratiform classes to be determined, whereas classification into five classes allows this convective-stratiform classification to be further refined. Finally, our study made it possible to reveal the presence of some specific relationships between these five classes and the microphysics of their associated rain events.
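
    A minimal sketch of the SOM step, assuming the five retained event features are available as a standardized array; it uses the third-party MiniSom package rather than the authors' implementation, and the random data are a placeholder for real event descriptors.

        import numpy as np
        from minisom import MiniSom  # pip install minisom

        # Hypothetical (n_events, 5) array: duration, peak rain rate, event
        # depth, rain-rate standard deviation, absolute rain rate variation.
        rng = np.random.default_rng(0)
        events = rng.normal(size=(500, 5))

        som = MiniSom(10, 10, input_len=5, sigma=1.5, learning_rate=0.5,
                      random_seed=0)
        som.random_weights_init(events)
        som.train_random(events, num_iteration=5000)

        # Map each event to its best-matching unit on the 2-D grid; the map
        # preserves most of the topology of the 5-D feature space.
        bmus = np.array([som.winner(e) for e in events])
        print(bmus[:5])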

  4. Using crown condition variables as indicators of forest health

    Treesearch

    Stanley J. Zarnoch; William A. Bechtold; K.W. Stolte

    2004-01-01

    Indicators of forest health used in previous studies have focused on crown variables analyzed individually at the tree level by summarizing over all species. This approach has the virtue of simplicity but does not account for the three-dimensional attributes of a tree crown, the multivariate nature of the crown variables, or variability among species. To alleviate...

  5. New Patterns of the Two-Dimensional Rogue Waves: (2+1)-Dimensional Maccari System

    NASA Astrophysics Data System (ADS)

    Wang, Gai-Hua; Wang, Li-Hong; Rao, Ji-Guang; He, Jing-Song

    2017-06-01

    The ocean rogue wave is a puzzling, destructive phenomenon that is not yet thoroughly understood. The two-dimensional nature of this wave has inspired vast endeavors to recognize new patterns of rogue waves based on dynamical equations with two spatial variables and one temporal variable, a crucial step toward preventing this disastrous event at the earliest stage. Along these lines, we present twelve new patterns of two-dimensional rogue waves, which are reduced from a rational and explicit formula of the solutions for a (2+1)-dimensional Maccari system. The extreme points (lines) of the first-order lumps (rogue waves) are discussed according to their analytical formulas. For the lower-order rogue waves, we show explicitly that the parameter b_2 plays a significant role in controlling these patterns. Supported by the National Natural Science Foundation of China under Grant No. 11671219 and the K. C. Wong Magna Fund in Ningbo University; Gai-Hua Wang is also supported by the Scientific Research Foundation of the Graduate School of Ningbo University.

  6. Design of efficient circularly symmetric two-dimensional variable digital FIR filters.

    PubMed

    Bindima, Thayyil; Elias, Elizabeth

    2016-05-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.

  7. Design of efficient circularly symmetric two-dimensional variable digital FIR filters

    PubMed Central

    Bindima, Thayyil; Elias, Elizabeth

    2016-01-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739
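
    To make the Farrow idea in the two entries above concrete, here is a minimal sketch of the simplest possible case, a two-subfilter (linear-interpolation) fractional-delay filter; the papers' designs use higher-order subfilters with CSD-coded coefficients and an ABC-optimized response, none of which is reproduced here.

        import numpy as np

        def farrow_fractional_delay(x, mu):
            """Minimal Farrow structure: the outputs of fixed FIR subfilters
            are combined with powers of the tunable delay mu in [0, 1),
                y(n) = v0(n) + mu * v1(n),
            so the delay changes at run time without redesigning any filter."""
            x = np.asarray(x, dtype=float)
            x_prev = np.concatenate(([0.0], x[:-1]))  # x(n-1)
            v0 = x                                    # subfilter C0 = [1]
            v1 = x_prev - x                           # subfilter C1 = [-1, 1]
            return v0 + mu * v1                       # = (1-mu)*x(n) + mu*x(n-1)

        sig = np.sin(2 * np.pi * 0.05 * np.arange(32))
        print(farrow_fractional_delay(sig, 0.25)[:5])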

  8. Three-dimensional marginal separation

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1988-01-01

    The three-dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space and partly in physical space. Qualitatively, the results are similar to previously reported two-dimensional results (which are also computed to test the accuracy of the numerical scheme); quantitatively, however, the three-dimensional results are markedly different.

  9. Parallel Planes Information Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian

    2015-12-26

    This software presents a user-provided multivariate dataset as an interactive three-dimensional visualization so that the user can explore the correlations between variables in the observations and the distribution of observations among the variables.

  10. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    NASA Astrophysics Data System (ADS)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
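
    A minimal sketch of the elliptic solve described above, with a 1D Poisson matrix standing in for the model's three-dimensional discretized elliptic equation; SciPy's restarted GMRES and an incomplete-LU preconditioner play the roles of the GMRES-with-restart-and-preconditioner solver mentioned in the abstract.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        # 1D Poisson matrix (second-difference operator) as a stand-in.
        A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                     shape=(n, n), format="csc")
        b = np.ones(n)

        ilu = spla.spilu(A, drop_tol=1e-5)                  # ILU preconditioner
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)

        x, info = spla.gmres(A, b, M=M, restart=30)         # restarted GMRES
        print("converged" if info == 0 else f"info={info}",
              "residual:", np.linalg.norm(A @ x - b))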

  11. Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique

    NASA Astrophysics Data System (ADS)

    Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang

    2017-04-01

    The purpose of this study is to assess sediment discharge for rivers in South Korea using data mining. The Model Tree was selected for this study as the most suitable technique among data mining methods for explicitly analyzing the relationship between input and output variables in large and diverse databases. To derive a sediment discharge equation with the Model Tree, the dimensionless variables used in the Engelund and Hansen, Ackers and White, Brownlie, and van Rijn equations were adopted as analytical conditions. In addition, a total of 14 analytical conditions were set, considering dimensional variables and combinations of dimensionless and dimensional variables according to the relationship between flow and sediment transport. For each case, the results were evaluated by means of the discrepancy ratio, root mean square error, mean absolute percentage error, and correlation coefficient. The best fit was obtained using five dimensional variables: velocity, depth, slope, width, and median grain diameter. The closest approximation to this best fit was estimated from the depth, slope, width, median grain size of the bed material, and dimensionless tractive force, except for the slope among the single-variable conditions. In addition, the three most appropriate Model Trees were compared with the Ackers and White equation, the best fit among the existing equations; both the mean discrepancy ratio and the correlation coefficient of the Model Trees improved on those of the Ackers and White equation.
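
    scikit-learn offers no M5-style Model Tree, so the sketch below uses a plain regression tree as a structural stand-in for the approach described above; the five predictors mirror the best-performing condition, and both the data and the power-law ground truth are synthetic.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.uniform(0.2, 3.0, n),    # velocity (m/s)
            rng.uniform(0.1, 5.0, n),    # depth (m)
            rng.uniform(1e-5, 1e-2, n),  # slope (-)
            rng.uniform(5.0, 300.0, n),  # width (m)
            rng.uniform(0.1, 10.0, n),   # median grain diameter (mm)
        ])
        # Hypothetical power-law ground truth standing in for measured loads.
        y = 1e-3 * X[:, 0] ** 3 * X[:, 1] ** 0.5 * rng.lognormal(0.0, 0.3, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20)
        tree.fit(X_tr, np.log(y_tr))   # fit in log space, common for loads
        pred = np.exp(tree.predict(X_te))
        print("mean discrepancy ratio:", np.mean(pred / y_te))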

  12. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust to degradation from noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
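
    LHR itself is not reproduced here; as a baseline, the following sketch implements the classical RELIEF weighting that LHR builds on (nearest hit and miss per sampled instance, with a global margin update), assuming binary classes and features scaled to [0, 1].

        import numpy as np

        def relief_weights(X, y, n_iter=200, rng=None):
            """Classical RELIEF for binary classes: reward features that
            separate an instance from its nearest miss and penalize those
            that separate it from its nearest hit."""
            rng = np.random.default_rng(rng)
            n, d = X.shape
            w = np.zeros(d)
            for _ in range(n_iter):
                i = rng.integers(n)
                dists = np.abs(X - X[i]).sum(axis=1)  # L1 distances
                dists[i] = np.inf                     # exclude the instance itself
                same = y == y[i]
                hit = np.argmin(np.where(same, dists, np.inf))
                miss = np.argmin(np.where(~same, dists, np.inf))
                w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
            return w / n_iter

        # Toy data: only feature 0 is informative.
        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 300)
        X = rng.uniform(size=(300, 5))
        X[:, 0] = 0.8 * y + 0.2 * X[:, 0]
        print(relief_weights(X, y).round(3))  # weight of feature 0 dominates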

  13. Crack-tip-opening angle measurements and crack tunneling under stable tearing in thin sheet 2024-T3 aluminum alloy

    NASA Technical Reports Server (NTRS)

    Dawicke, D. S.; Sutton, M. A.

    1993-01-01

    The stable tearing behavior of thin-sheet 2024-T3 aluminum alloy was studied for middle crack tension specimens having initial cracks that were either flat (low fatigue stress) or 45-degree through-thickness slant cracks (high fatigue stress). The critical crack-tip-opening angle (CTOA) values during stable tearing were measured by two independent methods, optical microscopy and digital image correlation. Results from the two methods agreed well. The CTOA measurements and observations of the fracture surfaces showed that the initial stable tearing behavior of the low and high fatigue stress tests is significantly different. The cracks in the low fatigue stress tests underwent a transition from flat-to-slant crack growth, during which the CTOA values were high and significant crack tunneling occurred. After crack growth equal to about the thickness, CTOA reached a constant value of 6 deg, and after crack growth equal to about twice the thickness, crack tunneling stabilized. The initial high CTOA values in the low fatigue stress tests coincided with large three-dimensional crack front shape changes due to a variation in the through-thickness crack tip constraint. The cracks in the high fatigue stress tests reached the same constant CTOA value after crack growth equal to about the thickness, but produced only a slightly higher CTOA value during initial crack growth. For crack growth on the 45-degree slant, the crack front and local field variables are still highly three-dimensional. However, the constant CTOA values and stable crack front shape may allow the process to be approximated with two-dimensional models.

  14. Interstudy reproducibility of dimensional and functional measurements between cine magnetic resonance studies in the morphologically abnormal left ventricle.

    PubMed

    Semelka, R C; Tomei, E; Wagner, S; Mayo, J; Caputo, G; O'Sullivan, M; Parmley, W W; Chatterjee, K; Wolfe, C; Higgins, C B

    1990-06-01

    The validity of geometric formulas to derive mass and volumes in the morphologically abnormal left ventricle is problematic. Imaging techniques that are tomographic and therefore inherently three-dimensional should be more reliable and reproducible between studies in such ventricles. Determination of reproducibility between studies is essential to define the limits of an imaging technique for evaluating the response to therapy. Sequential cine magnetic resonance (MR) studies were performed on patients with dilated cardiomyopathy (n = 11) and left ventricular hypertrophy (n = 8) within a short interval in order to assess interstudy reproducibility. Left ventricular mass, volumes, ejection fraction, and end-systolic wall stress were determined by two independent observers. Between studies, left ventricular mass was highly reproducible for hypertrophied and dilated ventricles, with percent variability less than 6%. Ejection fraction and end-diastolic volume showed close reproducibility between studies, with percent variability less than 5%. End-systolic volume varied by 4.3% and 4.5% in dilated cardiomyopathy and 8.4% and 7.2% in left ventricular hypertrophy for the two observers. End-systolic wall stress, which is derived from multiple measurements, varied the most, with percent variability of 17.2% and 15.7% in dilated cardiomyopathy and 14.8% and 13% in left ventricular hypertrophy, respectively. The results of this study demonstrate that mass, volume, and functional measurements are reproducible in morphologically abnormal ventricles.

  15. Two dimensional fully nonlinear numerical wave tank based on the BEM

    NASA Astrophysics Data System (ADS)

    Sun, Zhe; Pang, Yongjie; Li, Hongwei

    2012-12-01

    The development of a two-dimensional numerical wave tank (NWT) with a rocker- or piston-type wavemaker, based on the high-order boundary element method (BEM) and a mixed Eulerian-Lagrangian (MEL) formulation, is examined. The Cauchy principal value (CPV) integral is calculated by a special Gauss-type quadrature and a change of variable. In addition, the explicit truncated Taylor expansion formula is employed in the time-stepping process. A modified double-node method is adopted to tackle the corner problem, and a damping-zone technique is used to absorb the propagation of the free surface wave at the end of the tank. A variety of waves are generated by the NWT, for example, monochromatic, solitary, and irregular waves. The results confirm that the NWT model is efficient and stable.

  16. A Review on Dimension Reduction

    PubMed Central

    Ma, Yanyuan; Zhu, Liping

    2013-01-01

    Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with a minimum assumption on the specific regression form, it enjoys attractive advantages and encounters unique challenges in comparison with the variable selection approach. We review the current literature on dimension reduction with an emphasis on the two most popular models, where the dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures at different levels of detail, with the intention of focusing on the ideas underneath them rather than on technicalities. We also discuss some unsolved problems in this area for potential future research. PMID:23794782
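
    The review does not prescribe code; as a representative member of the family it surveys, here is a minimal sliced inverse regression (SIR) sketch (Li, 1991) that estimates a low-dimensional linear-combination subspace from synthetic data.

        import numpy as np

        def sliced_inverse_regression(X, y, n_slices=10, n_dirs=2):
            """Minimal SIR: whiten X, average the whitened data within slices
            of the sorted response, and take the top eigenvectors of the
            weighted slice-mean covariance."""
            n, p = X.shape
            Xc = X - X.mean(axis=0)
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            Z = np.sqrt(n) * U                      # whitened data, cov ~ I
            slices = np.array_split(np.argsort(y), n_slices)
            M = np.zeros((p, p))
            for idx in slices:
                m = Z[idx].mean(axis=0)
                M += (len(idx) / n) * np.outer(m, m)
            evals, evecs = np.linalg.eigh(M)
            dirs_z = evecs[:, ::-1][:, :n_dirs]     # leading eigenvectors
            beta = Vt.T @ (dirs_z / s[:, None]) * np.sqrt(n)  # back-transform
            return beta / np.linalg.norm(beta, axis=0)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 6))
        y = (X[:, 0] + X[:, 1]) ** 3 + 0.1 * rng.normal(size=1000)
        print(sliced_inverse_regression(X, y, n_dirs=1).round(2))
        # recovers (up to sign) the direction (1, 1, 0, 0, 0, 0) / sqrt(2)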

  17. Terminal shock position and restart control of a Mach 2.7, two-dimensional, twin duct mixed compression inlet

    NASA Technical Reports Server (NTRS)

    Cole, G. L.; Neiner, G. H.; Baumbick, R. J.

    1973-01-01

    Experimental results of terminal shock and restart control system tests of a two-dimensional, twin-duct mixed compression inlet are presented. High-response (110-Hz bandwidth) overboard bypass doors were used, both as the variable to control shock position and as the means of disturbing the inlet airflow. An inherent instability in inlet shock position resulted in noisy feedback signals and thus restricted the terminal shock position control performance that was achieved. Proportional-plus-integral type controllers using either throat exit static pressure or shock position sensor feedback gave adequate low-frequency control. The inlet restart control system kept the terminal shock control loop closed throughout the unstart-restart transient. The capability to restart the inlet was not limited by the inlet instability.
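
    A minimal sketch of a discrete proportional-plus-integral loop of the kind described above; the gains, time step, and first-order plant standing in for the shock-position response are assumed values, not NASA's inlet dynamics.

        # Discrete PI control of a first-order plant (hypothetical stand-in
        # for the shock-position response to bypass-door commands).
        kp, ki, dt, tau = 2.0, 4.0, 0.001, 0.05
        setpoint, y, integral = 1.0, 0.0, 0.0

        for _ in range(5000):                    # 5 seconds of simulated time
            error = setpoint - y                 # shock-position error signal
            integral += error * dt
            u = kp * error + ki * integral       # PI command (bypass-door area)
            y += dt * (u - y) / tau              # first-order plant response
        print(round(y, 4))                       # settles at the setpoint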

  18. Intermittent Lagrangian velocities and accelerations in three-dimensional porous medium flow.

    PubMed

    Holzner, M; Morales, V L; Willmann, M; Dentz, M

    2015-07-01

    Intermittency of Lagrangian velocity and acceleration is a key to understanding transport in complex systems ranging from fluid turbulence to flow in porous media. High-resolution optical particle tracking in a three-dimensional (3D) porous medium provides detailed 3D information on Lagrangian velocities and accelerations. We find sharp transitions close to pore throats, and low flow variability in the pore bodies, which gives rise to stretched exponential Lagrangian velocity and acceleration distributions characterized by a sharp peak at low velocity, superlinear evolution of particle dispersion, and double-peak behavior in the propagators. The velocity distribution is quantified in terms of pore geometry and flow connectivity, which forms the basis for a continuous-time random-walk model that sheds light on the observed Lagrangian flow and transport behaviors.

  19. Principal component analysis on a torus: Theory and application to protein dynamics.

    PubMed

    Sittel, Florian; Filk, Thomas; Stock, Gerhard

    2017-12-28

    A dimensionality reduction method for high-dimensional circular data is developed, which is based on a principal component analysis (PCA) of data points on a torus. Adopting a geometrical view of PCA, various distance measures on a torus are introduced and the associated problem of projecting data onto the principal subspaces is discussed. The main idea is that the (periodicity-induced) projection error can be minimized by transforming the data such that the maximal gap of the sampling is shifted to the periodic boundary. In a second step, the covariance matrix and its eigendecomposition can be computed in a standard manner. Adopting molecular dynamics simulations of two well-established biomolecular systems (Aib9 and villin headpiece), the potential of the method to analyze the dynamics of backbone dihedral angles is demonstrated. The new approach allows for a robust and well-defined construction of metastable states and provides low-dimensional reaction coordinates that accurately describe the free energy landscape. Moreover, it offers a direct interpretation of covariances and principal components in terms of the angular variables. Apart from its application to PCA, the method of maximal gap shifting is general and can be applied to any other dimensionality reduction method for circular data.

  20. Principal component analysis on a torus: Theory and application to protein dynamics

    NASA Astrophysics Data System (ADS)

    Sittel, Florian; Filk, Thomas; Stock, Gerhard

    2017-12-01

    A dimensionality reduction method for high-dimensional circular data is developed, which is based on a principal component analysis (PCA) of data points on a torus. Adopting a geometrical view of PCA, various distance measures on a torus are introduced and the associated problem of projecting data onto the principal subspaces is discussed. The main idea is that the (periodicity-induced) projection error can be minimized by transforming the data such that the maximal gap of the sampling is shifted to the periodic boundary. In a second step, the covariance matrix and its eigendecomposition can be computed in a standard manner. Adopting molecular dynamics simulations of two well-established biomolecular systems (Aib9 and villin headpiece), the potential of the method to analyze the dynamics of backbone dihedral angles is demonstrated. The new approach allows for a robust and well-defined construction of metastable states and provides low-dimensional reaction coordinates that accurately describe the free energy landscape. Moreover, it offers a direct interpretation of covariances and principal components in terms of the angular variables. Apart from its application to PCA, the method of maximal gap shifting is general and can be applied to any other dimensionality reduction method for circular data.
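
    A minimal sketch of the maximal gap shifting described in the two entries above, followed by standard PCA, under stated assumptions; the von Mises samples are a placeholder for real backbone dihedral angles.

        import numpy as np

        def shift_maximal_gap(angles):
            """For one circular variable (radians): find the largest empty arc
            in the sample and rotate the data so that this gap lies at the
            periodic boundary, minimizing the projection error of ordinary PCA."""
            a = np.sort(np.mod(angles, 2 * np.pi))
            gaps = np.diff(np.append(a, a[0] + 2 * np.pi))  # includes wraparound
            k = np.argmax(gaps)
            boundary = a[k] + gaps[k]                       # upper edge of max gap
            return np.mod(angles - boundary, 2 * np.pi)

        # Hypothetical dihedral-angle data of shape (n_frames, n_angles).
        rng = np.random.default_rng(0)
        dihedrals = rng.vonmises(mu=2.0, kappa=4.0, size=(1000, 4))
        shifted = np.column_stack([shift_maximal_gap(dihedrals[:, j])
                                   for j in range(dihedrals.shape[1])])

        # After the per-angle shift, covariance and eigendecomposition proceed
        # in the standard manner.
        centered = shifted - shifted.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(centered, rowvar=False))
        pcs = centered @ evecs[:, ::-1]          # principal components
        print(evals[::-1].round(3))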

  1. Hyperspectral target detection using manifold learning and multiple target spectra

    DOE PAGES

    Ziemann, Amanda K.; Theiler, James; Messinger, David W.

    2016-03-31

    Imagery collected from satellites and airborne platforms provides an important tool for remotely analyzing the content of a scene. In particular, the ability to remotely detect a specific material within a scene is of critical importance in nonproliferation and other applications. The sensor systems that process hyperspectral images collect the high-dimensional spectral information necessary to perform these detection analyses. For a d-dimensional hyperspectral image, however, where d is the number of spectral bands, it is common for the data to inherently occupy an m-dimensional space with m << d. In the remote sensing community, this has led to recent interest in the use of manifold learning, which seeks to characterize the embedded lower-dimensional, nonlinear manifold that the data discretely approximate. The research presented in this paper focuses on a graph theory and manifold learning approach to target detection, using an adaptive version of locally linear embedding that is biased to separate target pixels from background pixels. Finally, this approach incorporates multiple target signatures for a particular material, accounting for the spectral variability that is often present within a solid material of interest.
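
    The adaptive, target-biased embedding of the paper is not public; the sketch below runs standard (unbiased) locally linear embedding from scikit-learn on a placeholder spectral matrix to illustrate the m << d reduction.

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        # Hypothetical (n_pixels, n_bands) matrix of hyperspectral spectra;
        # random values stand in for real scene data.
        rng = np.random.default_rng(0)
        pixels = rng.random((3000, 120))  # d = 120 spectral bands

        lle = LocallyLinearEmbedding(n_neighbors=15, n_components=10)
        embedded = lle.fit_transform(pixels)  # m = 10 << d = 120
        print(embedded.shape, "reconstruction error:", lle.reconstruction_error_)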

  2. Coupled Research in Ocean Acoustics and Signal Processing for the Next Generation of Underwater Acoustic Communication Systems

    DTIC Science & Technology

    2016-08-05

    technique which used unobserved "intermediate" variables to break a high-dimensional estimation problem such as least-squares (LS) optimization of a large...Least Squares (GEM-LS). The estimator is iterative and the work in this time period focused on characterizing the convergence properties of this...approach by relaxing the statistical assumptions, which is termed the Relaxed Approximate Graph-Structured Recursive Least Squares (RAGS-RLS). This

  3. A global multiscale mathematical model for the human circulation with emphasis on the venous system.

    PubMed

    Müller, Lucas O; Toro, Eleuterio F

    2014-07-01

    We present a global, closed-loop, multiscale mathematical model for the human circulation including the arterial system, the venous system, the heart, the pulmonary circulation and the microcirculation. A distinctive feature of our model is the detailed description of the venous system, particularly for intracranial and extracranial veins. Medium to large vessels are described by one-dimensional hyperbolic systems while the rest of the components are described by zero-dimensional models represented by differential-algebraic equations. Robust, high-order accurate numerical methodology is implemented for solving the hyperbolic equations, which are adopted from a recent reformulation that includes variable material properties. Because of the large intersubject variability of the venous system, we perform a patient-specific characterization of major veins of the head and neck using MRI data. Computational results are carefully validated using published data for the arterial system and most regions of the venous system. For head and neck veins, validation is carried out through a detailed comparison of simulation results against patient-specific phase-contrast MRI flow quantification data. A merit of our model is its global, closed-loop character; the imposition of highly artificial boundary conditions is avoided. Applications in mind include a vast range of medical conditions. Of particular interest is the study of some neurodegenerative diseases, whose venous haemodynamic connection has recently been identified by medical researchers.

  4. Asymptotic and spectral analysis of the gyrokinetic-waterbag integro-differential operator in toroidal geometry

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Coulette, David

    2016-08-01

    Achieving plasmas with good stability and confinement properties is a key research goal for magnetic fusion devices. The underlying equations are the Vlasov-Poisson and Vlasov-Maxwell (VPM) equations in three space variables, three velocity variables, and one time variable. Even in those somewhat academic cases where global equilibrium solutions are known, studying their stability requires the analysis of the spectral properties of the linearized operator, a daunting task. We have identified a model, for which not only equilibrium solutions can be constructed, but many of their stability properties are amenable to rigorous analysis. It uses a class of solution to the VPM equations (or to their gyrokinetic approximations) known as waterbag solutions which, in particular, are piecewise constant in phase-space. It also uses, not only the gyrokinetic approximation of fast cyclotronic motion around magnetic field lines, but also an asymptotic approximation regarding the magnetic-field-induced anisotropy: the spatial variation along the field lines is taken much slower than across them. Together, these assumptions result in a drastic reduction in the dimensionality of the linearized problem, which becomes a set of two nested one-dimensional problems: an integral equation in the poloidal variable, followed by a one-dimensional complex Schrödinger equation in the radial variable. We show here that the operator associated to the poloidal variable is meromorphic in the eigenparameter, the pulsation frequency. We also prove that, for all but a countable set of real pulsation frequencies, the operator is compact and thus behaves mostly as a finite-dimensional one. The numerical algorithms based on such ideas have been implemented in a companion paper [D. Coulette and N. Besse, "Numerical resolution of the global eigenvalue problem for gyrokinetic-waterbag model in toroidal geometry" (submitted)] and were found to be surprisingly close to those for the original gyrokinetic-Vlasov equations. The purpose of the present paper is to make these new ideas accessible to two readerships: applied mathematicians and plasma physicists.

  5. Crisis route to chaos in semiconductor lasers subjected to external optical feedback

    NASA Astrophysics Data System (ADS)

    Wishon, Michael J.; Locquet, Alexandre; Chang, C. Y.; Choi, D.; Citrin, D. S.

    2018-03-01

    Semiconductor lasers subjected to optical feedback have been intensively used as archetypical testbeds for high-speed (sub-ns) and high-dimensional nonlinear dynamics. By simultaneously extracting all the dynamical variables, we demonstrate that for larger current, the commonly named "quasiperiodic" route is in fact based on mixed external-cavity solutions that lock the oscillation frequency of the intensity, voltage, and separation in optical frequency through a mechanism involving successive rejections along the unstable manifold of an antimode. We show that chaos emerges from a crisis resulting from the inability to maintain locking as the unstable manifold becomes inaccessible.

  6. Behavioral Dynamics in Swimming: The Appropriate Use of Inertial Measurement Units.

    PubMed

    Guignard, Brice; Rouard, Annie; Chollet, Didier; Seifert, Ludovic

    2017-01-01

    Motor control in swimming can be analyzed using low- and high-order parameters of behavior. Low-order parameters generally refer to the superficial aspects of movement (i.e., position, velocity, acceleration), whereas high-order parameters capture the dynamics of movement coordination. To assess human aquatic behavior, both types have usually been investigated with multi-camera systems, as they offer high three-dimensional spatial accuracy. Research in ecological dynamics has shown that movement system variability can be viewed as a functional property of skilled performers, helping them adapt their movements to the surrounding constraints. Yet to determine the variability of swimming behavior, a large number of stroke cycles (i.e., inter-cyclic variability) has to be analyzed, which is impossible with camera-based systems as they simply record behaviors over restricted volumes of water. Inertial measurement units (IMUs) were designed to explore the parameters and variability of coordination dynamics. These light, transportable and easy-to-use devices offer new perspectives for swimming research because they can record low- to high-order behavioral parameters over long periods. We first review how the low-order behavioral parameters (i.e., speed, stroke length, stroke rate) of human aquatic locomotion and their variability can be assessed using IMUs. We then review the way high-order parameters are assessed and the adaptive role of movement and coordination variability in swimming. We give special focus to the circumstances in which determining the variability between stroke cycles provides insight into how behavior oscillates between stable and flexible states to functionally respond to environmental and task constraints. The last section of the review is dedicated to practical recommendations for coaches on using IMUs to monitor swimming performance. We therefore highlight the need for rigor in dealing with these sensors appropriately in water. We explain the fundamental and mandatory steps to follow for accurate results with IMUs, from data acquisition (e.g., waterproofing procedures) to interpretation (e.g., drift correction).

  7. Behavioral Dynamics in Swimming: The Appropriate Use of Inertial Measurement Units

    PubMed Central

    Guignard, Brice; Rouard, Annie; Chollet, Didier; Seifert, Ludovic

    2017-01-01

    Motor control in swimming can be analyzed using low- and high-order parameters of behavior. Low-order parameters generally refer to the superficial aspects of movement (i.e., position, velocity, acceleration), whereas high-order parameters capture the dynamics of movement coordination. To assess human aquatic behavior, both types have usually been investigated with multi-camera systems, as they offer high three-dimensional spatial accuracy. Research in ecological dynamics has shown that movement system variability can be viewed as a functional property of skilled performers, helping them adapt their movements to the surrounding constraints. Yet to determine the variability of swimming behavior, a large number of stroke cycles (i.e., inter-cyclic variability) has to be analyzed, which is impossible with camera-based systems as they simply record behaviors over restricted volumes of water. Inertial measurement units (IMUs) were designed to explore the parameters and variability of coordination dynamics. These light, transportable and easy-to-use devices offer new perspectives for swimming research because they can record low- to high-order behavioral parameters over long periods. We first review how the low-order behavioral parameters (i.e., speed, stroke length, stroke rate) of human aquatic locomotion and their variability can be assessed using IMUs. We then review the way high-order parameters are assessed and the adaptive role of movement and coordination variability in swimming. We give special focus to the circumstances in which determining the variability between stroke cycles provides insight into how behavior oscillates between stable and flexible states to functionally respond to environmental and task constraints. The last section of the review is dedicated to practical recommendations for coaches on using IMUs to monitor swimming performance. We therefore highlight the need for rigor in dealing with these sensors appropriately in water. We explain the fundamental and mandatory steps to follow for accurate results with IMUs, from data acquisition (e.g., waterproofing procedures) to interpretation (e.g., drift correction). PMID:28352243

  8. A one-dimensional model for gas-solid heat transfer in pneumatic conveying

    NASA Astrophysics Data System (ADS)

    Smajstrla, Kody Wayne

    A one-dimensional ODE model, reduced from a higher-dimensional two-fluid model, is developed to study dilute, two-phase (air and solid particles) flows with heat transfer in a horizontal pneumatic conveying pipe. Instead of using constant air properties (e.g., density, viscosity, thermal conductivity) evaluated at the initial flow temperature and pressure, this model uses an iterative approach to couple the air properties with the flow pressure and temperature. Multiple studies comparing the use of constant or variable air density, viscosity, and thermal conductivity are conducted to assess the impact of the changing properties on system performance. The results show that the fully constant-property calculation overestimates the results of the fully variable calculation by 11.4%, while the constant-density calculation with variable viscosity and thermal conductivity results in an 8.7% overestimation, the constant-viscosity calculation with variable density and thermal conductivity overestimates by 2.7%, and the constant-thermal-conductivity calculation with variable density and viscosity results in a 1.2% underestimation. These results demonstrate that gas properties varying with gas temperature can have a significant impact on a conveying system and that the varying density accounts for the majority of that impact.
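
    A sketch of the coupled-property idea under stated assumptions: air properties are re-evaluated from the local temperature and pressure using the ideal-gas law and Sutherland-type correlations (textbook constants, not values from the thesis), rather than being frozen at inlet conditions.

        R_AIR = 287.05  # specific gas constant of air, J/(kg K)

        def air_properties(T, p):
            """Temperature/pressure-dependent air properties: ideal-gas density
            plus Sutherland correlations for viscosity and thermal conductivity."""
            rho = p / (R_AIR * T)
            mu = 1.716e-5 * (T / 273.15) ** 1.5 * (273.15 + 110.4) / (T + 110.4)
            k = 0.0241 * (T / 273.15) ** 1.5 * (273.15 + 194.0) / (T + 194.0)
            return rho, mu, k

        # Hypothetical inlet vs. downstream states: density responds to both T
        # and p, while viscosity and conductivity respond to T only, consistent
        # with the finding that varying density accounts for most of the impact.
        print(air_properties(300.0, 101.3e3))
        print(air_properties(380.0, 85.0e3))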

  9. Three-Dimensional Flow of an Oldroyd-B Fluid with Variable Thermal Conductivity and Heat Generation/Absorption

    PubMed Central

    Shehzad, Sabir Ali; Alsaedi, Ahmed; Hayat, Tasawar; Alhuthali, M. Shahab

    2013-01-01

    This paper examines series solutions of a three-dimensional boundary layer flow. An Oldroyd-B fluid with variable thermal conductivity is considered. The flow is induced by the stretching of a surface. The analysis has been carried out in the presence of heat generation/absorption. Homotopy analysis is implemented in developing the series solutions to the governing flow and energy equations. Graphs are presented and discussed for various parameters of interest. A comparison of the present study with the existing limiting solution is shown and examined. PMID:24223780

  10. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    PubMed Central

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one- and two-dimensional test problems is carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
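
    As a scalar stand-in for the scheme described above (which is second order, with MUSCL reconstruction, Runge-Kutta stepping, and conserved-to-primitive recovery for the SRHD system, none of which appears here), the sketch below applies a first-order central-upwind step to the 1D Burgers equation, using the local propagation speeds to set the numerical flux.

        import numpy as np

        def f(u):                      # Burgers flux, f'(u) = u
            return 0.5 * u * u

        def central_upwind_step(u, dx, dt):
            uL, uR = u, np.roll(u, -1)                         # periodic states
            ap = np.maximum.reduce([uL, uR, np.zeros_like(u)])  # a+ >= 0
            am = np.minimum.reduce([uL, uR, np.zeros_like(u)])  # a- <= 0
            denom = np.where(ap - am > 0, ap - am, 1.0)
            flux = (ap * f(uL) - am * f(uR) + ap * am * (uR - uL)) / denom
            flux = np.where(ap - am > 0, flux, 0.0)  # both speeds zero: no flux
            return u - dt / dx * (flux - np.roll(flux, 1))

        n = 400
        x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        u = 1.0 + 0.5 * np.sin(x)
        dx = x[1] - x[0]
        for _ in range(600):
            dt = 0.4 * dx / np.abs(u).max()          # CFL condition
            u = central_upwind_step(u, dx, dt)
        print(u.min(), u.max())  # wave steepens into a shock, no oscillations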

  11. Multi-sensor Oceanographic Correlations for Pacific Hake Acoustic Survey Improvement

    NASA Astrophysics Data System (ADS)

    Brozen, M.; Hillyer, N.; Holt, B.; Armstrong, E. M.

    2010-12-01

    North Pacific hake (Merluccius productus), the most abundant groundfish along the Pacific coast of northwestern America, are an essential source of income for the coastal region from southern California to British Columbia, Canada. However, hake abundance and distribution are highly variable among years, exhibiting variance in both the north-south and east-west distribution as seen in the results from biannual acoustic surveys. This project is part of a larger undertaking, ultimately focused on the prediction of hake distribution to improve the distribution of survey effort and precision of stock assessments in the future. Four remotely sensed oceanographic variables are examined as a first step in improving our understanding of the relationship between the intensity of coastal upwelling and other ocean dynamics, and the north-south summer hake distribution. Sea surface height, wind vectors, chlorophyll-a concentrations, and sea surface temperature were acquired from several satellites, including AVHRR, SeaWiFS, TOPEX/Poseidon, Jason-1, Jason-2, SSM/I, AMSR-E, and QuikScat. Data were aligned to the same spatial and temporal resolution, and these re-gridded data were then analyzed using empirical orthogonal functions (EOFs). EOFs were used as a spatio-temporally compact representation of the data and to reduce the co-variability of the multiple time series in the dataset. The EOF results were plotted and acoustic survey results were overlaid to understand differences between regions. Although this pilot project used data from only a single year (2007), it demonstrated a methodology for reducing dimensionality of linearly related satellite variables that can be used in future applications, and provided insight into multi-dimensional ocean characteristics important for hake distribution.
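
    A minimal sketch of the EOF computation via singular value decomposition of the space-time anomaly matrix, with a random array standing in for one re-gridded satellite variable (e.g., weekly SST maps); the real study combined several variables on a common grid.

        import numpy as np

        rng = np.random.default_rng(0)
        field = rng.normal(size=(52, 1500))  # (n_times, n_gridpoints) placeholder

        anomaly = field - field.mean(axis=0)          # remove the time mean
        U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

        eofs = Vt                                     # spatial patterns (modes)
        pcs = U * s                                   # PC time series
        var_frac = s**2 / np.sum(s**2)                # explained variance per mode
        print("leading modes explain:", var_frac[:3].round(3))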

  12. Object Based Numerical Zooming Between the NPSS Version 1 and a 1-Dimensional Meanline High Pressure Compressor Design Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, G.; Naiman, C.; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional to 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial, or circumferential resolution within a given component (e.g. a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables Numerical Zooming between the NPSS Version 1 (0-dimensional) and higher order 1-, 2- and 3-dimensional analysis codes. The NPSS Version 1 preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to an NPSS Version 1 (0-dimensional) engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.

  13. Visual analytics of large multidimensional data using variable binned scatter plots

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Sharma, Ratnesh K.; Keim, Daniel A.; Janetzko, Halldór

    2010-01-01

    The scatter plot is a well-known method of visualizing pairs of continuous variables, and multidimensional data can be depicted in a scatter plot matrix. Scatter plots are intuitive and easy to use, but often have a high degree of overlap that may occlude a significant portion of the data. In this paper, we propose variable binned scatter plots to allow the visualization of large amounts of data without overlap. The basic idea is to use a non-uniform (variable) binning of the x and y dimensions and to plot all the data points that fall within each bin into corresponding squares. Further, we map a third attribute to color for visualizing clusters. Analysts are able to interact with individual data points for record-level information. We have applied these techniques to solve real-world problems in credit card fraud and data center energy consumption, visualizing their data distributions and cause-effect relations among multiple attributes. A comparison of our methods with two recent well-known variants of scatter plots is included.
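
    A minimal sketch of the variable-binning idea follows: quantile-based bin edges follow the marginal distributions of x and y, so dense regions get narrow bins and sparse regions wide ones. One simplification to note: the count per bin is mapped to color here, whereas the paper plots the individual points inside each bin's square. Function and parameter names are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def variable_binned_scatter(x, y, n_bins=20):
    """Scatter-plot surrogate with variable (quantile-based) bins.

    Bin edges follow the marginal distributions of x and y, so no bin
    is starved or overcrowded; the count in each bin is mapped to color
    instead of overplotting individual points.
    """
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    x_edges = np.quantile(x, qs)   # non-uniform edges in x
    y_edges = np.quantile(y, qs)   # non-uniform edges in y
    counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    plt.pcolormesh(x_edges, y_edges, counts.T, cmap="viridis")
    plt.colorbar(label="points per bin")
    plt.show()

rng = np.random.default_rng(0)
variable_binned_scatter(rng.lognormal(size=10_000), rng.normal(size=10_000))
```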

  14. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
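
    For reference, the classical spectral representation method that the unified formulation builds on can be sketched in a few lines: a stationary Gaussian sample path is a sum of cosines with amplitudes set by the power spectral density and independent random phases. The sketch below is the plain SRM with a full set of random phases; it does not reproduce the paper's dimension-reduction step, in which random functions constrain these variables to a few elementary ones. The example spectrum is an illustrative stand-in for a wind-velocity PSD.

```python
import numpy as np

def srm_sample(S, w_max, n_freq, t):
    """One sample path of a zero-mean stationary Gaussian process via the
    spectral representation method (SRM).

    S      : one-sided power spectral density function S(w)
    w_max  : cutoff frequency (rad/s)
    n_freq : number of frequency components
    t      : array of time instants (s)
    """
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw                  # midpoint frequencies
    phi = np.random.uniform(0.0, 2.0 * np.pi, n_freq)   # i.i.d. random phases
    amp = np.sqrt(2.0 * S(w) * dw)   # variance of the sum ~ integral of S
    return (amp * np.cos(np.outer(t, w) + phi)).sum(axis=1)

# illustrative low-pass spectrum standing in for a wind-velocity PSD
S = lambda w: 1.0 / (1.0 + (w / 5.0) ** 4)
t = np.linspace(0.0, 10.0, 2000)
x = srm_sample(S, w_max=50.0, n_freq=512, t=t)
```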

  15. Well-balanced high-order solver for blood flow in networks of vessels with variable properties.

    PubMed

    Müller, Lucas O; Toro, Eleuterio F

    2013-12-01

    We present a well-balanced, high-order non-linear numerical scheme for solving a hyperbolic system that models one-dimensional flow in blood vessels with variable mechanical and geometrical properties along their length. Using a suitable set of test problems with exact solution, we rigorously assess the performance of the scheme. In particular, we assess the well-balanced property and the effective order of accuracy through an empirical convergence rate study. Schemes of up to fifth order of accuracy in both space and time are implemented and assessed. The numerical methodology is then extended to realistic networks of elastic vessels and is validated against published state-of-the-art numerical solutions and experimental measurements. It is envisaged that the present scheme will constitute the building block for a closed, global model for the human circulation system involving arteries, veins, capillaries and cerebrospinal fluid. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID:25265627

  17. Integrative Exploratory Analysis of Two or More Genomic Datasets.

    PubMed

    Meng, Chen; Culhane, Aedin

    2016-01-01

    Exploratory analysis is an essential step in the analysis of high throughput data. Multivariate approaches such as correspondence analysis (CA), principal component analysis, and multidimensional scaling are widely used in the exploratory analysis of a single dataset. Modern biological studies often assay multiple types of biological molecules (e.g., mRNA, protein, phosphoproteins) on the same set of biological samples, thereby creating multiple different types of omics data or multiassay data. Integrative exploratory analysis of these multiple omics data is required to leverage the potential of multiple omics studies. In this chapter, we describe the application of co-inertia analysis (CIA; for analyzing two datasets) and multiple co-inertia analysis (MCIA; for three or more datasets) to address this problem. These methods are powerful yet simple multivariate approaches that represent samples using a lower number of variables, allowing easier identification of the correlated structure in and between multiple high dimensional datasets. Graphical representations can be employed for this purpose. In addition, the methods simultaneously project samples and variables (genes, proteins) onto the same lower dimensional space, so the most variant variables from each dataset can be selected and associated with samples, which can be further used to facilitate biological interpretation and pathway analysis. We applied CIA to explore the concordance between mRNA and protein expression in a panel of 60 tumor cell lines from the National Cancer Institute. In the same 60 cell lines, we used MCIA to perform a cross-platform comparison of mRNA gene expression profiles obtained on four different microarray platforms. Last, as an example of integrative analysis of multiassay or multi-omics data, we analyzed transcriptomic, proteomic, and phosphoproteomic data from pluripotent (iPS) and embryonic stem (ES) cell lines.
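
    The core of co-inertia analysis can be sketched as a singular value decomposition of the cross-covariance between two column-centered data tables sharing the same samples; the leading singular vectors give paired loadings whose sample scores have maximal squared covariance. This is a bare-bones sketch (full CIA first weights each table by its own ordination, which is omitted here), and the matrix sizes only loosely mirror the 60-cell-line example.

```python
import numpy as np

def coinertia(X, Y, n_axes=2):
    """Bare-bones co-inertia sketch for two omics matrices whose rows are
    the same biological samples."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # SVD of the cross-covariance finds paired variable loadings (one set
    # per dataset) whose sample scores have maximal squared covariance
    U, s, Vt = np.linalg.svd(Xc.T @ Yc / len(X), full_matrices=False)
    x_scores = Xc @ U[:, :n_axes]    # sample coordinates from dataset 1
    y_scores = Yc @ Vt[:n_axes].T    # sample coordinates from dataset 2
    return x_scores, y_scores, U[:, :n_axes], Vt[:n_axes].T

# e.g., mRNA (60 cell lines x 2000 genes) vs protein (60 x 500 proteins)
mrna, prot = np.random.rand(60, 2000), np.random.rand(60, 500)
sx, sy, gene_loadings, protein_loadings = coinertia(mrna, prot)
```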

  18. Three-variable solution in the (2+1)-dimensional null-surface formulation

    NASA Astrophysics Data System (ADS)

    Harriott, Tina A.; Williams, J. G.

    2018-04-01

    The null-surface formulation of general relativity (NSF) describes gravity by using families of null surfaces instead of a spacetime metric. Despite the fact that the NSF is (to within a conformal factor) equivalent to general relativity, the equations of the NSF are exceptionally difficult to solve, even in 2+1 dimensions. The present paper gives the first exact (2+1)-dimensional solution that depends nontrivially upon all three of the NSF's intrinsic spacetime variables. The metric derived from this solution is shown to represent a spacetime whose source is a massless scalar field that satisfies the general relativistic wave equation and the Einstein equations with minimal coupling. The spacetime is identified as one of a family of (2+1)-dimensional general relativistic spacetimes discovered by Cavaglià.

  19. Use of shape-from-shading to characterize mucosal topography in celiac disease videocapsule images

    PubMed Central

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2017-01-01

    AIM To use a computerized shape-from-shading technique to characterize the topography of the small intestinal mucosa. METHODS Videoclips comprised of 100-200 images each were obtained from the distal duodenum in 8 celiac and 8 control patients. Images with high texture were selected from each videoclip and projected from two to three dimensions by using grayscale pixel brightness as the Z-axis spatial variable. The resulting images for celiac patients were then ordered using the Marsh score to estimate the degree of villous atrophy, and compared with control data. RESULTS Topographic changes in celiac patient three-dimensional constructs were often more variable as compared to controls. The mean absolute derivative in elevation was 2.34 ± 0.35 brightness units for celiacs vs 1.95 ± 0.28 for controls (P = 0.014). The standard deviation of the derivative in elevation was 4.87 ± 0.35 brightness units for celiacs vs 4.47 ± 0.36 for controls (P = 0.023). Celiac patients with Marsh IIIC villous atrophy tended to have the largest topographic changes. Plotted in two dimensions, celiac data could be separated from controls with 80% sensitivity and specificity. CONCLUSION Use of shape-from-shading to construct three-dimensional projections approximating the actual spatial geometry of the small intestinal substrate is useful to observe features not readily apparent in two-dimensional videocapsule images. This method represents a potentially helpful adjunct to detect areas of pathology during videocapsule analysis. PMID:28744343
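
    A minimal sketch of the elevation-statistics step is shown below: grayscale brightness is treated as the Z-axis elevation, and the topography is summarized by the mean and standard deviation of the absolute brightness change between neighboring pixels, in the spirit of the derivative measures reported above. The frame size and the exact derivative definition are assumptions, not the authors' implementation.

```python
import numpy as np

def elevation_statistics(image):
    """Treat grayscale brightness as surface elevation (Z axis) and
    summarize the topography by derivative statistics.

    image : 2-D array of grayscale pixel brightness values.
    Returns the mean and standard deviation of the absolute change in
    elevation between neighboring pixels, in brightness units.
    """
    dz_rows = np.abs(np.diff(image, axis=0))   # elevation change down columns
    dz_cols = np.abs(np.diff(image, axis=1))   # elevation change along rows
    dz = np.concatenate([dz_rows.ravel(), dz_cols.ravel()])
    return dz.mean(), dz.std()

# placeholder frame standing in for a videocapsule image
frame = np.random.randint(0, 256, size=(576, 576)).astype(float)
mean_abs_deriv, std_deriv = elevation_statistics(frame)
```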

  20. Nonlinear static and dynamic analysis of beam structures using fully intrinsic equations

    NASA Astrophysics Data System (ADS)

    Sotoudeh, Zahra

    2011-07-01

    Beams are structural members with one dimension much larger than the other two. Examples of beams include propeller blades, helicopter rotor blades, and high aspect-ratio aircraft wings in aerospace engineering; shafts and wind turbine blades in mechanical engineering; towers, highways and bridges in civil engineering; and DNA modeling in biomedical engineering. Beam analysis includes two sets of equations: a generally linear two-dimensional problem over the cross-sectional plane and a nonlinear, global one-dimensional analysis. This research work deals with a relatively new set of equations for one-dimensional beam analysis, namely the so-called fully intrinsic equations. Fully intrinsic equations comprise a set of geometrically exact, nonlinear, first-order partial differential equations that is suitable for analyzing initially curved and twisted anisotropic beams. A fully intrinsic formulation is devoid of displacement and rotation variables, making it especially attractive because of the absence of singularities, infinite-degree nonlinearities, and other undesirable features associated with finite rotation variables. In spite of the advantages of these equations, using them with certain boundary conditions presents significant challenges. This research work will take a broad look at these challenges of modeling various boundary conditions when using the fully intrinsic equations. Hopefully it will clear the path for wider and easier use of the fully intrinsic equations in future research. This work also includes the application of fully intrinsic equations to the structural analysis of joined-wing aircraft, different rotor blade configurations, and LCO analysis of HALE aircraft.

  1. Variable range hopping electric and thermoelectric transport in anisotropic black phosphorus

    DOE PAGES

    Liu, Huili; Sung Choe, Hwan; Chen, Yabin; ...

    2017-09-05

    Black phosphorus (BP) is a layered semiconductor with a high mobility of up to ~1000 cm^2 V^-1 s^-1 and a narrow bandgap of ~0.3 eV, and shows potential applications in thermoelectrics. In stark contrast to most other layered materials, electrical and thermoelectric properties in the basal plane of BP are highly anisotropic. In order to elucidate the mechanism for such anisotropy, we fabricated BP nanoribbons (~100 nm thick) along the armchair and zigzag directions, and measured the transport properties. It is found that both the electrical conductivity and Seebeck coefficient increase with temperature, a behavior contradictory to that of traditional semiconductors. The three-dimensional variable range hopping model is adopted to analyze this abnormal temperature dependence of electrical conductivity and Seebeck coefficient. Furthermore, the hopping transport of the BP nanoribbons, attributed to a high density of trap states in the samples, provides a fundamental understanding of the anisotropic BP for potential thermoelectric applications.
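
    For context, the three-dimensional Mott variable range hopping model referenced above predicts sigma(T) = sigma0 * exp[-(T0/T)^(1/4)], so ln(sigma) is linear in T^(-1/4). The sketch below fits that line to synthetic conductivity data (the temperatures and parameter values are illustrative, not the paper's measurements) to recover the characteristic temperature T0.

```python
import numpy as np

# 3-D Mott variable-range-hopping law:
#   sigma(T) = sigma0 * exp(-(T0 / T)**(1/4))
# so ln(sigma) is linear in T**(-1/4); a straight-line fit therefore
# tests the 3D-VRH picture and yields the characteristic temperature T0.
T = np.array([100.0, 130.0, 160.0, 200.0, 250.0, 300.0])   # K
sigma = 2.0e3 * np.exp(-(5.0e4 / T) ** 0.25)               # S/m, synthetic

slope, intercept = np.polyfit(T ** -0.25, np.log(sigma), 1)
T0 = slope ** 4            # slope = -T0**(1/4), so T0 = slope**4
sigma0 = np.exp(intercept)
```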

  3. Mapping coastal sea level at high resolution with radar interferometry: the SWOT Mission

    NASA Astrophysics Data System (ADS)

    Fu, L. L.; Chao, Y.; Laignel, B.; Turki, I., Sr.

    2017-12-01

    The spatial resolution of the present constellation of radar altimeters in mapping two-dimensional sea surface height (SSH) variability is approaching 100 km (in wavelength). At scales shorter than 100 km, eddies and fronts are responsible for the stirring and mixing of the ocean, which are especially important in various coastal processes. A mission currently in development will make high-resolution measurements of the height of water over the ocean as well as on land. It is called Surface Water and Ocean Topography (SWOT), a joint mission of the US NASA and the French CNES, with contributions from Canada and the UK. SWOT will carry a pair of interferometric radars and make 2-dimensional SSH measurements over a swath of 120 km with a nadir gap of 20 km in a 21-day repeat orbit. The synthetic aperture radar of SWOT will make SSH measurements at an extremely high resolution of 10-70 m. SWOT will also carry a nadir-looking conventional altimeter and make 1-dimensional SSH measurements along the nadir gap. The temporal sampling varies from 2 repeats per 21 days at the equator to more than 4 repeats at mid latitudes and more than 6 at high latitudes. This new mission will allow a continuum of fine-scale observations from the open ocean to the coasts, estuaries, and rivers, allowing us to investigate a number of scientific and technical questions in the coastal and estuarine domain and to assess the coastal impacts of regional sea level change, such as the interaction of sea level with river flow, estuary inundation, storm surge, coastal wetlands, salt water intrusion, etc. As examples, we will illustrate the potential impact of SWOT on studies of the San Francisco Bay Delta and the Seine River estuary. Preliminary results suggest that the SWOT mission will provide fundamental data to map the spatial variability of water surface elevations under different hydrodynamic conditions and at different scales (local, regional, and global), improving our knowledge of the complex physical processes in the coastal and estuarine systems in response to global sea level changes.

  4. Classification of high dimensional multispectral image data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1993-01-01

    A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for materials without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada in 1992, and the results are compared to an existing geologic map. The technique performed well despite noisy data and the fact that some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data classified. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method, as compared to the geology of the Cuprite scene.

  5. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.

  6. Application of a laser interferometer skin-friction meter in complex flows

    NASA Technical Reports Server (NTRS)

    Monson, D. J.; Driver, D. M.; Szodruch, J.

    1981-01-01

    The application of a nonintrusive laser-interferometer skin-friction meter, which measures skin friction with a remotely located laser interferometer that monitors the thickness change of a thin oil film, is extended both experimentally and theoretically to several complex wind-tunnel flows. These include two-dimensional separated and reattached subsonic flows with large pressure and shear gradients, and two- and three-dimensional supersonic flows at high Reynolds number, including variable wall temperatures and cross-flows. In addition, it is found that the instrument can provide an accurate location of the mean reattachment length for separated flows. Results show that levels up to 120 N/sq m, or 40 times higher than in previous tests, can be obtained, despite some limits to the method at very high skin-friction levels. It is concluded that these results establish the utility of this instrument for measuring skin friction in a wide variety of flows of interest in aerodynamic testing.

  7. Heterogeneity of fluvial-deltaic reservoirs in the Appalachian basin: A case study from a Lower Mississippian oil field in central West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohn, M.E.; McDowell, R.R.; Matchen, D.L.

    1997-06-01

    Since discovery in 1924, Granny Creek field in central West Virginia has experienced several periods of renewed drilling for oil in a fluvial-deltaic sandstone in the Lower Mississippian Price Formation. Depositional and diagenetic features leading to reservoir heterogeneity include highly variable grain size, thin shale and siltstone beds, and zones containing large quantities of calcite, siderite, or quartz cement. Electrofacies defined through cluster analysis of wireline log responses corresponded approximately to facies observed in core. Three-dimensional models of porosity computed from density logs showed that zones of relatively high porosity were discontinuous across the field. The regression of core permeability on core porosity is statistically significant, and differs for each electrofacies. Zones of high permeability estimated from porosity and electrofacies tend to be discontinuous and aligned roughly north-south. Cumulative oil production varies considerably between adjacent wells, and corresponds very poorly with trends in porosity and permeability. Original oil in place, estimated for each well from reservoir thickness, porosity, water saturation, and an assumed value for drainage radius, is highly variable in the southern part of the field, which is characterized by relatively complex interfingering of electrofacies and similar variability in porosity and permeability.

  8. A THREE-DIMENSIONAL AIR FLOW MODEL FOR SOIL VENTING: SUPERPOSITION OF ANALYTICAL FUNCTIONS

    EPA Science Inventory

    A three-dimensional computer model was developed for the simulation of the soil-air pressure distribution at steady state and specific discharge vectors during soil venting with multiple wells in unsaturated soil. The Kirchhoff transformation of dependent variables and coordinate...

  9. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    NASA Astrophysics Data System (ADS)

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, thus leading to an underdetermined linear system that is hard to uniquely recover. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of such B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results is within 5.15%.
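
    The B-spline reparameterization described above can be sketched as follows: build a design matrix B whose columns are B-spline basis functions, so the unknown spectrum x is replaced by a much shorter coefficient vector c with x = B c, and solve the resulting smaller least-squares problem. In this sketch a simple ridge term stands in for the paper's spatial smoothness regularization, and the measurement matrix, knot layout, and sizes are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(wavelengths, n_coef, degree=3):
    """Design matrix whose columns are B-spline basis functions, so a
    spectrum is re-parameterized by n_coef coefficients instead of one
    unknown per spectral sample."""
    # clamped, uniform knot vector over the sampled wavelength range
    n_knots = n_coef + degree + 1
    inner = np.linspace(wavelengths[0], wavelengths[-1], n_knots - 2 * degree)
    knots = np.concatenate([[wavelengths[0]] * degree, inner,
                            [wavelengths[-1]] * degree])
    B = np.empty((len(wavelengths), n_coef))
    for j in range(n_coef):
        c = np.zeros(n_coef)
        c[j] = 1.0
        B[:, j] = BSpline(knots, c, degree)(wavelengths)
    return B

# Underdetermined measurement y = A x becomes y = (A @ B) c with far
# fewer unknowns c; a ridge term stands in for the smoothness penalty.
rng = np.random.default_rng(1)
wl = np.linspace(400.0, 700.0, 200)
A = rng.normal(size=(50, 200))                 # 50 readings of 200 unknowns
x_true = np.exp(-((wl - 550.0) / 40.0) ** 2)   # synthetic spectrum
y = A @ x_true
B = bspline_basis(wl, n_coef=15)
M = A @ B
lam = 1e-3
c = np.linalg.solve(M.T @ M + lam * np.eye(15), M.T @ y)
x_rec = B @ c                                  # recovered spectrum
```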

  10. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.

    PubMed

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-14

    Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows one to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics.
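
    A minimal sketch of the embedding step, using scikit-learn's Isomap in place of the authors' implementation: each conformation is a point in 72-dimensional coordinate space, the fitted model supplies three embedding coordinates that could serve as collective variables, and transform() maps out-of-sample conformations into the same 3-D space. The conformation arrays here are random placeholders.

```python
import numpy as np
from sklearn.manifold import Isomap

# Placeholder ensemble of ring conformations: each row holds the 72
# Cartesian coordinates (24 atoms x 3) of one ad hoc generated structure.
conformations = np.random.rand(2000, 72)

# Nonlinear dimensionality reduction from 72-D coordinate space to a
# 3-D embedding; the embedding coordinates play the role of the
# collective variables used to bias the metadynamics run.
embedding = Isomap(n_neighbors=12, n_components=3)
cv = embedding.fit_transform(conformations)

# Out-of-sample conformations encountered during the simulation are
# mapped into the same 3-D space with transform():
new_cv = embedding.transform(np.random.rand(10, 72))
```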

  11. A Three-Dimensional Finite-Element Model for Simulating Water Flow in Variably Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.

    1986-12-01

    A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.

  12. Autocorrelation structure of convective rainfall in semiarid-arid climate derived from high-resolution X-Band radar estimates

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2018-02-01

    Small scale rainfall variability is a key factor driving runoff response in fast responding systems, such as mountainous, urban and arid catchments. In this paper, the spatial-temporal autocorrelation structure of convective rainfall is derived with extremely high resolutions (60 m, 1 min) using estimates from an X-Band weather radar recently installed in a semiarid-arid area. The 2-dimensional spatial autocorrelation of convective rainfall fields and the temporal autocorrelation of point-wise and distributed rainfall fields are examined. The autocorrelation structures are characterized by spatial anisotropy, correlation distances 1.5-2.8 km and rarely exceeding 5 km, and time-correlation distances 1.8-6.4 min and rarely exceeding 10 min. The observed spatial variability is expected to negatively affect estimates from rain gauges and microwave links rather than satellite and C-/S-Band radars; conversely, the temporal variability is expected to negatively affect remote sensing estimates rather than rain gauges. The presented results provide quantitative information for stochastic weather generators, cloud-resolving models, dryland hydrologic and agricultural models, and multi-sensor merging techniques.
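
    The two-dimensional spatial autocorrelation used above can be estimated efficiently via the Wiener-Khinchin theorem: Fourier-transform the mean-removed field, take the squared magnitude, and invert. The sketch below is a minimal (circular, i.e., periodic-boundary) version with an illustrative gamma-distributed stand-in for a radar scan; correlation distances would then be read off the normalized ACF.

```python
import numpy as np

def spatial_autocorrelation(field):
    """Two-dimensional autocorrelation of a rainfall field via the
    Wiener-Khinchin theorem (FFT, squared magnitude, inverse FFT),
    normalized so the zero-lag value is 1. Periodic boundaries are
    implied by the FFT, so this is a circular estimate."""
    anomaly = field - field.mean()
    spectrum = np.fft.fft2(anomaly)
    acf = np.fft.ifft2(spectrum * np.conj(spectrum)).real
    acf /= acf.flat[0]                 # normalize by the zero-lag value
    return np.fft.fftshift(acf)        # put zero lag at the center

# e.g., one radar scan on a 256 x 256 grid of 60 m pixels
rain = np.random.gamma(shape=2.0, scale=1.0, size=(256, 256))
acf = spatial_autocorrelation(rain)
```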

  13. A new approach to fluid-structure interaction within graphics hardware accelerated smooth particle hydrodynamics considering heterogeneous particle size distribution

    NASA Astrophysics Data System (ADS)

    Eghtesad, Adnan; Knezevic, Marko

    2018-07-01

    A corrective smooth particle method (CSPM) within smooth particle hydrodynamics (SPH) is used to study the deformation of an aircraft structure under high-velocity water-ditching impact load. The CSPM-SPH method features a new approach for the prediction of two-way fluid-structure interaction coupling. Results indicate that the implementation is well suited for modeling the deformation of structures under high-velocity impact into water as evident from the predicted stress and strain localizations in the aircraft structure as well as the integrity of the impacted interfaces, which show no artificial particle penetrations. To reduce the simulation time, a heterogeneous particle size distribution over a complex three-dimensional geometry is used. The variable particle size is achieved from a finite element mesh with variable element size and, as a result, variable nodal (i.e., SPH particle) spacing. To further accelerate the simulations, the SPH code is ported to a graphics processing unit using the OpenACC standard. The implementation and simulation results are described and discussed in this paper.

  15. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability from model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  16. Separation of variables in Maxwell equations in Plebański-Demiański spacetime

    NASA Astrophysics Data System (ADS)

    Frolov, Valeri P.; Krtouš, Pavel; Kubizňák, David

    2018-05-01

    A new method for separating variables in the Maxwell equations in four- and higher-dimensional Kerr-(A)dS spacetimes proposed recently by Lunin is generalized to any off-shell metric that admits a principal Killing-Yano tensor. The key observation is that Lunin's ansatz for the vector potential can be formulated in a covariant form—in terms of the principal tensor. In particular, focusing on the four-dimensional case we demonstrate separability of Maxwell's equations in the Kerr-NUT-(A)dS and the Plebański-Demiański family of spacetimes. The new method of separation of variables is quite different from the standard approach based on the Newman-Penrose formalism.

  17. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2015-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  18. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  19. Reduced-Order Models Based on POD-TPWL for Compositional Subsurface Flow Simulation

    NASA Astrophysics Data System (ADS)

    Durlofsky, L. J.; He, J.; Jin, L. Z.

    2014-12-01

    A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
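
    The POD half of the procedure can be sketched compactly: collect snapshot states from the training runs, take the leading left singular vectors as the projection basis, and project the saved TPWL linearizations onto it so each time step requires only a small linear solve. All sizes and the placeholder Jacobian below are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

# Snapshot matrix from 'training' runs: each column is one saved
# high-dimensional state (e.g., pressures and overall mole fractions
# at all grid blocks); sizes here are illustrative placeholders.
n_state, n_snap = 2000, 60
snapshots = np.random.rand(n_state, n_snap)

# POD basis: leading left singular vectors of the centered snapshots
X = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # keep 99.99% of the "energy"
Phi = U[:, :r]                                 # n_state x r projection basis

# TPWL linearizes around a saved state x_i with a saved Jacobian A_i:
#   x_{k+1} ~ x_i + A_i (x_k - x_i);  projecting with Phi reduces the
# n_state x n_state solve to an r x r one in z = Phi^T (x - x_i).
A_i = np.random.rand(n_state, n_state) * 1e-4  # placeholder Jacobian
A_r = Phi.T @ A_i @ Phi                        # r x r reduced operator
```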

  20. A latent class distance association model for cross-classified data with a categorical response variable.

    PubMed

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.

  1. Emotional Variability and Clarity in Depression and Social Anxiety

    PubMed Central

    Thompson, Renee J.; Boden, Matthew Tyler; Gotlib, Ian H.

    2016-01-01

    Recent research has underscored the importance of elucidating specific patterns of emotion that characterize mental disorders. We examined two emotion traits, emotional variability and emotional clarity, in relation to both categorical (diagnostic interview) and dimensional (self-report) measures of Major Depressive Disorder (MDD) and Social Anxiety Disorder (SAD) in women diagnosed with MDD only (n=35), SAD only (n=31), MDD and SAD (n=26), or no psychiatric disorder (n=38). Results of the categorical analyses suggest that elevated emotional variability and diminished emotional clarity are transdiagnostic of MDD and SAD. More specifically, emotional variability was elevated for MDD and SAD diagnoses compared to no diagnosis, showing an additive effect for co-occurring MDD and SAD. Similarly diminished levels of emotional clarity characterized all three clinical groups compared to the healthy control group. Dimensional findings suggest that whereas emotional variability is associated more consistently with depression than with social anxiety, emotional clarity is associated more consistently with social anxiety than with depression. Results are interpreted using a threshold- and dose-response framework. PMID:26371579

  2. Batch-mode Reinforcement Learning for improved hydro-environmental systems management

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.

    2010-12-01

    Despite the great progress made in recent decades, the optimal management of hydro-environmental systems remains a very active and challenging research area. The combination of multiple, often conflicting interests, strong non-linearities in the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto) optimal management policies while preserving the original problem complexity. However, it suffers from a dual curse which, de facto, prevents its practical application to even reasonably complex water systems. (i) The computational requirement grows exponentially with state and control dimension (Bellman's curse of dimensionality), so that SDP cannot be used with water systems where the state vector includes more than a few (2-3) units. (ii) An explicit model of each system component is required (curse of modelling) to anticipate the effects of the system transitions, i.e., any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with an associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity and the associated computational requirements of non-linear, distributed, process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
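
    The abstract does not name a specific bRL algorithm; a standard choice for batch-mode RL is fitted Q-iteration, which learns Q(s, a) from a fixed batch of one-step transitions by repeated supervised regression. The sketch below uses an extremely-randomized-trees regressor as the non-linear function approximator; the toy storage/release batch at the end is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, n_actions, n_iter=30, gamma=0.99):
    """Batch-mode RL sketch (fitted Q-iteration): learn Q(s, a) from a
    fixed set of one-step transitions (s, a, r, s') with no explicit
    model of the water system; historical or simulated trajectories
    replace the dynamic model required by SDP."""
    s, a, r, s2 = transitions   # states, actions, rewards, next states
    X = np.column_stack([s, a])
    q = None
    for _ in range(n_iter):
        if q is None:
            target = r          # first iteration: Q_1 = immediate reward
        else:
            # max over the discrete action set at the successor states
            q_next = np.column_stack([
                q.predict(np.column_stack([s2, np.full(len(s2), u)]))
                for u in range(n_actions)])
            target = r + gamma * q_next.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50).fit(X, target)
    return q

# toy batch: 1-D storage state, 3 release decisions, random rewards
rng = np.random.default_rng(0)
n = 2000
s = rng.uniform(0.0, 1.0, (n, 1))
a = rng.integers(0, 3, n)
r = rng.normal(size=n)
s2 = np.clip(s + rng.normal(0.0, 0.1, (n, 1)), 0.0, 1.0)
Q = fitted_q_iteration((s, a, r, s2), n_actions=3)
```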

  3. Applications of an exponential finite difference technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.; Keith, T.G. Jr.

    1988-07-01

    An exponential finite difference scheme first presented by Bhattacharya for one-dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one-dimensional cylindrical coordinates and was applied to two- and three-dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one- and two-dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available, or to results obtained by other numerical methods.
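
    One plausible form of the exponential update for the one-dimensional heat equation u_t = alpha * u_xx multiplies each nodal value by an exponential of the discrete Laplacian, recovering the classical explicit (FTCS) scheme when the exponent is small, since exp(x) ≈ 1 + x. The sketch below is hedged: consult Bhattacharya's papers for the exact scheme and its stability limits, and all physical values here are illustrative.

```python
import numpy as np

# A plausible exponential finite-difference update for u_t = alpha*u_xx
# (a sketch, not necessarily Bhattacharya's exact scheme):
#   u_i^{n+1} = u_i^n * exp[ Fo * (u_{i-1} - 2 u_i + u_{i+1}) / u_i ]
# For small exponents this reduces to the classical FTCS update.
alpha, dx, dt = 1.0e-4, 0.01, 0.2
Fo = alpha * dt / dx ** 2           # grid Fourier number (0.2 here)

u = np.full(101, 300.0)             # initial temperature field, K
u[0], u[-1] = 400.0, 350.0          # fixed-temperature boundaries
for _ in range(1000):
    lap = u[:-2] - 2.0 * u[1:-1] + u[2:]          # discrete Laplacian
    u[1:-1] = u[1:-1] * np.exp(Fo * lap / u[1:-1])
```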

  4. Impact of interannual variability (1979-1986) of transport and temperature on ozone as computed using a two-dimensional photochemical model

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Douglass, Anne R.; Chandra, Sushil; Stolarski, Richard S.; Rosenfield, Joan E.; Kaye, Jack A.

    1991-01-01

    Values of the monthly mean heating rates and the residual circulation characteristics were calculated using NMC data for temperature and the solar backscattered UV ozone for the period between 1979 and 1986. The results were used in a two-dimensional photochemical model in order to examine the effects of temperature and residual circulation on the interannual variability of ozone. It was found that the calculated total ozone was more sensitive to variations in interannual residual circulation than in the interannual temperature. The magnitude of the modeled ozone variability was found to be similar to the observed variability, but the observed and modeled year-to-year deviations were, for the most part, uncorrelated, due to the fact that the model did not account for most of the QBO forcing and for some of the observed tropospheric changes.

  5. Multivariate Analysis of Genotype-Phenotype Association.

    PubMed

    Mitteroecker, Philipp; Cheverud, James M; Pavlicev, Mihaela

    2016-04-01

    With the advent of modern imaging and measurement technology, complex phenotypes are increasingly represented by large numbers of measurements, which may not bear biological meaning one by one. For such multivariate phenotypes, studying the pairwise associations between all measurements and all alleles is highly inefficient and prevents insight into the genetic pattern underlying the observed phenotypes. We present a new method for identifying patterns of allelic variation (genetic latent variables) that are maximally associated (in terms of effect size) with patterns of phenotypic variation (phenotypic latent variables). This multivariate genotype-phenotype mapping (MGP) separates phenotypic features under strong genetic control from less genetically determined features and thus permits an analysis of the multivariate structure of genotype-phenotype association, including its dimensionality and the clustering of genetic and phenotypic variables within this association. Different variants of MGP maximize different measures of genotype-phenotype association: genetic effect, genetic variance, or heritability. In an application to a mouse sample, scored for 353 SNPs and 11 phenotypic traits, the first dimension of genetic and phenotypic latent variables accounted for >70% of genetic variation present in all 11 measurements; 43% of variation in this phenotypic pattern was explained by the corresponding genetic latent variable. The first three dimensions together sufficed to account for almost 90% of genetic variation in the measurements and for all the interpretable genotype-phenotype association. Each dimension can be tested as a whole against the hypothesis of no association, thereby reducing the number of statistical tests from 7766 to 3, the maximal number of meaningful independent tests. Important alleles can be selected based on their effect size (additive or nonadditive effect on the phenotypic latent variable). This low dimensionality of the genotype-phenotype map has important consequences for gene identification and may shed light on the evolvability of organisms. Copyright © 2016 by the Genetics Society of America.
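
    A PLS-style sketch of the latent-variable idea follows: the first pair of genetic and phenotypic latent variables can be taken as the leading singular vectors of the genotype-phenotype cross-covariance, giving directions whose sample scores have maximal covariance. This is not the paper's exact estimator (MGP variants maximize genetic effect, genetic variance, or heritability); the array sizes echo the 353-SNP, 11-trait example with random placeholder data.

```python
import numpy as np

def latent_gp_dimension(G, P):
    """First pair of genetic and phenotypic latent variables: directions
    in genotype and phenotype space whose sample scores have maximal
    covariance (a PLS-style sketch of multivariate genotype-phenotype
    mapping, not the paper's exact estimator).

    G : n x m matrix of allele scores (e.g., 0/1/2 per SNP)
    P : n x p matrix of phenotypic measurements
    """
    Gc = G - G.mean(axis=0)
    Pc = P - P.mean(axis=0)
    U, s, Vt = np.linalg.svd(Gc.T @ Pc / len(G), full_matrices=False)
    g_dir, p_dir = U[:, 0], Vt[0]      # latent-variable loadings
    return Gc @ g_dir, Pc @ p_dir, g_dir, p_dir

# e.g., 200 mice scored for 353 SNPs and 11 traits (placeholder data)
G = np.random.randint(0, 3, size=(200, 353)).astype(float)
P = np.random.rand(200, 11)
g_scores, p_scores, snp_loadings, trait_loadings = latent_gp_dimension(G, P)
```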

  6. Distance correlation methods for discovering associations in large astrophysical databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P., E-mail: elizabeth.martinez@itam.mx, E-mail: mrichards@astro.psu.edu, E-mail: richards@stat.psu.edu

    2014-01-20

    High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
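
    The sample distance correlation itself is short to compute: double-center each pairwise distance matrix, average the elementwise products to get the distance covariance, and normalize. The sketch below is a direct O(n^2) implementation, with a toy V-shaped relation illustrating a nonlinear association that the Pearson coefficient misses; variable names are illustrative.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two variables (of any dimension);
    zero only under independence, unlike the Pearson coefficient.

    x, y : arrays of shape (n,) or (n, d), observed on the same n objects.
    """
    x = x.reshape(len(x), -1).astype(float)
    y = y.reshape(len(y), -1).astype(float)
    def centered_dist(z):
        # pairwise Euclidean distances, then double-centering
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centered_dist(x), centered_dist(y)
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

# toy V-shaped relation: nonlinear dependence that Pearson misses
rng = np.random.default_rng(0)
u = rng.normal(size=1000)
v = np.abs(u) + 0.1 * rng.normal(size=1000)
print(distance_correlation(u, v))   # clearly positive
print(np.corrcoef(u, v)[0, 1])      # near zero
```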

  7. The staircase method: integrals for periodic reductions of integrable lattice equations

    NASA Astrophysics Data System (ADS)

    van der Kamp, Peter H.; Quispel, G. R. W.

    2010-11-01

    We show, in full generality, that the staircase method (Papageorgiou et al 1990 Phys. Lett. A 147 106-14, Quispel et al 1991 Physica A 173 243-66) provides integrals for mappings, and correspondences, obtained as traveling wave reductions of (systems of) integrable partial difference equations. We apply the staircase method to a variety of equations, including the Korteweg-de Vries equation, the five-point Bruschi-Calogero-Droghei equation, the quotient-difference (QD) algorithm and the Boussinesq system. We show that, in all these cases, if the staircase method provides r integrals for an n-dimensional mapping, with 2r < n, then one can introduce q <= 2r variables, which reduce the dimension of the mapping from n to q. These dimension-reducing variables are obtained as joint invariants of k-symmetries of the mappings. Our results support the idea that often the staircase method provides sufficiently many integrals for the periodic reductions of integrable lattice equations to be completely integrable. We also study reductions on quad-graphs other than the regular Z^2 lattice, and we prove linear growth of the multi-valuedness of iterates of high-dimensional correspondences obtained as reductions of the QD algorithm.

  8. On the use of transition matrix methods with extended ensembles.

    PubMed

    Escobedo, Fernando A; Abreu, Charlles R A

    2006-03-14

    Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate lambda were formulated so that they employ (i) "variable" sampling window schemes (that include the "successive umbrella sampling" method) to comprehensively explore the lambda domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy eta landscape (or "importance" weights) associated with lambda. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating eta(lambda). The validity and performance of the different NBS schemes were then assessed using as the lambda coordinate the configurational energy of the Lennard-Jones fluid. For the cases studied, it was found that the convergence rate in the estimation of eta is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader window of sampling in the variable window methods. Finally, it is shown how an "elastic" window of sampling can be used to effectively enact (nonuniform) preferential sampling over the lambda domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an eta surface over a two-dimensional domain.

  9. Learning multivariate distributions by competitive assembly of marginals.

    PubMed

    Sánchez-Vega, Francisco; Younes, Laurent; Geman, Donald

    2013-02-01

    We present a new framework for learning high-dimensional multivariate probability distributions from estimated marginals. The approach is motivated by compositional models and Bayesian networks, and designed to adapt to small sample sizes. We start with a large, overlapping set of elementary statistical building blocks, or "primitives," which are low-dimensional marginal distributions learned from data. Each variable may appear in many primitives. Subsets of primitives are combined in a Lego-like fashion to construct a probabilistic graphical model; only a small fraction of the primitives will participate in any valid construction. Since primitives can be precomputed, parameter estimation and structure search are separated. Model complexity is controlled by strong biases; we adapt the primitives to the amount of training data and impose rules which restrict the merging of them into allowable compositions. The likelihood of the data decomposes into a sum of local gains, one for each primitive in the final structure. We focus on a specific subclass of networks which are binary forests. Structure optimization corresponds to an integer linear program and the maximizing composition can be computed for reasonably large numbers of variables. Performance is evaluated using both synthetic data and real datasets from natural language processing and computational biology.

  10. Fokker-Planck description for the queue dynamics of large tick stocks.

    PubMed

    Garèche, A; Disdier, G; Kockelkoren, J; Bouchaud, J-P

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. "Jump" events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.

  11. Fokker-Planck description for the queue dynamics of large tick stocks

    NASA Astrophysics Data System (ADS)

    Garèche, A.; Disdier, G.; Kockelkoren, J.; Bouchaud, J.-P.

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. “Jump” events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.

  12. Rotary engine performance limits predicted by a zero-dimensional model

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1992-01-01

    A parametric study was performed to determine the performance limits of a rotary combustion engine. This study shows the extent to which increasing the combustion rate, insulating, and turbocharging increase brake power and decrease fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine; performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engine performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980s. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer program since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.

  13. On the use of the singular value decomposition for text retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Husbands, P.; Simon, H.D.; Ding, C.

    2000-12-04

    The use of the Singular Value Decomposition (SVD) has been proposed for text retrieval in several recent works. This technique uses the SVD to project very high dimensional document and query vectors into a low dimensional space. In this new space it is hoped that the underlying structure of the collection is revealed, thus enhancing retrieval performance. Theoretical results have provided some evidence for this claim, and to some extent experiments have confirmed it. However, these studies have mostly used small test collections and simplified document models. In this work we investigate the use of the SVD on large document collections. We show that, if interpreted as a mechanism for representing the terms of the collection, this technique alone is insufficient for dealing with the variability in term occurrence. Section 2 introduces the text retrieval concepts necessary for our work. A short description of our experimental architecture is presented in Section 3. Section 4 describes how term occurrence variability affects the SVD and then shows how the decomposition influences retrieval performance. A possible way of improving SVD-based techniques is presented in Section 5, and conclusions are given in Section 6.
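
    The projection step evaluated in this record is easy to state concretely. The following minimal sketch (toy random data, illustrative names) computes a rank-k SVD of a term-by-document matrix and folds a query into the reduced space, which is the standard latent-semantic-indexing construction this work examines.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((1000, 200))            # toy term-by-document matrix
        U, s, Vt = np.linalg.svd(A, full_matrices=False)

        k = 50                                 # retained dimensions
        Uk, sk = U[:, :k], s[:k]
        docs_k = (np.diag(sk) @ Vt[:k, :]).T   # documents in the k-dim space

        q = rng.random(1000)                   # toy query in term space
        qk = (q @ Uk) / sk                     # standard LSI query folding

        # Rank documents by cosine similarity in the reduced space.
        scores = docs_k @ qk / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(qk))
        best_doc = int(np.argmax(scores))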

  14. Comparative Variable Temperature Studies of Polyamide II with a Benchtop Fourier Transform and a Miniature Handheld Near-Infrared Spectrometer Using 2D-COS and PCMW-2D Analysis.

    PubMed

    Unger, Miriam; Pfeifer, Frank; Siesler, Heinz W

    2016-07-01

    The main objective of this communication is to compare the performance of a miniaturized handheld near-infrared (NIR) spectrometer with a benchtop Fourier transform near-infrared (FT-NIR) spectrometer. Generally, NIR spectroscopy is an extremely powerful analytical tool to study hydrogen-bonding changes of amide functionalities in solid and liquid materials, and therefore variable temperature NIR measurements of polyamide II (PAII) have been selected as a case study. The information content of the measurement data has been further enhanced by exploiting the potential of two-dimensional correlation spectroscopy (2D-COS) and the perturbation correlation moving window two-dimensional (PCMW2D) evaluation technique. The data provide valuable insights not only into the changes of the hydrogen-bonding structure and the recrystallization of the hydrocarbon segments of the investigated PAII but also into their sequential order. Furthermore, it has been demonstrated that the 2D-COS and PCMW2D results derived from the spectra measured with the miniaturized NIR instrument are equivalent to the information extracted from the data obtained with the high-performance FT-NIR instrument. © The Author(s) 2016.

  15. Incorporating biological information in sparse principal component analysis with application to genomic data.

    PubMed

    Li, Ziyi; Safo, Sandra E; Long, Qi

    2017-07-11

    Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related to glioblastoma. The proposed Fused and Grouped sparse PCA methods can effectively incorporate prior biological information in variable selection, leading to improved feature selection, more interpretable principal component loadings, and potentially insights into the molecular underpinnings of complex diseases.
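
    For orientation, the sketch below runs generic l1-penalized sparse PCA, in which the penalty zeroes out loadings and thereby performs variable selection; it is a baseline sketch only and does not implement the Fused or Grouped graph-penalized variants proposed in this record. The data and parameter values are illustrative.

        import numpy as np
        from sklearn.decomposition import SparsePCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 500))   # toy expression matrix: samples x genes

        # alpha controls the l1 penalty, hence how many loadings are exactly zero.
        spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
        scores = spca.fit_transform(X)                   # sample scores
        selected = (spca.components_ != 0).sum(axis=1)   # variables kept per component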

  16. The Units Tell You What to Do

    ERIC Educational Resources Information Center

    Brown, Simon

    2009-01-01

    Many students have some difficulty with calculations. Simple dimensional analysis provides a systematic means of checking for errors and inconsistencies and for developing both new insight and new relationships between variables. Teaching dimensional analysis at even the most basic level strengthens the insight and confidence of students, and…

  17. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
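
    The central device here, treating the l1 regularization path as a model space, can be sketched directly: each point on the lasso path defines a candidate model through the support of its coefficient vector. The snippet below (toy data; the Markov chain Monte Carlo model composition step over these candidates is omitted) enumerates those supports.

        import numpy as np
        from sklearn.linear_model import lasso_path

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 200))    # many more variables than samples
        beta = np.zeros(200)
        beta[:5] = 2.0                    # only 5 truly active predictors
        y = X @ beta + rng.normal(size=50)

        # Each alpha on the path yields one coefficient vector; its support
        # (set of nonzero coefficients) defines one candidate model.
        alphas, coefs, _ = lasso_path(X, y)
        supports = {frozenset(np.flatnonzero(coefs[:, j])) for j in range(len(alphas))}
        print(len(supports), "distinct candidate models along the path")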

  18. tICA-Metadynamics: Accelerating Metadynamics by Using Kinetically Selected Collective Variables.

    PubMed

    M Sultan, Mohammad; Pande, Vijay S

    2017-06-13

    Metadynamics is a powerful enhanced molecular dynamics sampling method that accelerates simulations by adding history-dependent multidimensional Gaussians along selected collective variables (CVs). In practice, choosing a small number of slow CVs remains challenging due to the inherent high dimensionality of biophysical systems. Here we show that time-structure based independent component analysis (tICA), a recent advance in the Markov state model literature, can be used to identify a set of variationally optimal slow coordinates for use as CVs for metadynamics. We show that linear and nonlinear tICA-metadynamics can complement existing MD studies by explicitly sampling the system's slowest modes, and can drive transitions along those modes even when no such transitions are observed in unbiased simulations.
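
    As a sketch of the tICA step itself (independent of any metadynamics engine), the code below solves the generalized eigenproblem between the time-lagged and instantaneous covariance matrices of a trajectory; the leading eigenvectors are the slow coordinates that would be handed to metadynamics as CVs. The toy trajectory and parameter values are illustrative, and this minimal estimator omits refinements used in practice (e.g. regularization, kinetic mapping).

        import numpy as np
        from scipy.linalg import eigh

        def tica(X, lag=10, n_components=2):
            """Time-structure based ICA: solve C_lag v = lambda C_0 v for a
            mean-free trajectory X (frames x features); larger eigenvalues
            correspond to slower modes."""
            X = X - X.mean(axis=0)
            A, B = X[:-lag], X[lag:]
            C0 = (A.T @ A + B.T @ B) / (2.0 * len(A))    # instantaneous covariance
            Clag = (A.T @ B + B.T @ A) / (2.0 * len(A))  # symmetrized lagged covariance
            vals, vecs = eigh(Clag, C0)
            order = np.argsort(vals)[::-1]               # slowest modes first
            return vals[order][:n_components], vecs[:, order[:n_components]]

        # Toy trajectory: a slowly relaxing 5-dimensional autoregressive process.
        rng = np.random.default_rng(0)
        traj = np.zeros((10000, 5))
        for t in range(1, len(traj)):
            traj[t] = 0.99 * traj[t - 1] + rng.normal(size=5)

        eigs, components = tica(traj)
        slow_coords = (traj - traj.mean(axis=0)) @ components   # candidate CVs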

  19. Thermal History and Mantle Dynamics of Venus

    NASA Technical Reports Server (NTRS)

    Hsui, Albert T.

    1997-01-01

    One objective of this research proposal is to develop a 3-D thermal history model for Venus. The basis of our study is a finite-element computer model to simulate thermal convection of fluids with highly temperature- and pressure-dependent viscosities in a three-dimensional spherical shell. A three-dimensional model for thermal history studies is necessary for the following reasons. To study planetary thermal evolution, one needs to consider the global heat budget of a planet throughout its evolution history; hence, three-dimensional models are necessary. This is in contrast to studies of some local phenomena or local structures, where models of lower dimensions may be sufficient. There are different approaches to treating three-dimensional thermal convection problems. Each approach has its own advantages and disadvantages, so the choice among the various approaches is subjective and dependent on the problem addressed. In our case, we are interested in the effects of viscosities that are highly temperature dependent and whose magnitudes within the computing domain can vary over many orders of magnitude. In order to resolve the rapid change of viscosities, small grid spacings are often necessary. To optimize the amount of computing, variable grids become desirable. Thus, the finite-element numerical approach is chosen for its ability to place grid elements of different sizes over the complete computational domain. For this research proposal, we did not develop the finite-element code from scratch. Instead, we adopted a finite-element model developed by Baumgardner, a collaborator on this research proposal, for three-dimensional thermal convection with constant viscosity. Over the period supported by this research proposal, significant advances have been accomplished.

  20. Micro-computed tomography assessment of human alveolar bone: bone density and three-dimensional micro-architecture.

    PubMed

    Kim, Yoon Jeong; Henkin, Jeffrey

    2015-04-01

    Micro-computed tomography (micro-CT) is a valuable means to evaluate and secure information related to bone density and quality in human necropsy samples and small live animals. The aim of this study was to assess the bone density of the alveolar jaw bones in human cadavers, using micro-CT. The correlation between bone density and the three-dimensional micro-architecture of trabecular bone was evaluated. Thirty-four human cadaver jaw bone specimens were harvested. Each specimen was scanned with micro-CT at a resolution of 10.5 μm. The bone volume fraction (BV/TV) and the bone mineral density (BMD) value within a volume of interest were measured. The three-dimensional micro-architecture of trabecular bone was assessed. All the parameters in the maxilla and the mandible were subject to comparison. The variables for bone density and three-dimensional micro-architecture were analyzed for nonparametric correlation using Spearman's rho at the significance level of p < .05. A wide range of bone density was observed. There was a significant difference between the maxilla and mandible. All micro-architecture parameters were consistently higher in the mandible, up to 3.3 times greater than those in the maxilla. The most linear correlation was observed between BV/TV and BMD, with Spearman's rho = 0.99 (p = .01). Both BV/TV and BMD were highly correlated with all micro-architecture parameters, with Spearman's rho above 0.74 (p = .01). Two aspects of bone density using micro-CT, the BV/TV and BMD, are highly correlated with three-dimensional micro-architecture parameters, which represent the quality of trabecular bone. This noninvasive method may adequately enhance evaluation of the alveolar bone. © 2013 Wiley Periodicals, Inc.

  1. Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies

    NASA Astrophysics Data System (ADS)

    Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu

    2015-09-01

    Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from the planet Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.

  2. Biodynamic imaging for phenotypic profiling of three-dimensional tissue culture

    PubMed Central

    Sun, Hao; Merrill, Daniel; An, Ran; Turek, John; Matei, Daniela; Nolte, David D.

    2017-01-01

    Three-dimensional (3-D) tissue culture represents a more biologically relevant environment for testing new drugs compared to conventional two-dimensional cancer cell culture models. Biodynamic imaging is a high-content 3-D optical imaging technology based on low-coherence interferometry and digital holography that uses dynamic speckle as high-content image contrast to probe deep inside 3-D tissue. Speckle contrast is shown to be a scaling function of the acquisition time relative to the persistence time of intracellular transport and hence provides a measure of cellular activity. Cellular responses of 3-D multicellular spheroids to paclitaxel are compared among three different growth techniques: rotating bioreactor (BR), hanging-drop (HD), and nonadherent (U-bottom, UB) plate spheroids, compared with ex vivo living tissues. HD spheroids have the most homogeneous tissue, whereas BR spheroids display large sample-to-sample variability as well as spatial heterogeneity. The responses of BR-grown tumor spheroids to paclitaxel are more similar to those of ex vivo biopsies than the responses of spheroids grown using HD or plate methods. The rate of mitosis inhibition by application of taxol is measured through tissue dynamics spectroscopic imaging, demonstrating the ability to monitor antimitotic chemotherapy. These results illustrate the potential use of low-coherence digital holography for 3-D pharmaceutical screening applications. PMID:28301634

  3. Biodynamic imaging for phenotypic profiling of three-dimensional tissue culture

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Merrill, Daniel; An, Ran; Turek, John; Matei, Daniela; Nolte, David D.

    2017-01-01

    Three-dimensional (3-D) tissue culture represents a more biologically relevant environment for testing new drugs compared to conventional two-dimensional cancer cell culture models. Biodynamic imaging is a high-content 3-D optical imaging technology based on low-coherence interferometry and digital holography that uses dynamic speckle as high-content image contrast to probe deep inside 3-D tissue. Speckle contrast is shown to be a scaling function of the acquisition time relative to the persistence time of intracellular transport and hence provides a measure of cellular activity. Cellular responses of 3-D multicellular spheroids to paclitaxel are compared among three different growth techniques: rotating bioreactor (BR), hanging-drop (HD), and nonadherent (U-bottom, UB) plate spheroids, compared with ex vivo living tissues. HD spheroids have the most homogeneous tissue, whereas BR spheroids display large sample-to-sample variability as well as spatial heterogeneity. The responses of BR-grown tumor spheroids to paclitaxel are more similar to those of ex vivo biopsies than the responses of spheroids grown using HD or plate methods. The rate of mitosis inhibition by application of taxol is measured through tissue dynamics spectroscopic imaging, demonstrating the ability to monitor antimitotic chemotherapy. These results illustrate the potential use of low-coherence digital holography for 3-D pharmaceutical screening applications.

  4. Signatures of a globally optimal searching strategy in the three-dimensional foraging flights of bumblebees

    NASA Astrophysics Data System (ADS)

    Lihoreau, Mathieu; Ings, Thomas C.; Chittka, Lars; Reynolds, Andy M.

    2016-07-01

    Simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a search space. It is frequently implemented in computers working on complex optimization problems but until now has not been directly observed in nature as a searching strategy adopted by foraging animals. We analysed high-speed video recordings of the three-dimensional searching flights of bumblebees (Bombus terrestris) made in the presence of large or small artificial flowers within a 0.5 m^3 enclosed arena. Analyses of the three-dimensional flight patterns in both conditions reveal signatures of simulated annealing searches. After leaving a flower, bees tend to scan back and forth past that flower before making prospecting flights (loops), whose length increases over time. The search pattern becomes gradually more expansive and culminates when another rewarding flower is found. Bees then scan back and forth in the vicinity of the newly discovered flower and the process repeats. This looping search pattern, in which flight step lengths are typically power-law distributed, provides a relatively simple yet highly efficient strategy for pollinators such as bees to find the best quality resources in complex environments made of multiple ephemeral feeding sites with nutritionally variable rewards.
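
    Since the record hinges on what a simulated annealing search is, a minimal sketch may help: propose local moves, always accept improvements, and accept worse moves with a probability that decays as a temperature parameter is lowered. This is the generic algorithm, not a model of bee flight; the test function and parameter values are illustrative.

        import numpy as np

        def simulated_annealing(f, x0, n_steps=20000, T0=1.0, cooling=0.9995, step=0.5):
            rng = np.random.default_rng(0)
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            best_x, best_f = x.copy(), fx
            T = T0
            for _ in range(n_steps):
                cand = x + rng.normal(scale=step, size=x.shape)   # local move
                fc = f(cand)
                # Accept improvements always; accept worse moves with
                # probability exp(-delta/T), which shrinks as T cools.
                if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                    x, fx = cand, fc
                    if fx < best_f:
                        best_x, best_f = x.copy(), fx
                T *= cooling
            return best_x, best_f

        # Rastrigin function: many poor local minima, global minimum at 0.
        rastrigin = lambda x: np.sum(x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x)))
        print(simulated_annealing(rastrigin, x0=[3.5, -2.0]))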

  5. Generalized Lie symmetry approach for fractional order systems of differential equations. III

    NASA Astrophysics Data System (ADS)

    Singla, Komal; Gupta, R. K.

    2017-06-01

    The generalized Lie symmetry technique is proposed for the derivation of point symmetries for systems of fractional differential equations with an arbitrary number of independent as well as dependent variables. The efficiency of the method is illustrated by its application to three higher dimensional nonlinear systems of fractional order partial differential equations consisting of the (2 + 1)-dimensional asymmetric Nizhnik-Novikov-Veselov system, (3 + 1)-dimensional Burgers system, and (3 + 1)-dimensional Navier-Stokes equations. With the help of derived Lie point symmetries, the corresponding invariant solutions transform each of the considered systems into a system of lower-dimensional fractional partial differential equations.

  6. Hybrid approach combining chemometrics and likelihood ratio framework for reporting the evidential value of spectra.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema

    2016-08-10

    Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to the rapid development of advanced analytical techniques delivering much information in a single measurement run. This especially concerns spectra, which are frequently used as the subject of comparative analysis in, e.g., forensic science. In the presented study the microtraces collected from the scenes of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences, using the likelihood ratio (LR) approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    PubMed

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems, collectively referred to as the chaos game, that are closely related to iterated function systems. The goal of the algorithm is to create a human-readable representation of high dimensional patient data that is capable of detecting unrevealed subclusters of patients within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical-system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate feature selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification), and finally a visual representation of the top classification models is returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning, as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix), including a public lung cancer dataset that comes with the included Butterfly R package. In the included R script, a univariate feature selection method is used for the dimension reduction step, but in the future we wish to use a more powerful multivariate feature reduction method based on neural networks (Kriesel, 2007). A script written in R (designed to run in RStudio) accompanies this article, implements this algorithm, and is available at http://butterflygeraci.codeplex.com/. For details on the R package or for help installing the software, refer to the accompanying document, Supporting Material and Appendix.
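
    The chaos-game mechanism at the heart of this record is simple to state: a point is repeatedly moved a fixed fraction of the way toward the polygon corner indexed by each incoming symbol, so sequences with similar structure trace out nearby 2D point clouds. The sketch below shows only this generic iterated-map step on a toy discretized feature sequence, not the full Butterfly pipeline (feature selection, model scoring, visualization), and it is written in Python rather than the R of the accompanying package.

        import numpy as np

        def chaos_game(symbols, n_corners=4, r=0.5):
            """Map a sequence of integer symbols to 2D points by moving a
            fraction r toward the corner selected by each symbol."""
            angles = 2.0 * np.pi * np.arange(n_corners) / n_corners
            corners = np.column_stack([np.cos(angles), np.sin(angles)])
            pts = np.zeros((len(symbols), 2))
            x = np.zeros(2)
            for i, s in enumerate(symbols):
                x = x + r * (corners[s % n_corners] - x)   # memory-type update
                pts[i] = x
            return pts

        # Toy input: a subject's selected features discretized into 4 levels.
        seq = np.random.default_rng(2).integers(0, 4, size=2000)
        points = chaos_game(seq)            # 2D representation, no projection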

  8. The underlying structure of diagnostic systems of schizophrenia: a comprehensive polydiagnostic approach.

    PubMed

    Peralta, Victor; Cuesta, Manuel J

    2005-11-15

    The objective was to ascertain the underlying factor structure of alternative definitions of schizophrenia, and to examine the distribution of schizophrenia-related variables against the resulting factor solution. Twenty-three diagnostic schemes of schizophrenia were applied to 660 patients presenting with psychotic symptoms regardless of the specific diagnosis of psychotic disorder. Factor analysis of the 23 diagnostic schemes yielded three interpretable factors explaining 58% of the variance, the first factor (general schizophrenia factor) accounting for most of the variance (36%). On the basis of the general schizophrenia factor score, the sample was divided into quintile groups representing five levels of schizophrenia definition (absent, doubtful, very broad, broad and narrow), and the distribution of a number of schizophrenia-related variables was examined across the groups. This grouping procedure was used for examining the comparative validity of alternative levels of categorically defined schizophrenia and an ordinal (i.e. dimensional) definition. Overall, schizophrenia-related variables displayed a dose-response relationship with level of schizophrenia definition. Logistic regression analyses revealed that the dimensional definition explained more variance in the schizophrenia-related variables than the alternative levels for defining schizophrenia categorically. These results are consistent with a unitary and dimensional construct of schizophrenia with no clear "points of rarity" at its boundaries, thus supporting the continuum hypothesis of psychotic illness.

  9. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    USGS Publications Warehouse

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.

  10. Consideration of correlativity between litho and etching shape

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Mito, Hiroaki; Shinoda, Shinichi; Toyoda, Yasutaka

    2012-03-01

    We developed an effective method for evaluating the correlation between the shapes of litho and etch patterns. The purpose of the method is to index the shape of the pattern on the wafer after etching in the same way as the pattern shape produced on the wafer by the lithography process; by measuring the characteristics of the wafer pattern shape after lithography, it can predict the hotspot pattern shape after etching. The method adopts a metrology management system based on DBM (Design Based Metrology), that is, high-accuracy contours created by an edge detection algorithm applied to wafer CD-SEM images. Currently, as semiconductor manufacturing moves toward ever smaller feature sizes, more aggressive optical proximity correction (OPC) is needed to drive resolution enhancement technology (RET). In other words, there is a trade-off between highly precise RET and lithography management, and this has a big impact on the semiconductor market. Quantification of the two-dimensional wafer pattern shape is an important solution to these problems. Although one-dimensional shape measurement has been performed with conventional techniques, two-dimensional shape management is needed in the mass production line under the influence of RET. We therefore developed a technique for analyzing the distribution of shape edge performance as a shape management technique. In this study, we conducted experiments on a pattern correlation method (Measurement Based Contouring) as a two-dimensional litho and etch evaluation technique: the identical positions of the litho and etch patterns were observed, making it possible to analyze the variability of the edge at the same position with high precision.

  11. High frequency vibration analysis by the complex envelope vectorization.

    PubMed

    Giannini, O; Carcaterra, A; Sestieri, A

    2007-06-01

    The complex envelope displacement analysis (CEDA) is a procedure to solve high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so that a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, outlining the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.

  12. Differentiating Categories and Dimensions: Evaluating the Robustness of Taxometric Analyses

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2009-01-01

    Interest in modeling the structure of latent variables is gaining momentum, and many simulation studies suggest that taxometric analysis can validly assess the relative fit of categorical and dimensional models. The generation and parallel analysis of categorical and dimensional comparison data sets reduces the subjectivity required to interpret…

  13. Complexity as a Reflection of the Dimensionality of a Task.

    ERIC Educational Resources Information Center

    Spilsbury, Georgina

    1992-01-01

    The hypothesis that a task that increases in complexity (increasing its correlation with a central measure of intelligence) does so by increasing its dimensionality by tapping individual differences or another variable was supported by findings from 46 adults aged 20-70 years performing a mental counting task. (SLD)

  14. Separation of variables in the special diagonal Hamilton-Jacobi equation: Application to the dynamical problem of a particle constrained on a moving surface

    NASA Technical Reports Server (NTRS)

    Blanchard, D. L.; Chan, F. K.

    1973-01-01

    For a time-dependent, n-dimensional, special diagonal Hamilton-Jacobi equation, a necessary and sufficient condition for the separation of variables to yield a complete integral in separated form was established by specifying the admissible forms in terms of arbitrary functions. A complete integral was then expressed in terms of these arbitrary functions and the n irreducible constants. As an application of the results obtained for the two-dimensional Hamilton-Jacobi equation, an analysis was made of a comparatively wide class of dynamical problems involving a particle moving in Euclidean three-dimensional space under the action of external forces but constrained on a moving surface. All the possible cases in which this equation has a complete integral of the separated form were obtained, and these are tabulated for reference.

  15. Magnetron injection gun for a broadband gyrotron backward-wave oscillator

    NASA Astrophysics Data System (ADS)

    Yuan, C. P.; Chang, T. H.; Chen, N. C.; Yeh, Y. S.

    2009-07-01

    The magnetron injection gun is capable of generating a relativistic electron beam with high velocity ratio and low velocity spread for a gyrotron backward-wave oscillator (gyro-BWO). However, the velocity ratio (α) varies drastically with both the magnetic field and the beam voltage, which significantly limits the tuning bandwidth of a gyro-BWO. This study remedies this drawback by adding a variable trim field to adjust the magnetic compression ratio when the operating conditions change. Theoretical results obtained with the two-dimensional electron gun code EGUN demonstrate a constant velocity ratio of 1.5 with a low axial velocity spread of 6% from 3.4 to 4.8 Tesla. These results are compared with those of the three-dimensional particle-tracing code CST (Computer Simulation Technology). The underlying physics of the constant α is discussed in depth.

  16. A two-dimensional composite grid numerical model based on the reduced system for oceanography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Y.F.; Browning, G.L.; Chesshire, G.

    The proper mathematical limit of a hyperbolic system with multiple time scales, the reduced system, is a system that contains no high-frequency motions and is well posed if suitable boundary conditions are chosen for the initial-boundary value problem. The composite grid method, a robust and efficient grid-generation technique that smoothly and accurately treats general irregular boundaries, is used to approximate the two-dimensional version of the reduced system for oceanography on irregular ocean basins. A change-of-variable technique that substantially increases the accuracy of the model and a method for efficiently solving the elliptic equation for the geopotential are discussed. Numerical results are presented for circular and kidney-shaped basins by using a set of analytic solutions constructed in this paper.

  17. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
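
    The core transform analyzed in this record is a single matrix multiplication, sketched below with toy data: a Gaussian random matrix approximately preserves distances between feature vectors (the Johnson-Lindenstrauss property underlying the similarity-preserving analysis), while drawing a fresh matrix issues a new, unlinkable template (changeability). Dimensions and scaling conventions here are illustrative, and the paper's vector translation step is not shown.

        import numpy as np

        rng = np.random.default_rng(3)
        d, k = 10304, 100                     # e.g. a 112x92 face image -> 100 dims

        x = rng.random(d)                     # enrolled biometric feature vector
        y = x + 0.01 * rng.normal(size=d)     # noisy sample from the same user

        # Random projection with i.i.d. Gaussian entries; the 1/sqrt(k) scaling
        # makes projected distances approximate the original ones.
        R = rng.normal(size=(k, d)) / np.sqrt(k)
        tx, ty = R @ x, R @ y                 # protected templates

        print(np.linalg.norm(x - y), np.linalg.norm(tx - ty))   # comparable
        R2 = rng.normal(size=(k, d)) / np.sqrt(k)  # fresh matrix = reissued template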

  18. Experimental witness of genuine high-dimensional entanglement

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Hu, Xiao-Min; Liu, Bi-Heng; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can

    2018-06-01

    Growing interest has been devoted to exploring high-dimensional quantum systems, for their promising prospects in certain quantum tasks. How to characterize a high-dimensional entanglement structure is one of the basic questions for taking full advantage of it. However, it is not easy to capture the key features of high-dimensional entanglement, since the correlations derived from high-dimensional entangled states can possibly be simulated with copies of lower-dimensional systems. Here, following the work of Kraft et al. [Phys. Rev. Lett. 120, 060502 (2018), 10.1103/PhysRevLett.120.060502], we present the experimental creation and detection, by a normalized witness operation, of genuine high-dimensional entanglement, which cannot be decomposed into lower-dimensional Hilbert spaces and thus forms entanglement structures that exist only in high-dimensional systems. Our experiment paves the way for further exploration of high-dimensional quantum systems.

  19. Exploring High-D Spaces with Multiform Matrices and Small Multiples

    PubMed Central

    MacEachren, Alan; Dai, Xiping; Hardisty, Frank; Guo, Diansheng; Lengerich, Gene

    2011-01-01

    We introduce an approach to visual analysis of multivariate data that integrates several methods from information visualization, exploratory data analysis (EDA), and geovisualization. The approach leverages the component-based architecture implemented in GeoVISTA Studio to construct a flexible, multiview, tightly (but generically) coordinated, EDA toolkit. This toolkit builds upon traditional ideas behind both small multiples and scatterplot matrices in three fundamental ways. First, we develop a general, MultiForm, Bivariate Matrix and a complementary MultiForm, Bivariate Small Multiple plot in which different bivariate representation forms can be used in combination. We demonstrate the flexibility of this approach with matrices and small multiples that depict multivariate data through combinations of: scatterplots, bivariate maps, and space-filling displays. Second, we apply a measure of conditional entropy to (a) identify variables from a high-dimensional data set that are likely to display interesting relationships and (b) generate a default order of these variables in the matrix or small multiple display. Third, we add conditioning, a kind of dynamic query/filtering in which supplementary (undisplayed) variables are used to constrain the view onto variables that are displayed. Conditioning allows the effects of one or more well understood variables to be removed from the analysis, making relationships among remaining variables easier to explore. We illustrate the individual and combined functionality enabled by this approach through application to analysis of cancer diagnosis and mortality data and their associated covariates and risk factors. PMID:21947129
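
    The conditional-entropy ordering used in the second step above can be sketched with a simple histogram estimator: H(Y|X) = H(X,Y) - H(X), with lower values flagging variable pairs likely to display interesting relationships. This is a generic estimator under an assumed binning, not necessarily the exact measure implemented in GeoVISTA Studio; data and names are illustrative.

        import numpy as np

        def conditional_entropy(x, y, bins=10):
            """Histogram estimate of H(Y|X) = H(X, Y) - H(X), in nats."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1)
            h_xy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
            h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
            return h_xy - h_x

        rng = np.random.default_rng(4)
        data = rng.normal(size=(500, 6))
        data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=500)   # informative pair

        # Order candidate variables by how well variable 0 predicts them.
        order = sorted(range(1, 6),
                       key=lambda j: conditional_entropy(data[:, 0], data[:, j]))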

  20. Right ventricular volumes assessed by echocardiographic three-dimensional knowledge-based reconstruction compared with magnetic resonance imaging in a clinical setting.

    PubMed

    Neukamm, Christian; Try, Kirsti; Norgård, Gunnar; Brun, Henrik

    2014-01-01

    A technique that uses two-dimensional images to create a knowledge-based, three-dimensional model was tested and compared with magnetic resonance imaging. Measurement of right ventricular volumes and function is important in the follow-up of patients after pulmonary valve replacement. Magnetic resonance imaging is the gold standard for volumetric assessment. Echocardiographic methods have been validated and are attractive alternatives. Thirty patients with tetralogy of Fallot (25 ± 14 years) after pulmonary valve replacement were examined. Magnetic resonance imaging volumetric measurements and echocardiography-based three-dimensional reconstruction were performed. End-diastolic volume, end-systolic volume, and ejection fraction were measured, and the results were compared. Magnetic resonance imaging measurements gave coefficients of variation in the intraobserver study of 3.5, 4.6, and 5.3 and in the interobserver study of 3.6, 5.9, and 6.7 for end-diastolic volume, end-systolic volume, and ejection fraction, respectively. Echocardiographic three-dimensional reconstruction was highly feasible (97%). In the intraobserver study, the corresponding values were 6.0, 7.0, and 8.9, and in the interobserver study 7.4, 10.8, and 13.4. In a comparison of the methods, correlations with magnetic resonance imaging were r = 0.91, 0.91, and 0.38, and the corresponding coefficients of variation were 9.4, 10.8, and 14.7. Echocardiography-derived volumes (mL/m^2) were significantly higher than magnetic resonance imaging volumes, by 13.7 ± 25.6 for end-diastolic volume and 9.1 ± 17.0 for end-systolic volume (both P < .05). The knowledge-based three-dimensional right ventricular volume method was highly feasible. Intra- and interobserver variability was satisfactory. Agreement with magnetic resonance imaging measurements was reasonable for volumes but unsatisfactory for ejection fraction. Knowledge-based reconstruction may replace magnetic resonance imaging measurements for serial follow-up, whereas magnetic resonance imaging should be used for surgical decision making.

  1. Characterization of the spatial variability of channel morphology

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2002-01-01

    The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.
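
    The first-order bivariate autoregressive model referred to above can be sketched as follows: log-width and log-depth evolve down the channel with persistence and cross-coupling around their discharge-dependent means, producing the log-normal marginals reported in the record. All parameter values below are illustrative placeholders, not the fitted values from this study.

        import numpy as np

        rng = np.random.default_rng(5)
        mu = np.array([3.0, 0.5])            # mean log-width, mean log-depth
        A = np.array([[0.80, 0.10],          # persistence and cross-coupling
                      [0.05, 0.70]])
        Sigma = np.array([[0.040, 0.010],    # innovation covariance
                          [0.010, 0.020]])

        n = 500                              # equally spaced cross-sections
        z = np.empty((n, 2))
        z[0] = mu
        for i in range(n - 1):               # z[i+1] = mu + A (z[i] - mu) + eps
            z[i + 1] = mu + A @ (z[i] - mu) + rng.multivariate_normal([0, 0], Sigma)

        width, depth = np.exp(z[:, 0]), np.exp(z[:, 1])   # log-normal marginals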

  2. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to insure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  3. Consistency of kinematic and kinetic patterns during a prolonged spell of cricket fast bowling: an exploratory laboratory study.

    PubMed

    Schaefer, Andrew; O'dwyer, Nicholas; Ferdinands, René E D; Edwards, Suzi

    2018-03-01

    Due to the high incidence of lumbar spine injury in fast bowlers, international cricket organisations advocate limits on workload for bowlers under 19 years of age in training/matches. The purpose of this study was to determine whether significant changes in either fast bowling technique or movement variability could be detected throughout a 10-over bowling spell that exceeded the recommended limit. Twenty-five junior male fast bowlers bowled at competition pace while three-dimensional kinematic and kinetic data were collected for the leading leg, trunk and bowling arm. Separate analyses for the mean and within-participant standard deviation of each variable were performed using repeated measures factorial analyses of variance and computation of effect sizes. No substantial changes were observed in mean values or variability of any kinematic, kinetic or performance variables, which instead revealed a high degree of consistency in kinematic and kinetic patterns. Therefore, the suggestion that exceeding the workload limit per spell causes technique- and loading-related changes associated with lumbar injury risk is not valid and cannot be used to justify the restriction of bowling workload. For injury prevention, the focus instead should be on the long-term effect of repeated spells and on the fast bowling technique itself.

  4. Constructing Compact Takagi-Sugeno Rule Systems: Identification of Complex Interactions in Epidemiological Data

    PubMed Central

    Zhou, Shang-Ming; Lyons, Ronan A.; Brophy, Sinead; Gravenor, Mike B.

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data. PMID:23272108

  5. Constructing compact Takagi-Sugeno rule systems: identification of complex interactions in epidemiological data.

    PubMed

    Zhou, Shang-Ming; Lyons, Ronan A; Brophy, Sinead; Gravenor, Mike B

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data.
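
    A minimal sketch of the Takagi-Sugeno machinery both of these records rely on: each rule carries a fuzzy membership over the inputs and a linear consequent, and the prediction is the firing-strength-weighted average of the rule outputs. The rule-ranking statistics (R, L, and ω values) and forward selection proposed in the paper are not shown; the memberships, coefficients, and inputs below are illustrative.

        import numpy as np

        def ts_predict(x, centers, widths, coefs):
            """First-order Takagi-Sugeno output for input vector x."""
            x = np.asarray(x, dtype=float)
            # Firing strength of each rule: product of Gaussian memberships.
            w = np.exp(-np.sum(((x - centers) / widths) ** 2, axis=1))
            # Linear consequent of each rule: y_i = b_i + a_i . x
            y = coefs[:, 0] + coefs[:, 1:] @ x
            return np.sum(w * y) / np.sum(w)   # weighted average of rule outputs

        # Two toy rules over two deprivation-like indices in [0, 1].
        centers = np.array([[0.2, 0.2], [0.8, 0.8]])
        widths = np.array([[0.3, 0.3], [0.3, 0.3]])
        coefs = np.array([[0.1, 0.5, 0.2],     # rule 1: y = 0.1 + 0.5 x1 + 0.2 x2
                          [0.9, -0.3, -0.4]])  # rule 2
        print(ts_predict([0.25, 0.30], centers, widths, coefs))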

  6. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space

  7. On the use of GPS tomography to investigate water vapor variability during a Mistral/sea breeze event in southeastern France

    NASA Astrophysics Data System (ADS)

    Bastin, Sophie; Champollion, Cédric; Bock, Olivier; Drobinski, Philippe; Masson, Frédéric

    2005-03-01

    Global Positioning System (GPS) tomography analyses of water vapor, complemented by high-resolution numerical simulations, are used to investigate a Mistral/sea breeze event in the region of Marseille, France, during the ESCOMPTE experiment. This is the first time GPS tomography has been used to validate the three-dimensional water vapor concentration from a numerical simulation and to analyze a small-scale meteorological event. The high spatial and temporal resolution of the GPS analyses provides a unique insight into the evolution of the vertical and horizontal distribution of water vapor during the Mistral/sea-breeze transition.

  8. Electron dynamics and transverse-kick elimination in a high-field short-period helical microwave undulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.; Shumail, M.; Tantawi, S.

    2012-10-15

    Single-electron dynamics for a circularly polarized standing wave (CPSW) undulator synthesized from a corrugated cavity operating with a very low-loss HE11 mode are analyzed. The mechanism of the transverse drift of the CPSW undulator and its elimination are investigated, and tapered-field ends are found to suppress the kick effectively. A prototype CPSW undulator with a short undulator period of 1.4 cm, a high field (K ~ 1), a large aperture (~5 cm), and variable polarization is designed and modeled; its three-dimensional electromagnetic fields are used to study the suppression of the transverse kick.

  9. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints, as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.

  10. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimates of the emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX derives. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most outputs and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that influence the performance scores are likely to allow for calibration. An indicator based on the time matching of emission peaks was elaborated to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
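
    The Morris method itself is standard; a compact sketch of a radial one-at-a-time variant is given below, ranking inputs by μ* (mean absolute elementary effect) and flagging nonlinearity or interaction with σ. The toy model and all parameter choices are illustrative, not those of the Fukushima study.

```python
import numpy as np

def morris_mu_star(f, bounds, n_traj=50, delta=0.1, seed=None):
    """Radial one-at-a-time Morris screening: returns mu* (mean absolute
    elementary effect) and sigma (spread of effects) per input of f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = lo.size
    effects = np.empty((n_traj, d))
    for t in range(n_traj):
        x = rng.uniform(lo, hi - delta * (hi - lo))   # leave room for the step
        y0 = f(x)
        for i in range(d):
            x_step = x.copy()
            x_step[i] += delta * (hi[i] - lo[i])
            effects[t, i] = (f(x_step) - y0) / delta  # scaled elementary effect
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# toy model: only the first two of three inputs matter
f = lambda x: x[0] ** 2 + 2.0 * x[1]
mu_star, sigma = morris_mu_star(f, bounds=[(0, 1)] * 3, seed=0)
print(np.round(mu_star, 2))   # third entry should be ~0
```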

  11. Robust learning for optimal treatment decision with NP-dimensionality

    PubMed Central

    Shi, Chengchun; Song, Rui; Lu, Wenbin

    2016-01-01

    In order to identify important variables involved in making an optimal treatment decision, Lu, Zhang and Zeng (2013) proposed a penalized least-squares regression framework for a fixed number of predictors, which is robust against misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high-dimensional data sets, for example, where the dimension of the predictors is of non-polynomial (NP) order of the sample size; (ii) both the propensity score and conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with a non-concave penalty function, where the conditional mean model of the response given the predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717
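
    A rough two-step sketch in the spirit of the procedure, with l1 (lasso) penalties standing in for the paper's non-concave penalty, and an inverse-probability-weighted "modified outcome" regression used for the contrast function; all names, data, and tuning values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

def estimate_regime(X, A, Y, alpha=0.05):
    """Two-step penalized sketch: (1) l1-penalized propensity model,
    (2) penalized regression of an inverse-probability-weighted
    'modified outcome' whose conditional mean is the treatment
    contrast tau(x); the regime is d(x) = 1{tau(x) > 0}."""
    ps = LogisticRegression(penalty="l1", solver="liblinear")
    pi = np.clip(ps.fit(X, A).predict_proba(X)[:, 1], 0.05, 0.95)
    z = (A / pi - (1 - A) / (1 - pi)) * Y        # modified outcome
    contrast = Lasso(alpha=alpha).fit(X, z)
    return lambda Xnew: (contrast.predict(Xnew) > 0).astype(int)

# synthetic check: treatment helps only when x0 > 0
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
A = rng.integers(0, 2, 500)
Y = X[:, 0] * (2 * A - 1) + rng.normal(scale=0.5, size=500)
regime = estimate_regime(X, A, Y)
print((regime(X) == (X[:, 0] > 0)).mean())       # agreement with the truth
```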

  12. Practical limits on muscle synergy identification by non-negative matrix factorization in systems with mechanical constraints.

    PubMed

    Burkholder, Thomas J; van Antwerp, Keith W

    2013-02-01

    Statistical decomposition, including non-negative matrix factorization (NMF), is a convenient tool for identifying patterns of structured variability within behavioral motor programs, but it is unclear how the resolved factors relate to actual neural structures. Factors can be extracted from a uniformly sampled, low-dimensional command space. In practical application, the command space is limited, either to those activations that perform some task(s) successfully or to activations induced in response to specific perturbations. NMF was applied to muscle activation patterns synthesized from low-dimensional, synergy-like control modules mimicking simple task performance or feedback activation from proprioceptive signals. In the task-constrained paradigm, the accuracy of control module recovery was highly dependent on the sampled volume of control space, such that sampling even 50% of control space produced a substantial degradation in factor accuracy. In the feedback paradigm, NMF was not capable of extracting more than four control modules, even in a mechanical model with seven internal degrees of freedom. Reduced access to the low-dimensional control space imposed by physical constraints may result in substantial distortion of an existing low-dimensional controller, such that neither the dimensionality nor the composition of the recovered factors matches the original controller.
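
    A small synthetic experiment of the kind described, assuming scikit-learn's NMF: activations are generated from known synergy modules, the sampled command space is artificially restricted, and the recovered factors are compared with the generating ones. All dimensions and thresholds are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_synergies, n_samples = 7, 4, 500

# Synthesize activations from known non-negative synergy modules W_true
W_true = rng.uniform(0, 1, (n_muscles, n_synergies))
C_true = rng.uniform(0, 1, (n_synergies, n_samples))
# Task-like constraint: keep only commands with bounded summed drive,
# mimicking restricted sampling of the control space
keep = C_true.sum(axis=0) < 2.0
M = W_true @ C_true[:, keep]                 # muscles x kept samples

nmf = NMF(n_components=n_synergies, init="nndsvd", max_iter=2000)
C_hat = nmf.fit_transform(M.T)               # samples x synergies
W_hat = nmf.components_.T                    # muscles x synergies
# Compare recovered modules with the generating ones (up to permutation
# and scale) via best-matching column correlations
corr = np.corrcoef(W_true.T, W_hat.T)[:n_synergies, n_synergies:]
print(np.round(corr.max(axis=1), 2))
```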

  13. Risk patterns and correlated brain activities. Multidimensional statistical analysis of FMRI data in economic decision making study.

    PubMed

    van Bömmel, Alena; Song, Song; Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K

    2014-07-01

    Decision making usually involves uncertainty and risk. Understanding which parts of the human brain are activated during decisions under risk, and which neural processes underlie (risky) investment decisions, are important goals in neuroeconomics. Here, we analyze functional magnetic resonance imaging (fMRI) data on 17 subjects who were exposed to an investment decision task from Mohr, Biele, Krugel, Li, and Heekeren (in NeuroImage 49, 2556-2563, 2010b). We obtain a time series of three-dimensional images of the blood-oxygen-level dependent (BOLD) fMRI signals. We apply a panel version of the dynamic semiparametric factor model (DSFM) presented in Park, Mammen, Härdle, and Borak (in Journal of the American Statistical Association 104(485), 284-298, 2009) and identify task-related activations in space and dynamics in time. With the panel DSFM (PDSFM) we can capture the dynamic behavior of the specific brain regions common to all subjects and represent the high-dimensional time-series data in easily interpretable low-dimensional dynamic factors without large loss of variability. Further, we classify the risk attitudes of all subjects based on the estimated low-dimensional time series. Our classification analysis successfully confirms the estimated risk attitudes derived directly from subjects' decision behavior.

  14. On the three-quarter view advantage of familiar object recognition.

    PubMed

    Nonose, Kohei; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2016-11-01

    A three-quarter view, i.e., an oblique view, of familiar objects often leads to a higher subjective goodness rating when compared with other orientations. What is the source of the high goodness of oblique views? First, we confirmed that object recognition performance was also best for oblique views around 30°, even when the foreshortening disadvantage of front and side views was minimized (Experiments 1 and 2). In Experiment 3, we measured subjective ratings of view goodness and two possible determinants of view goodness: familiarity of the view, and subjective impression of three-dimensionality. Three-dimensionality was measured as the subjective saliency of visual depth information. The oblique views were rated best, most familiar, and as having the greatest three-dimensionality on average; however, cluster analyses showed that the "best" orientation varied systematically among objects. We found three clusters of objects: front-preferred, oblique-preferred, and side-preferred objects. Interestingly, recognition performance and the three-dimensionality rating were higher for oblique views irrespective of cluster. It appears that recognition efficiency is not the major source of the three-quarter view advantage; there are multiple determinants and variability among objects. This study suggests that the classical idea that a canonical view has a unique advantage in object perception requires further discussion.

  15. A simple ecohydrological model captures essentials of seasonal leaf dynamics in semi-arid tropical grasslands

    NASA Astrophysics Data System (ADS)

    Choler, P.; Sea, W.; Briggs, P.; Raupach, M.; Leuning, R.

    2009-09-01

    Modelling leaf phenology in water-controlled ecosystems remains a difficult task because of high spatial and temporal variability in the interaction of plant growth and soil moisture. Here, we move beyond widely used linear models to examine the performance of low-dimensional, nonlinear ecohydrological models that couple the dynamics of plant cover and soil moisture. The study area encompasses 400 000 km2 of semi-arid perennial tropical grasslands, dominated by C4 grasses, in the Northern Territory and Queensland (Australia). We prepared 8 yr time series (2001-2008) of climatic variables and estimates of fractional vegetation cover derived from MODIS Normalized Difference Vegetation Index (NDVI) for 400 randomly chosen sites, of which 25% were used for model calibration and 75% for model validation. We found that the mean absolute error of linear and nonlinear models did not markedly differ. However, nonlinear models presented key advantages: (1) they exhibited far less systematic error than their linear counterparts; (2) their error magnitude was consistent throughout a precipitation gradient while the performance of linear models deteriorated at the driest sites, and (3) they better captured the sharp transitions in leaf cover that are observed under high seasonality of precipitation. Our results showed that low-dimensional models including feedbacks between soil water balance and plant growth adequately predict leaf dynamics in semi-arid perennial grasslands. Because these models attempt to capture fundamental ecohydrological processes, they should be the favoured approach for prognostic models of phenology.
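
    A toy example of the low-dimensional nonlinear class discussed, assuming a logistic plant-cover equation coupled to a simple bucket soil-moisture balance; the functional forms and parameter values are illustrative, not the paper's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ecohydro(t, state, rain, n=0.4, zr=300.0, gmax=0.05, m=0.02,
             e0=3.0, sw=0.1, sstar=0.4):
    """Coupled plant-cover/soil-moisture toy model. state = (v, s):
    fractional vegetation cover and relative soil moisture."""
    v, s = state
    beta = np.clip((s - sw) / (sstar - sw), 0.0, 1.0)   # water-stress factor
    dv = gmax * beta * v * (1.0 - v) - m * v            # logistic growth/decay
    # balance: rainfall minus transpiration and soil losses (mm/day),
    # scaled by porosity n and rooting depth zr (mm)
    ds = (rain(t) - e0 * beta * v - 0.5 * e0 * s) / (n * zr)
    return [dv, ds]

# idealized 120-day wet season repeating yearly (mm/day)
rain = lambda t: 8.0 if (t % 365.0) < 120.0 else 0.2
sol = solve_ivp(ecohydro, (0.0, 730.0), [0.05, 0.3], args=(rain,), max_step=1.0)
v, s = sol.y                        # leaf cover tracks the moisture pulses
```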

  17. High-dimensional fitting of sparse datasets of CCSD(T) electronic energies and MP2 dipole moments, illustrated for the formic acid dimer and its complex IR spectrum

    NASA Astrophysics Data System (ADS)

    Qu, Chen; Bowman, Joel M.

    2018-06-01

    We present high-level, coupled-mode calculations of the infrared spectrum of the cyclic formic acid dimer. The calculations make use of full-dimensional, ab initio potential energy and dipole moment surfaces. The potential is a linear least-squares fit to 13 475 CCSD(T)-F12a/haTZ (haTZ means aug-cc-pVTZ basis set for O and C, and cc-pVTZ for H) energies, and the dipole moment surface is a fit to the dipole components calculated at the MP2/haTZ level of theory. The variables of both fits are all 45 internuclear distances (expressed as Morse variables). The potential, which is fully permutationally invariant, is the one published recently, and the dipole moment surface is newly reported here. Details of the fits, especially the dipole moment, and the database of configurations are given. The infrared spectrum of the dimer is calculated by solving the nuclear Schrödinger equation using a vibrational self-consistent field and virtual-state configuration interaction method, with subsets of the 24 normal modes, up to 15 modes. The calculations indicate strong mode coupling in the C—H and O—H stretching region of the spectrum. Comparisons are made with experiments, and the complexity of the experimental spectrum in the C—H and O—H stretching region is successfully reproduced.
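
    A schematic of the fitting step, assuming energies are fit by linear least squares to monomials in Morse variables of the pairwise distances; unlike the published surface, this sketch is low order and not permutationally invariant, and the range parameter a is illustrative.

```python
import numpy as np

def morse_features(R, a=2.0):
    """Morse variables y_ij = exp(-r_ij / a) for all internuclear pairs.
    R: (n_geom, n_atoms, 3) Cartesian geometries."""
    n = R.shape[1]
    i, j = np.triu_indices(n, k=1)
    r = np.linalg.norm(R[:, i] - R[:, j], axis=2)   # pair distances
    return np.exp(-r / a)

def fit_pes(R, E, degree=2):
    """Linear least-squares fit of energies E to monomials in the Morse
    variables up to `degree` (the published surface is higher order and
    permutationally invariant; this sketch is neither)."""
    Y = morse_features(R)
    feats = [np.ones((Y.shape[0], 1)), Y]
    if degree >= 2:
        i, j = np.triu_indices(Y.shape[1])          # includes squares
        feats.append(Y[:, i] * Y[:, j])
    A = np.hstack(feats)
    coef, *_ = np.linalg.lstsq(A, E, rcond=None)
    return coef
```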

  18. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and inflates their variances. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. SIMPLS is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit using a robust Root Mean Squared Error of Cross-Validation (R-RMSECV), a robust R2 value, and the Robust Component Selection (RCS) statistic.
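
    For orientation, a sketch comparing classical PCR and PLS by cross-validated RMSE on collinear toy data; scikit-learn provides only the classical, non-robust versions, so the outlier downweighting of RPCR/RSIMPLS is not reproduced here, and all data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Collinear toy data: 10 predictors driven by 2 latent factors
rng = np.random.default_rng(1)
n, p = 60, 10
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
y = latent[:, 0] - 2.0 * latent[:, 1] + 0.1 * rng.normal(size=n)

pcr = make_pipeline(PCA(n_components=2), LinearRegression())
pls = PLSRegression(n_components=2)
for name, model in [("PCR", pcr), ("PLS", pls)]:
    rmsecv = -cross_val_score(model, X, y, cv=5,
                              scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: RMSECV = {rmsecv:.3f}")
```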

  19. SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.

    PubMed

    Lee, Hyunyeol; Park, Jaeseok

    2013-07-01

    Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal-fluid-attenuated brain images without a long inversion recovery preparation. Despite its advantages, the weighted-averaging-based technique suffers from noise amplification resulting from the different levels of cerebrospinal fluid signal modulation over the two acquisitions. The purpose of this work is to develop a signal-to-noise-ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution, while those in the second acquisition are calculated using a two-step pseudo-steady-state signal transition with a high-flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signal in both acquisitions. Low spatial frequency signals are sampled during the high-flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate signal evolutions of brain tissues along the echo train and to optimize imaging parameters. In vivo studies demonstrate that, compared with the conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields a 74% increase in apparent signal-to-noise ratio for gray matter and a 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging. Copyright © 2012 Wiley Periodicals, Inc.

  20. Asymptotic and spectral analysis of the gyrokinetic-waterbag integro-differential operator in toroidal geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besse, Nicolas, E-mail: Nicolas.Besse@oca.eu; Institut Jean Lamour, UMR CNRS/UL 7198, Université de Lorraine, BP 70239 54506 Vandoeuvre-lès-Nancy Cedex; Coulette, David, E-mail: David.Coulette@ipcms.unistra.fr

    2016-08-15

    Achieving plasmas with good stability and confinement properties is a key research goal for magnetic fusion devices. The underlying equations are the Vlasov-Poisson and Vlasov-Maxwell (VPM) equations in three space variables, three velocity variables, and one time variable. Even in those somewhat academic cases where global equilibrium solutions are known, studying their stability requires the analysis of the spectral properties of the linearized operator, a daunting task. We have identified a model for which not only can equilibrium solutions be constructed, but many of their stability properties are amenable to rigorous analysis. It uses a class of solutions to the VPM equations (or to their gyrokinetic approximations) known as waterbag solutions, which, in particular, are piecewise constant in phase space. It also uses not only the gyrokinetic approximation of fast cyclotronic motion around magnetic field lines, but also an asymptotic approximation regarding the magnetic-field-induced anisotropy: the spatial variation along the field lines is taken to be much slower than across them. Together, these assumptions result in a drastic reduction in the dimensionality of the linearized problem, which becomes a set of two nested one-dimensional problems: an integral equation in the poloidal variable, followed by a one-dimensional complex Schrödinger equation in the radial variable. We show here that the operator associated with the poloidal variable is meromorphic in the eigenparameter, the pulsation frequency. We also prove that, for all but a countable set of real pulsation frequencies, the operator is compact and thus behaves mostly as a finite-dimensional one. Numerical algorithms based on these ideas have been implemented in a companion paper [D. Coulette and N. Besse, "Numerical resolution of the global eigenvalue problem for the gyrokinetic-waterbag model in toroidal geometry" (submitted)], and the results were found to be surprisingly close to those for the original gyrokinetic-Vlasov equations. The purpose of the present paper is to make these new ideas accessible to two readerships: applied mathematicians and plasma physicists.

  1. Approximate furrow infiltration model for time-variable ponding depth

    USDA-ARS?s Scientific Manuscript database

    A methodology is proposed for estimating furrow infiltration under time-variable ponding depth conditions. The methodology approximates the solution to the two-dimensional Richards equation, and is a modification of a procedure that was originally proposed for computing infiltration under constant ...

  2. De Finetti representation theorem for infinite-dimensional quantum systems and applications to quantum cryptography.

    PubMed

    Renner, R; Cirac, J I

    2009-03-20

    We show that the quantum de Finetti theorem holds for states on infinite-dimensional systems, provided they satisfy certain experimentally verifiable conditions. This result can be applied to prove the security of quantum key distribution based on weak coherent states or other continuous variable states against general attacks.

  3. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…
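
    As a readily available errors-in-both-variables baseline (distinct from the paper's triangular construction), a sketch using orthogonal distance regression via scipy.odr; the noise scales and data below are synthetic.

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

rng = np.random.default_rng(2)
x_true = np.linspace(0.0, 10.0, 25)
x = x_true + rng.normal(scale=0.3, size=x_true.size)              # error in x
y = 1.8 * x_true + 0.5 + rng.normal(scale=0.6, size=x_true.size)  # error in y

model = Model(lambda beta, x: beta[0] * x + beta[1])
data = RealData(x, y, sx=0.3, sy=0.6)          # stated error scales
fit = ODR(data, model, beta0=[1.0, 0.0]).run()
print("slope, intercept:", np.round(fit.beta, 3))
# ordinary least squares of y on x would bias the slope toward zero
```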

  4. Bright-dark soliton solutions for the (2+1)-dimensional variable-coefficient coupled nonlinear Schrödinger system in a graded-index waveguide

    NASA Astrophysics Data System (ADS)

    Yuan, Yu-Qiang; Tian, Bo; Xie, Xi-Yang; Chai, Jun; Liu, Lei

    2017-04-01

    Under investigation in this paper is the (2+1)-dimensional coupled nonlinear Schrödinger (NLS) system with variable coefficients, which describes the propagation of an optical beam inside a two-dimensional graded-index waveguide amplifier with polarization effects. Through a similarity transformation, we convert the system into a set of integrable defocusing (1+1)-dimensional coupled NLS equations, and subsequently construct bright-dark soliton solutions for the original system, converted from those of the latter set. With graphic analysis, we discuss soliton propagation and collision in terms of r(t), which is related to the nonlinearity, profile and gain/loss coefficients. When r(t) is a constant, the one soliton propagates with its amplitude, width and velocity unvaried (though the choice of constant affects its velocity and width), and the two solitons undergo elastic collision; when r(t) is a linear function, the velocity and width of the one soliton vary as t increases, and the collision between the two solitons is altered. In addition, bound-state solitons are observed.

  5. A 2.7 Myr record of sedimentary processes on a high-latitude continental slope: 3D seismic evidence from the mid-Norwegian margin

    NASA Astrophysics Data System (ADS)

    Montelli, A.; Dowdeswell, J. A.; Ottesen, D.; Johansen, S. E.

    2017-12-01

    An extensive three-dimensional seismic dataset is used to investigate the sedimentary processes and morphological evolution of the mid-Norwegian continental slope through the Quaternary. These data reveal hundreds of buried landforms, including channels and debris flows of variable morphology, as well as gullies, iceberg ploughmarks, slide scars and sediment waves. Slide scars, turbidity currents and debris flows comprise slope systems controlled by local slope morphology, showing the spatial variability of high-latitude sedimentation. Channels dominate the Early Pleistocene (~2.7-0.8 Ma) morphological record of the mid-Norwegian slope. During the Early Pleistocene, glacimarine sedimentation on the slope was influenced by dense bottom-water flow and turbidity currents. Glacigenic debris flows appear within the Middle-Late Pleistocene (~0.8-0 Ma) succession. Their abundance increases on Late Pleistocene palaeo-surfaces, marking a palaeo-environmental change characterised by a decreasing role for channelized turbidity currents and dense water flows. This transition coincides with the gradual shift to full-glacial ice-sheet conditions marked by the appearance of the first erosive fast-flowing ice streams and an associated increase in sediment flux to the shelf edge, emphasizing first-order climate control on the temporal variability of high-latitude sedimentary slope records.

  6. Biomechanical factors associated with mandibular cantilevers: analysis with three-dimensional finite element models.

    PubMed

    Gonda, Tomoya; Yasuda, Daiisa; Ikebe, Kazunori; Maeda, Yoshinobu

    2014-01-01

    Although the risks of using a cantilever to treat missing teeth have been described, the mechanisms remain unclear. This study aimed to reveal these mechanisms from a biomechanical perspective. The effects of various implant sites, number of implants, and superstructural connections on stress distribution in the marginal bone were analyzed with three-dimensional finite element models based on mandibular computed tomography data. Forces from the masseter, temporalis, and internal pterygoid were applied as vectors. Two three-dimensional finite element models were created with the edentulous mandible showing severe and relatively modest residual ridge resorption. Cantilevers of the premolar and molar were simulated in the superstructures in the models. The following conditions were also included as factors in the models to investigate changes: poor bone quality, shortened dental arch, posterior occlusion, lateral occlusion, double force of the masseter, and short implant. Multiple linear regression analysis with a forced-entry method was performed with stress values as the objective variable and the factors as the explanatory variable. When bone mass was high, stress around the implant caused by differences in implantation sites was reduced. When bone mass was low, the presence of a cantilever was a possible risk factor. The stress around the implant increased significantly if bone quality was poor or if increased force (eg, bruxism) was applied. The addition of a cantilever to the superstructure increased stress around implants. When large muscle forces were applied to a superstructure with cantilevers or if bone quality was poor, stress around the implants increased.

  7. A four-dimensional motion field atlas of the tongue from tagged and cine magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Prince, Jerry L.; Stone, Maureen; Wedeen, Van J.; El Fakhri, Georges; Woo, Jonghye

    2017-02-01

    Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion in a population carrying out one of these functions, it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.

  8. A Four-dimensional Motion Field Atlas of the Tongue from Tagged and Cine Magnetic Resonance Imaging.

    PubMed

    Xing, Fangxu; Prince, Jerry L; Stone, Maureen; Wedeen, Van J; Fakhri, Georges El; Woo, Jonghye

    2017-01-01

    Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion in a population carrying out one of these functions, it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.

  9. Movement variability in the golf swing of male and female skilled golfers.

    PubMed

    Horan, Sean A; Evans, Kerrie; Kavanagh, Justin J

    2011-08-01

    Despite the complexity of movement, the swings of skilled golfers are considered to be highly consistent. Interestingly, no direct investigation of movement variability or coupling variability during the swings of skilled golfers has occurred. To determine whether differences in movement variability exist between male and female skilled golfers during the downswing of the full golf swing. Three-dimensional thorax, pelvis, hand, and clubhead data were collected from 19 male (mean ± SD: age = 26 ± 7 yr) and 19 female (age = 25 ± 7 yr) skilled golfers. Variability of segmental movement and clubhead trajectory were examined at three phases of the downswing using discrete (SD) and continuous analyses (spanning set), whereas variability of intersegment coupling was examined using average coefficient of correspondence. Compared with males, females exhibited higher thorax and pelvis variability for axial rotation at the midpoint of the downswing and ball contact (BC). Similarly, thorax-pelvis coupling variability was higher for females than males at both the midpoint of the downswing and BC. Regardless of thorax and pelvis motion, the variability of hand and clubhead trajectory sequentially decreased from the top of the backswing to BC for both males and females. Male and female skilled golfers use different upper body movement strategies during the downswing while achieving similarly low levels of clubhead trajectory variability at BC. It is apparent that the priority of skilled golfers is to progressively minimize hand and clubhead trajectory variability toward BC, despite the individual motion or coupling of the thorax and pelvis.

  10. Sex differences in the behavior of children with the 22q11 deletion syndrome

    PubMed Central

    Sobin, Christina; Kiley-Brabeck, Karen; Monk, Samantha Hadley; Khuri, Jananne; Karayiorgou, Maria

    2009-01-01

    High rates of psychiatric impairment in adults with 22q11DS suggest that the behavioral trajectories of children with 22q11DS may provide critical etiologic insights. Past findings that report DSM diagnoses are extremely variable; moreover, sex differences in behavior have not yet been examined. Dimensional CBCL ratings from 82 children, including 51 with 22q11DS and 31 control siblings, were analyzed. Strikingly consistent with rates of psychiatric impairment among affected adults, 25% of children with 22q11DS had high CBCL scores for Total Impairment, and 20% had high CBCL Internalizing Scale scores. Males accounted for 90% of high Internalizing scores and 67% of high Total Impairment scores. Attention and Social Problems were ubiquitous; more affected males than females (23% vs. 4%) scored high on Thought Problems. With regard to CBCL/DSM overlap, 20% of affected males, as compared with no affected females, had one or more high CBCL ratings in the absence of a DSM diagnosis. The behaviors of children with 22q11DS are characterized by marked sex differences when rated dimensionally, with significantly more males experiencing Internalizing and Thought Problems. Categorical diagnoses do not reflect behavioral differences between male and female children with 22q11DS, and may miss significant behavior problems in 20% of affected males. PMID:19217670

  11. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and the .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) a known feature set, and (3) all features. Only the first is of practical interest; the other two are needed for comparison purposes. We observe that the true and estimated errors tend to be much more correlated with a known feature set than with either feature selection or the use of all features; the comparison between the latter two shows no general trend and differs across models.
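
    A small Monte Carlo sketch of the central quantity, the correlation between the true error and its cross-validation estimate, assuming LDA on two spherical Gaussian classes; sample sizes and the effect size delta are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import LeaveOneOut, cross_val_score

def error_correlation(n=50, p=20, delta=1.0, n_rep=100, seed=0):
    """Correlation between the true error of LDA and its LOOCV estimate
    over repeated draws of small training sets (two Gaussian classes)."""
    rng = np.random.default_rng(seed)
    shift = delta / np.sqrt(p)
    true_err, est_err = [], []
    for _ in range(n_rep):
        y = rng.integers(0, 2, n)
        X = rng.normal(size=(n, p)) + shift * y[:, None]
        clf = LDA().fit(X, y)
        yt = rng.integers(0, 2, 4000)                  # large test set
        Xt = rng.normal(size=(4000, p)) + shift * yt[:, None]
        true_err.append(1.0 - clf.score(Xt, yt))       # ~ true error
        est_err.append(1.0 - cross_val_score(LDA(), X, y,
                                             cv=LeaveOneOut()).mean())
    return np.corrcoef(true_err, est_err)[0, 1]

print("corr(true, LOOCV):", round(error_correlation(), 2))
```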

  12. A repeatable geometric morphometric approach to the analysis of hand entheseal three-dimensional form.

    PubMed

    Karakostis, Fotios Alexandros; Hotz, Gerhard; Scherf, Heike; Wahl, Joachim; Harvati, Katerina

    2018-05-01

    The purpose of this study was to put forth a precise landmark-based technique for reconstructing the three-dimensional shape of human entheseal surfaces, and to investigate whether the shape of human entheses is related to their size. The effects of age-at-death and bone length on entheseal shape were also assessed. The sample comprised high-definition three-dimensional models of three right-hand entheseal surfaces from 45 male adult individuals of known age. For each enthesis, a particular landmark configuration was introduced, whose precision was tested both within and between observers. The effect of three-dimensional size, age-at-death, and bone length on shape was investigated through shape regression. The method showed high intra-observer and inter-observer repeatability. All entheses showed significant allometry, with the area of opponens pollicis demonstrating the most substantial relationship, particularly due to variation related to its proximal elongated ridge. The effects of age-at-death and bone length on entheses were limited. The introduced methodology can set a reliable basis for further research on the factors affecting entheseal shape. Using both size and shape variables can provide further information on entheseal variation and its biomechanical implications. The low entheseal variation by age verifies that specimens under 50 years of age are not substantially affected by age-related changes. The lack of correlation between entheseal shape and bone length or age implies that other factors may regulate entheseal surfaces. Future research should focus on multivariate shape patterns among entheses and their association with occupation. © 2018 Wiley Periodicals, Inc.

  13. Postural tasks are associated with center of pressure spatial patterns of three-dimensional statokinesigrams in young and elderly healthy subjects.

    PubMed

    Baracat, Patrícia Junqueira Ferraz; de Sá Ferreira, Arthur

    2013-12-01

    The present study investigated the association between postural tasks and center of pressure spatial patterns of three-dimensional statokinesigrams. Young (n=35; 27.0±7.7 years) and elderly (n=38; 67.3±8.7 years) healthy volunteers maintained an undisturbed standing position during postural tasks characterized by combined sensory (vision/no vision) and biomechanical challenges (feet apart/together). A method for the analysis of three-dimensional statokinesigrams based on nonparametric statistics and image-processing analysis was employed. Four patterns of spatial distribution were derived from ankle and hip strategies according to the quantity (single; double; multi) and location (anteroposterior; mediolateral) of high-density regions on three-dimensional statokinesigrams. Significant associations between postural task and spatial pattern were observed (young: gamma=0.548, p<.001; elderly: gamma=0.582, p<.001). Robustness analysis revealed small changes related to parameter choices for histogram processing. MANOVA revealed multivariate main effects for postural task [Wilks' Lambda=0.245, p<.001] and age [Wilks' Lambda=0.308, p<.001], with interaction [Wilks' Lambda=0.732, p<.001]. The quantity of high-density regions was positively correlated with stabilogram and statokinesigram variables (p<.05 or lower). In conclusion, postural tasks are associated with center of pressure spatial patterns, and these are similar in young and elderly healthy volunteers. Single-centered patterns reflected more stable postural conditions and were more frequent with complete visual input and a wide base of support. Copyright © 2013 Elsevier B.V. All rights reserved.
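
    A sketch of one way to count high-density regions of a statokinesigram, assuming a 2-D histogram thresholded at a high quantile followed by connected-component labeling; the bin count and quantile are illustrative choices, not the paper's calibrated parameters.

```python
import numpy as np
from scipy.ndimage import label

def count_high_density_regions(cop_ap, cop_ml, bins=40, q=0.9):
    """Bin the centre-of-pressure trajectory into a 2-D histogram,
    threshold it at a high quantile of the occupied bins, and count
    the connected high-density regions."""
    H, _, _ = np.histogram2d(cop_ap, cop_ml, bins=bins)
    thresh = np.quantile(H[H > 0], q)
    _, n_regions = label(H >= thresh)
    return n_regions

# toy two-centre sway pattern in the anteroposterior direction
rng = np.random.default_rng(3)
ap = np.r_[rng.normal(-1.0, 0.2, 2000), rng.normal(1.0, 0.2, 2000)]
ml = rng.normal(0.0, 0.2, 4000)
print(count_high_density_regions(ap, ml))   # expect 2 (a "double" pattern)
```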

  14. Comment: Spurious Correlation and Other Observations on Experimental Design for Engineering Dimensional Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.

    2013-08-01

    This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models, and proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. The concept of spurious correlation is also raised and discussed. Spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.

  15. Two-dimensional analytical modeling of a linear variable filter for spectral order sorting.

    PubMed

    Ko, Cheng-Hao; Wu, Yueh-Hsun; Tsai, Jih-Run; Wang, Bang-Ji; Chakraborty, Symphony

    2016-06-10

    A two-dimensional thin-film thickness model, based on the geometry of a commercial coater, which can more effectively calculate the profiles of linear variable filters (LVFs), has been developed. This is done by isolating the substrate plane as an independent (local) coordinate system, while rotation and translation matrices are used to establish the coordinate transformation; the characteristic vector is combined with a step function to build a borderline that determines whether the local mask blocks the deposition. The height of the local mask has been increased up to 40 mm in the proposed model, and two-dimensional simulations are developed to obtain the thin-film deposition profile on the substrate inside the evaporation chamber, achieving the required LVF zone width in a more economical way than previously reported [Opt. Express 23, 5102 (2015)].

  16. Solitons interaction and integrability for a (2+1)-dimensional variable-coefficient Broer-Kaup system in water waves

    NASA Astrophysics Data System (ADS)

    Zhao, Xue-Hui; Tian, Bo; Guo, Yong-Jiang; Li, Hui-Min

    2018-03-01

    Under investigation in this paper is a (2+1)-dimensional variable-coefficient Broer-Kaup system in water waves. Via symbolic computation, Bell polynomials and the Hirota method, the Bäcklund transformation, Lax pair, bilinear forms, and one- and two-soliton solutions are derived. Propagation and interaction of the solitons are illustrated: the amplitude and shape of the one soliton remain invariant during propagation, which implies that the transport of energy is stable for the (2+1)-dimensional water waves, and inelastic interactions between the two solitons are discussed. Elastic interactions between the two parabolic-, cubic- and periodic-type solitons are displayed, where the solitonic amplitudes and shapes remain unchanged except for certain phase shifts. Inelastically, however, the amplitudes of the two solitons show a linear superposition after each interaction, which is called a soliton resonance phenomenon.

  17. Smoothed Particle Hydrodynamics Simulations of Ultrarelativistic Shocks with Artificial Viscosity

    NASA Astrophysics Data System (ADS)

    Siegler, S.; Riffert, H.

    2000-03-01

    We present a fully Lagrangian conservation form of the general relativistic hydrodynamic equations for perfect fluids with artificial viscosity in a given arbitrary background spacetime. This conservation formulation is achieved by choosing suitable Lagrangian time evolution variables, from which the generic fluid variables of rest-mass density, 3-velocity, and thermodynamic pressure have to be determined. We present the corresponding equations for an ideal gas and show the existence and uniqueness of the solution. On the basis of the Lagrangian formulation we have developed a three-dimensional general relativistic smoothed particle hydrodynamics (SPH) code using the standard SPH formalism as known from nonrelativistic fluid dynamics. One-dimensional simulations of a shock tube and a wall shock are presented together with a two-dimensional test calculation of an inclined shock tube. With our method we can model ultrarelativistic fluid flows including shocks with Lorentz factors of even 1000.

  18. Quantitative and qualitative measure of intralaboratory two-dimensional protein gel reproducibility and the effects of sample preparation, sample load, and image analysis.

    PubMed

    Choe, Leila H; Lee, Kelvin H

    2003-10-01

    We investigate one approach to assessing the quantitative variability in two-dimensional gel electrophoresis (2-DE) separations based on gel-to-gel variability, sample preparation variability, sample load differences, and the effect of automation on image analysis. We observe that 95% of spots present in three out of four replicate gels exhibit less than a 0.52 coefficient of variation (CV) in fluorescent stain intensity (% volume) for a single sample run on multiple gels. When four parallel sample preparations are performed, this value increases to 0.57. We do not observe any significant change in quantitative value for an increase or decrease in sample load of 30% when appropriate image analysis variables are used. Increasing use of automation, while necessary in modern 2-DE experiments, does change the observed level of quantitative and qualitative variability among replicate gels. The number of spots that change qualitatively for a single sample run in parallel varies from CV = 0.03 for fully manual analysis to CV = 0.20 for fully automated analysis. We present a systematic method by which a single laboratory can measure gel-to-gel variability using only three gel runs.
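
    A minimal sketch of the replicate-gel CV computation described, assuming a gels-by-spots intensity matrix with NaN marking unmatched spots; the presence rule and demo data are illustrative.

```python
import numpy as np

def spot_cv(volumes, min_present=3):
    """CV of spot intensities across replicate gels. volumes has shape
    (n_gels, n_spots) with NaN where a spot was not matched; only spots
    present in at least `min_present` gels are scored."""
    present = np.sum(~np.isnan(volumes), axis=0) >= min_present
    v = volumes[:, present]
    return np.nanstd(v, axis=0, ddof=1) / np.nanmean(v, axis=0)

rng = np.random.default_rng(4)
vols = rng.lognormal(mean=2.0, sigma=0.4, size=(4, 300))  # 4 replicate gels
cv = spot_cv(vols)
print("95th-percentile CV:", round(float(np.quantile(cv, 0.95)), 2))
```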

  19. Mining High-Dimensional Data

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data has become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are unique challenges in mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification, and discuss how these methods deal with the challenges of high dimensionality.

  20. Variability in Humoral Immunity to Measles Vaccine: New Developments

    PubMed Central

    Haralambieva, Iana H.; Kennedy, Richard B.; Ovsyannikova, Inna G.; Whitaker, Jennifer A.; Poland, Gregory A.

    2015-01-01

    Despite the existence of an effective measles vaccine, resurgence in measles cases in the United States and across Europe has occurred, including in individuals vaccinated with two doses of the vaccine. Host genetic factors result in inter-individual variation in measles vaccine-induced antibodies, and play a role in vaccine failure. Studies have identified HLA and non-HLA genetic influences that individually or jointly contribute to the observed variability in the humoral response to vaccination among healthy individuals. In this exciting era, new high-dimensional approaches and techniques including vaccinomics, systems biology, GWAS, epitope prediction and sophisticated bioinformatics/statistical algorithms, provide powerful tools to investigate immune response mechanisms to the measles vaccine. These might predict, on an individual basis, outcomes of acquired immunity post measles vaccination. PMID:26602762

  1. Influence of Smartphones and Software on Acoustic Voice Measures

    PubMed Central

    GRILLO, ELIZABETH U.; BROSIOUS, JENNA N.; SORRELL, STACI L.; ANAND, SUPRAJA

    2016-01-01

    This study assessed the within-subject variability of voice measures captured using different recording devices (i.e., smartphones and head mounted microphone) and software programs (i.e., Analysis of Dysphonia in Speech and Voice (ADSV), Multi-dimensional Voice Program (MDVP), and Praat). Correlations between the software programs that calculated the voice measures were also analyzed. Results demonstrated no significant within-subject variability across devices and software and that some of the measures were highly correlated across software programs. The study suggests that certain smartphones may be appropriate to record daily voice measures representing the effects of vocal loading within individuals. In addition, even though different algorithms are used to compute voice measures across software programs, some of the programs and measures share a similar relationship. PMID:28775797

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayman, Ken J; Ade, Brian J; Weber, Charles F

    High-dimensional, nonlinear function estimation using large datasets is a current area of interest in the machine learning community, and applications may be found throughout the analytical sciences, where ever-growing datasets are making more information available to the analyst. In this paper, we leverage the existing relevance vector machine, a sparse Bayesian version of the well-studied support vector machine, and expand the method to include integrated feature selection and automatic function shaping. These innovations produce an algorithm that is able to distinguish variables that are useful for making predictions of a response from variables that are unrelated or confusing. We test the technology using synthetic data, conduct initial performance studies, and develop a model capable of making position-independent predictions of the core-averaged burnup using a single specimen drawn randomly from a nuclear reactor core.
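
    The paper's extended relevance vector machine is not specified in this record; as a close, readily available relative, a sketch using scikit-learn's sparse Bayesian ARDRegression (automatic relevance determination) illustrates the idea of pruning irrelevant variables on synthetic data. The pruning cutoff is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(5)
n, p = 200, 30
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)  # 2 of 30 relevant

ard = ARDRegression().fit(X, y)
kept = np.flatnonzero(np.abs(ard.coef_) > 0.1)   # illustrative cutoff
print("variables kept:", kept)                   # expect [0 1]
```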

  3. Clinic value of two-dimensional speckle tracking combined with adenosine stress echocardiography for assessment of myocardial viability.

    PubMed

    Ran, Hong; Zhang, Ping-Yang; Fang, Ling-Ling; Ma, Xiao-Wu; Wu, Wen-Fang; Feng, Wang-Fei

    2012-07-01

    To evaluate whether myocardial strain under adenosine stress, calculated from two-dimensional echocardiography by automatic frame-by-frame tracking of natural acoustic markers, enables objective description of myocardial viability in the clinic. Two-dimensional echocardiography and two-dimensional speckle tracking imaging (2D STI) were performed at rest and again after adenosine was infused at 140 μg/kg/min over a period of 6 minutes in 36 stable patients with previous myocardial infarction. Radionuclide myocardial perfusion/metabolic imaging, which served as the "gold standard" for defining myocardial viability, was then performed in all patients within 1 day. Two-dimensional speckle tracking images were acquired at rest and after adenosine administration. An automatic frame-by-frame tracking system of natural acoustic echocardiographic markers was used to calculate 2D strain variables, including peak-systolic circumferential strain (CS(peak-sys)), radial strain (RS(peak-sys)), and longitudinal strain (LS(peak-sys)). Segments with abnormal motion on visual assessment of two-dimensional echocardiography were selected for further study. Among the 320 abnormal-motion segments in the 36 patients, 126 regions were viable whereas 194 were nonviable according to radionuclide imaging. At rest, there were no significant differences in 2D strain between the viable and nonviable myocardium. After adenosine administration (140 μg/kg/min), CS(peak-sys) of the viable myocardium changed little, while RS(peak-sys) and LS(peak-sys) increased significantly compared with rest. In the nonviable group, CS(peak-sys), RS(peak-sys), and LS(peak-sys) showed no significant changes during adenosine administration. After adenosine administration, RS(peak-sys) and LS(peak-sys) in the viable group increased significantly compared with the nonviable group. The strain data obtained were highly reproducible, with small intraobserver and interobserver variabilities. A change of radial strain of more than 9.5% had a sensitivity of 83.9% and a specificity of 81.4% for viability, whereas a change of longitudinal strain of more than 14.6% gave a sensitivity of 86.7% and a specificity of 90.2%. 2D STI combined with adenosine stress echocardiography could provide a new and reliable method to identify myocardial viability. © 2012, Wiley Periodicals, Inc.

  4. Spectroscopic properties of a two-dimensional time-dependent Cepheid model. I. Description and validation of the model

    NASA Astrophysics Data System (ADS)

    Vasilyev, V.; Ludwig, H.-G.; Freytag, B.; Lemasle, B.; Marconi, M.

    2017-10-01

    Context. Standard spectroscopic analyses of Cepheid variables are based on hydrostatic one-dimensional model atmospheres, with convection treated using various formulations of mixing-length theory. Aims: This paper aims to carry out an investigation of the validity of the quasi-static approximation in the context of pulsating stars. We check the adequacy of a two-dimensional time-dependent model of a Cepheid-like variable with focus on its spectroscopic properties. Methods: With the radiation-hydrodynamics code CO5BOLD, we construct a two-dimensional time-dependent envelope model of a Cepheid with Teff = 5600 K, log g = 2.0, solar metallicity, and a 2.8-day pulsation period. Subsequently, we perform extensive spectral syntheses of a set of artificial iron lines in local thermodynamic equilibrium. The set of lines allows us to systematically study effects of line strength, ionization stage, and excitation potential. Results: We evaluate the microturbulent velocity, line asymmetry, projection factor, and Doppler shifts. The microturbulent velocity, averaged over all lines, depends on the pulsational phase and varies between 1.5 and 2.7 km s-1. The derived projection factor lies between 1.23 and 1.27, which agrees with observational results. The mean Doppler shift is non-zero and negative, -1 km s-1, after averaging over several full periods and lines. This residual line-of-sight velocity (related to the "K-term") is primarily caused by horizontal inhomogeneities, and consequently we interpret it as the familiar convective blueshift ubiquitously present in non-pulsating late-type stars. Limited statistics prevent firm conclusions on the line asymmetries. Conclusions: Our two-dimensional model provides a reasonably accurate representation of the spectroscopic properties of a short-period Cepheid-like variable star. Some properties are primarily controlled by convective inhomogeneities rather than by the Cepheid-defining pulsations. Extended multi-dimensional modelling offers new insight into the nature of pulsating stars.

  5. Reliability of tunnel angle in ACL reconstruction: two-dimensional versus three-dimensional guide technique.

    PubMed

    Leiter, Jeff R S; de Korompay, Nevin; Macdonald, Lindsey; McRae, Sheila; Froese, Warren; Macdonald, Peter B

    2011-08-01

    To compare the reliability of tibial tunnel position and angle produced with a standard ACL guide (two-dimensional guide) or Howell 65° Guide (three-dimensional guide) in the coronal and sagittal planes. In the sagittal plane, the dependent variables were the angle of the tibial tunnel relative to the tibial plateau and the position of the tibial tunnel with respect to the most posterior aspect of the tibia. In the coronal plane, the dependent variables were the angle of the tunnel with respect to the medial joint line of the tibia and the medial and lateral placement of the tibial tunnel relative to the most medial aspect of the tibia. The position and angle of the tibial tunnel in the coronal and sagittal planes were determined from anteroposterior and lateral radiographs, respectively, taken 2-6 months postoperatively. The two-dimensional and three-dimensional guide groups included 28 and 24 sets of radiographs, respectively. Tibial tunnel position was identified, and tunnel angle measurements were completed. Multiple investigators measured the position and angle of the tunnel 3 times, at least 7 days apart. The angle of the tibial tunnel in the coronal plane using a two-dimensional guide (61.3 ± 4.8°) was more horizontal (P < 0.05) than tunnels drilled with a three-dimensional guide (64.7 ± 6.2°). The position of the tibial tunnel in the sagittal plane was more anterior (P < 0.05) in the two-dimensional (41.6 ± 2.5%) guide group compared to the three-dimensional guide group (43.3 ± 2.9%). The Howell Tibial Guide allows for reliable placement of the tibial tunnel in the coronal plane at an angle of 65°. Tibial tunnels were within the anatomical footprint of the ACL with either technique. Future studies should investigate the effects of tibial tunnel angle on knee function and patient quality of life. Case-control retrospective comparative study, Level III.

  6. Introducing hydrological information in rainfall intensity-duration thresholds

    NASA Astrophysics Data System (ADS)

    Greco, Roberto; Bogaard, Thom

    2016-04-01

    Regional landslide hazard assessment is mainly based on empirically derived precipitation-intensity-duration (PID) thresholds. Generally, two features of rainfall events are plotted to discriminate between observed occurrence and absence of mass movements, and a separation line is then drawn in logarithmic space. Although successfully applied in many case studies, such PID thresholds suffer from many false positives as well as limited physical process insight. One of the main limitations is that they do not include any information about the hydrological processes occurring along the slopes, so that triggering is related only to rainfall characteristics. In order to introduce such hydrological information into the definition of rainfall thresholds for shallow landslide triggering assessment, this study proposes non-dimensional rainfall characteristics. In particular, rain storm depth, intensity, and duration are divided by a characteristic infiltration depth, a characteristic infiltration rate, and a characteristic duration, respectively. These variables depend on the hydraulic properties and on the moisture state of the soil cover at the beginning of the precipitation. The proposed variables are applied to the case of a slope covered with shallow pyroclastic deposits in Cervinara (southern Italy), for which experimental data of hourly rainfall and soil suction were available. Rainfall thresholds defined with the proposed non-dimensional variables perform significantly better than those defined with dimensional variables, in either the intensity-duration plane or the depth-duration plane.
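
    The scaling step described above is simple enough to sketch. In the snippet below, a saturated hydraulic conductivity stands in for the characteristic infiltration rate and an antecedent storage deficit for the characteristic infiltration depth; these names and values are illustrative assumptions, not the paper's calibrated quantities.

```python
# Minimal sketch of non-dimensionalizing rainfall-event characteristics.
# k_sat_mm_h and storage_deficit_mm are hypothetical characteristic scales
# (assumed here; the paper derives them from soil hydraulic properties
# and antecedent moisture).

def nondimensionalize(depth_mm, duration_h, k_sat_mm_h, storage_deficit_mm):
    """Scale storm depth, intensity, and duration by characteristic
    infiltration quantities."""
    intensity_mm_h = depth_mm / duration_h
    t_char_h = storage_deficit_mm / k_sat_mm_h        # characteristic duration
    return {
        "depth*": depth_mm / storage_deficit_mm,      # depth / infiltration depth
        "intensity*": intensity_mm_h / k_sat_mm_h,    # intensity / infiltration rate
        "duration*": duration_h / t_char_h,           # duration / char. duration
    }

print(nondimensionalize(depth_mm=80.0, duration_h=12.0,
                        k_sat_mm_h=5.0, storage_deficit_mm=60.0))
```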

  7. Time-dependent Models of Magnetospheric Accretion onto Young Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, C. E.; Espaillat, C. C.; Owen, J. E.

    Accretion onto Classical T Tauri stars is thought to take place through the action of magnetospheric processes, with gas in the inner disk being channeled onto the star’s surface by the stellar magnetic field lines. Young stars are known to accrete material in a time-variable manner, and the source of this variability remains an open problem, particularly on the shortest (∼day) timescales. Using one-dimensional time-dependent numerical simulations that follow the field line geometry, we find that for plausibly realistic young stars, steady-state transonic accretion occurs naturally in the absence of any other source of variability. However, we show that if the density in the inner disk varies smoothly in time with ∼day-long timescales (e.g., due to turbulence), this complication can lead to the development of shocks in the accretion column. These shocks propagate along the accretion column and ultimately hit the star, leading to rapid, large amplitude changes in the accretion rate. We argue that when these shocks hit the star, the observed time dependence will be a rapid increase in accretion luminosity, followed by a slower decline, and could be an explanation for some of the short-period variability observed in accreting young stars. Our one-dimensional approach bridges previous analytic work to more complicated multi-dimensional simulations and observations.

  8. Computation of viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    1989-01-01

    Incompressible Navier-Stokes solution methods and their applications to three-dimensional flows are discussed. A brief review of existing methods is given followed by a detailed description of recent progress on development of three-dimensional generalized flow solvers. Emphasis is placed on primitive variable formulations which are most promising and flexible for general three-dimensional computations of viscous incompressible flows. Both steady- and unsteady-solution algorithms and their salient features are discussed. Finally, examples of real world applications of these flow solvers are given.

  9. Determination of the temperature field of shell structures

    NASA Astrophysics Data System (ADS)

    Rodionov, N. G.

    1986-10-01

    A stationary heat conduction problem is formulated for the case of shell structures, such as those found in gas-turbine and jet engines. A two-dimensional elliptic differential equation of stationary heat conduction is obtained which allows, in an approximate manner, for temperature changes along a third variable, i.e., the shell thickness. The two-dimensional problem is reduced to a series of one-dimensional problems which are then solved using efficient difference schemes. The approach proposed here is illustrated by a specific example.

  10. Bloch Surface Waves Biosensors for High Sensitivity Detection of Soluble ERBB2 in a Complex Biological Environment.

    PubMed

    Sinibaldi, Alberto; Sampaoli, Camilla; Danz, Norbert; Munzert, Peter; Sonntag, Frank; Centola, Fabio; Occhicone, Agostino; Tremante, Elisa; Giacomini, Patrizio; Michelotti, Francesco

    2017-08-17

    We report on the use of one-dimensional photonic crystals to detect clinically relevant concentrations of the cancer biomarker ERBB2 in cell lysates. Overexpression of the ERBB2 protein is associated with aggressive breast cancer subtypes. To detect soluble ERBB2, we developed an optical set-up which operates in both label-free and fluorescence modes. The detection approach makes use of a sandwich assay, in which the one-dimensional photonic crystals sustaining Bloch surface waves are modified with monoclonal antibodies in order to guarantee high specificity during the biological recognition. We present the results of exemplary protein G based label-free assays in complex biological matrices, reaching an estimated limit of detection of 0.5 ng/mL. On-chip and chip-to-chip variability of the results is also addressed, and repeatability rates are provided. Moreover, results on fluorescence operation demonstrate the capability to perform highly sensitive cancer biomarker assays, reaching a resolution of 0.6 ng/mL without protein G assistance. The resolution obtained in both modes meets international guidelines and recommendations (15 ng/mL) for ERBB2 quantification assays, providing an alternative tool to phenotype and diagnose molecular cancer subtypes.

  11. 2D VARIABLY SATURATED FLOWS: PHYSICAL SCALING AND BAYESIAN ESTIMATION

    EPA Science Inventory

    A novel dimensionless formulation for water flow in two-dimensional variably saturated media is presented. It shows that scaling physical systems requires conservation of the ratio between capillary forces and gravity forces. A direct result of this finding is that for two phys...

  12. Temperature, Pressure, and Infrared Image Survey of an Axisymmetric Heated Exhaust Plume

    NASA Technical Reports Server (NTRS)

    Nelson, Edward L.; Mahan, J. Robert; Birckelbaw, Larry D.; Turk, Jeffrey A.; Wardwell, Douglas A.; Hange, Craig E.

    1996-01-01

    The focus of this research is to numerically predict an infrared image of a jet engine exhaust plume, given field variables such as temperature, pressure, and exhaust plume constituents as a function of spatial position within the plume, and to compare this predicted image directly with measured data. This work is motivated by the need to validate computational fluid dynamic (CFD) codes through infrared imaging. The technique of reducing the three-dimensional field variable domain to a two-dimensional infrared image invokes the use of an inverse Monte Carlo ray trace algorithm and an infrared band model for exhaust gases. This report describes an experiment in which the above-mentioned field variables were carefully measured. Results from this experiment, namely tables of measured temperature and pressure data, as well as measured infrared images, are given. The inverse Monte Carlo ray trace technique is described. Finally, experimentally obtained infrared images are directly compared to infrared images predicted from the measured field variables.

  13. Rank-based estimation in the ℓ1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, which is itself an important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, with the clinical effects allowed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
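
    The flavor of such an analysis can be sketched with off-the-shelf tools. The snippet below is not the authors' rank-based censored-data estimator: it ignores censoring and uses an ordinary ℓ1 penalty, but it illustrates the idea of adjusting for clinical covariates nonlinearly (here via a spline basis) before selecting gene-expression variables. All names and sizes are illustrative.

```python
# Sketch only: nonlinear clinical adjustment followed by l1 selection on
# high-dimensional gene features (synthetic data; censoring ignored).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(4)
n, p = 120, 500
clinical = rng.uniform(0.0, 1.0, (n, 2))            # prognostic clinical variables
genes = rng.standard_normal((n, p))                 # gene expression features
y = (np.sin(3.0 * clinical[:, 0])                   # nonlinear clinical effect
     + genes[:, :3] @ np.array([1.0, -1.0, 0.5])    # three truly active genes
     + 0.3 * rng.standard_normal(n))

# Adjust the outcome for clinical covariates via a spline basis expansion.
S = SplineTransformer(degree=3, n_knots=5).fit_transform(clinical)
resid = y - LinearRegression().fit(S, y).predict(S)

# l1-penalized selection on the residualized outcome.
sel = Lasso(alpha=0.05).fit(genes, resid)
print("selected genes:", np.flatnonzero(sel.coef_))
```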

  14. Data-driven discovery of Koopman eigenfunctions using deep learning

    NASA Astrophysics Data System (ADS)

    Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.
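
    The architecture the abstract outlines can be sketched compactly. The PyTorch snippet below is a minimal, assumed version of such a network (layer widths, latent dimension, and loss weighting are all illustrative choices, not the authors' settings): an encoder plays the role of the Koopman eigenfunction coordinates, a bias-free linear layer advances them one step, and a decoder inverts the change of variables.

```python
# Minimal sketch (assumed sizes): encoder/decoder change of variables with
# a finite-dimensional linear operator K acting on the latent coordinates.
import torch
import torch.nn as nn

class KoopmanNet(nn.Module):
    def __init__(self, state_dim=2, latent_dim=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, state_dim))
        # Linear dynamics on the discovered variables: z_{t+1} = K z_t.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x_t):
        z_t = self.encoder(x_t)
        return self.decoder(z_t), self.decoder(self.K(z_t))

def loss(model, x_t, x_next):
    recon, pred = model(x_t)
    # Reconstruction plus one-step linear-prediction error (equal weights assumed).
    return (nn.functional.mse_loss(recon, x_t)
            + nn.functional.mse_loss(pred, x_next))

model = KoopmanNet()
x_t, x_next = torch.randn(32, 2), torch.randn(32, 2)   # stand-in snapshot pairs
print(loss(model, x_t, x_next).item())
```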

  15. Characterizing Transitions Between Decadal States of the Tropical Pacific using State Space Reconstruction

    NASA Astrophysics Data System (ADS)

    Ramesh, N.; Cane, M. A.

    2017-12-01

    The complex coupled ocean-atmosphere system of the Tropical Pacific generates variability on timescales from intraseasonal to multidecadal. Pacific Decadal Variability (PDV) is among the key drivers of global climate, with effects on hydroclimate on several continents, marine ecosystems, and the rate of global mean surface temperature rise under anthropogenic greenhouse gas forcing. Predicting phase shifts in the PDV would therefore be highly useful. However, the small number of PDV phase shifts that have occurred in the observational record poses a substantial challenge to developing an understanding of the mechanisms that underlie decadal variability. In this study, we use a 100,000-year unforced simulation from an intermediate-complexity model of the Tropical Pacific region that has been shown to produce PDV comparable to that in the real world. We apply the Simplex Projection method to the NINO3 index from this model to reconstruct a shadow manifold that preserves the topology of the true attractor of this system. We find that the high- and low-variance phases of PDV emerge as a pair of regimes in a 3-dimensional state space, and that the transitions between decadal states lie in a highly predictable region of the attractor. We then use a random forest algorithm to develop a physical interpretation of the processes associated with these highly predictable transitions. We find that transitions to low-variance states are most likely to occur approximately 2.5 years after an El Nino event, and that ocean-atmosphere variables in the southeastern Tropical Pacific play a crucial role in driving these transitions.
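
    The first step of such an analysis, reconstructing a shadow manifold from a scalar index, reduces to time-delay embedding. The sketch below shows that step on a synthetic series; the embedding dimension and lag are illustrative assumptions, and the simplex projection and random forest stages are omitted.

```python
# Sketch: delay-embed a scalar time series (a stand-in for NINO3) to
# reconstruct a shadow manifold in the sense of Takens' theorem.
import numpy as np

def delay_embed(x, dim=3, lag=6):
    """Return an (N - (dim - 1) * lag, dim) array of delay vectors."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 60.0, 2000)) + 0.1 * rng.standard_normal(2000)
manifold = delay_embed(x, dim=3, lag=6)   # points on the reconstructed attractor
print(manifold.shape)
```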

  16. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

    We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, which are one of the dimensionality-reduction techniques, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of the traffic model, called the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
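
    A bare-bones version of the dimensionality-reduction step is easy to write down. The sketch below computes a basic diffusion map for a point cloud (a stand-in for many-body configurations); the kernel bandwidth and number of retained coordinates are assumed tuning choices, and the anisotropic normalizations often used in practice are omitted.

```python
# Sketch of a basic diffusion map: Gaussian kernel -> Markov matrix ->
# leading nontrivial eigenvectors as coarse-grained variables.
import numpy as np

def diffusion_map(X, n_coords=2, eps=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return vecs.real[:, order[1 : n_coords + 1]]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))        # stand-in for system configurations
coords = diffusion_map(X, n_coords=2)     # coarse variables for the dynamics
print(coords.shape)
```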

  17. Variational analysis of the coupling between a geometrically exact Cosserat rod and an elastic continuum

    NASA Astrophysics Data System (ADS)

    Sander, Oliver; Schiela, Anton

    2014-12-01

    We formulate the static mechanical coupling of a geometrically exact Cosserat rod to a nonlinearly elastic continuum. In this setting, appropriate coupling conditions have to connect a one-dimensional model with director variables to a three-dimensional model without directors. Two alternative coupling conditions are proposed, which correspond to two different configuration trace spaces. For both, we show existence of solutions of the coupled problems, using the direct method of the calculus of variations. From the first-order optimality conditions, we also derive the corresponding conditions for the dual variables. These are then interpreted in mechanical terms.

  18. A boundary value approach for solving three-dimensional elliptic and hyperbolic partial differential equations.

    PubMed

    Biala, T A; Jator, S N

    2015-01-01

    In this article, the boundary value method is applied to solve three-dimensional elliptic and hyperbolic partial differential equations. The partial derivatives with respect to two of the spatial variables (y, z) are discretized using finite difference approximations to obtain a large system of ordinary differential equations (ODEs) in the third spatial variable (x). Using interpolation and collocation techniques, a continuous scheme is developed and used to obtain discrete methods, which are applied via the block unification approach to obtain approximations to the resulting large system of ODEs. Several test problems are investigated to elucidate the solution process.
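
    The semi-discretization idea is easy to demonstrate on a two-dimensional analogue. The sketch below finite-differences the y-derivative of Laplace's equation u_xx + u_yy = 0 to obtain an ODE system in x, then hands the resulting boundary value problem to scipy's general-purpose solve_bvp; the paper's block unification scheme is replaced by that stock solver purely for illustration, and the boundary data are invented.

```python
# Sketch: method-of-lines reduction of u_xx + u_yy = 0 to a first-order
# ODE system in x, solved as a boundary value problem with scipy.
import numpy as np
from scipy.integrate import solve_bvp

ny = 20
y = np.linspace(0.0, 1.0, ny + 2)[1:-1]               # interior y-nodes
h = y[1] - y[0]
D2 = (np.diag(np.ones(ny - 1), 1) - 2.0 * np.eye(ny)
      + np.diag(np.ones(ny - 1), -1)) / h**2          # second-difference matrix

def rhs(x, U):
    u, up = U[:ny], U[ny:]
    return np.vstack([up, -D2 @ u])                   # u'' = -u_yy

def bc(Ua, Ub):
    # Illustrative Dirichlet data: u(0, y) = sin(pi y), u(1, y) = 0.
    return np.concatenate([Ua[:ny] - np.sin(np.pi * y), Ub[:ny]])

x = np.linspace(0.0, 1.0, 30)
sol = solve_bvp(rhs, bc, x, np.zeros((2 * ny, x.size)))
print(sol.status, sol.y.shape)                        # status 0 means converged
```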

  19. Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.

    2015-08-01

    In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.

  20. The development of a virtual reality training programme for ophthalmology: repeatability and reproducibility (part of the International Forum for Ophthalmic Simulation Studies).

    PubMed

    Saleh, G M; Theodoraki, K; Gillan, S; Sullivan, P; O'Sullivan, F; Hussain, B; Bunce, C; Athanasiadis, I

    2013-11-01

    To evaluate the variability of performance among novice ophthalmic trainees in a range of repeated tasks using the Eyesi virtual reality (VR) simulator. Eighteen subjects undertook three attempts at five cataract-specific and generic three-dimensional tasks: continuous curvilinear capsulorhexis, cracking and chopping, cataract navigation, bimanual cataract training, and anti-tremor. Scores for each attempt were out of a maximum of 100 points. A non-parametric test was used to analyse the data, where a P-value of <0.05 was considered statistically significant. Highly significant differences were found between the scores achieved in the first attempt and those in the second (P<0.0001) and third (P<0.0001), but not between the second and third attempts (P=0.65). There was no significant variability in the overall score between the users (P=0.1104) or in the difference between their highest and lowest scores (P=0.3878). Highly significant differences between tasks were shown both in the overall score (P=0.0001) and in the difference between highest and lowest score (P=0.003). This study, which is the first to quantify reproducibility of performance in entry-level trainees using a VR tool, demonstrated significant intra-novice variability. The cohort of subjects performed equally overall in the range of tasks (no inter-novice variability), but each showed that performance varies significantly with the complexity of the task when using this high-fidelity instrument.

  1. Low-dimensional manifold of actin polymerization dynamics

    NASA Astrophysics Data System (ADS)

    Floyd, Carlos; Jarzynski, Christopher; Papoian, Garegin

    2017-12-01

    Actin filaments are critical components of the eukaryotic cytoskeleton, playing important roles in a number of cellular functions, such as cell migration, organelle transport, and mechanosensation. They are helical polymers with a well-defined polarity, composed of globular subunits that bind nucleotides in one of three hydrolysis states (ATP, ADP-Pi, or ADP). Mean-field models of the dynamics of actin polymerization have succeeded in, among other things, determining the nucleotide profile of an average filament and resolving the mechanisms of accessory proteins. However, these models require numerical solution of a high-dimensional system of nonlinear ordinary differential equations. By truncating a set of recursion equations, the Brooks-Carlsson (BC) model reduces dimensionality to 11, but it still remains nonlinear and does not admit an analytical solution, hence, significantly hindering understanding of its resulting dynamics. In this work, by taking advantage of the fast timescales of the hydrolysis states of the filament tips, we propose two model reduction schemes: the quasi steady-state approximation model is five-dimensional and nonlinear, whereas the constant tip (CT) model is five-dimensional and linear, resulting from the approximation that the tip states are not dynamic variables. We provide an exact solution of the CT model and use it to shed light on the dynamical behaviors of the full BC model, highlighting the relative ordering of the timescales of various collective processes, and explaining some unusual dependence of the steady-state behavior on initial conditions.
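
    The practical payoff of the linear CT model is that, like any system z' = Az, it has the closed-form solution z(t) = exp(At) z(0). The sketch below evaluates such a solution with a matrix exponential; the 5x5 matrix is a random stable stand-in, not the actual Brooks-Carlsson rate matrix.

```python
# Sketch: exact evolution of a linear five-dimensional model via expm.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) - 6.0 * np.eye(5)   # shifted to be stable
z0 = rng.random(5)                                  # initial state

for t in (0.0, 0.5, 1.0):
    print(t, expm(A * t) @ z0)                      # exact state at time t
```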

  2. 3-D flow and scour near a submerged wing dike: ADCP measurements on the Missouri River

    USGS Publications Warehouse

    Jamieson, E.C.; Rennie, C.D.; Jacobson, R.B.; Townsend, R.D.

    2011-01-01

    Detailed mapping of bathymetry and three-dimensional water velocities using a boat-mounted single-beam sonar and acoustic Doppler current profiler (ADCP) was carried out in the vicinity of two submerged wing dikes located in the Lower Missouri River near Columbia, Missouri. During high spring flows the wing dikes become submerged, creating a unique combination of vertical flow separation and overtopping (plunging) flow conditions, causing large-scale three-dimensional turbulent flow structures to form. On three different days and for a range of discharges, sampling transects at 5 and 20 m spacing were completed, covering the area adjacent to and upstream and downstream from two different wing dikes. The objectives of this research are to evaluate whether an ADCP can identify and measure large-scale flow features such as recirculating flow and vortex shedding that develop in the vicinity of a submerged wing dike; and whether or not moving-boat (single-transect) data are sufficient for resolving complex three-dimensional flow fields. Results indicate that spatial averaging from multiple nearby single transects may be more representative of an inherently complex (temporally and spatially variable) three-dimensional flow field than repeated single transects. Results also indicate a correspondence between the location of calculated vortex cores (resolved from the interpolated three-dimensional flow field) and the nearby scour holes, providing new insight into the connections between vertically oriented coherent structures and local scour, with the unique perspective of flow and morphology in a large river.

  3. Convective dynamics - Panel report

    NASA Technical Reports Server (NTRS)

    Carbone, Richard; Foote, G. Brant; Moncrieff, Mitch; Gal-Chen, Tzvi; Cotton, William; Heymsfield, Gerald

    1990-01-01

    Aspects of highly organized forms of deep convection at midlatitudes are reviewed. Past emphasis in field work and cloud modeling has been directed toward severe weather, as evidenced by research on tornadoes, hail, and strong surface winds. A number of specific issues concerning future thrusts, tactics, and techniques in convective dynamics are presented. These subjects include: convective modes and parameterization, global structure and scale interaction, convective energetics, transport studies, anvils and scale interaction, and scale selection. Also discussed are analysis workshops, four-dimensional data assimilation, matching models with observations, network Doppler analyses, mesoscale variability, and high-resolution/high-performance Doppler. It is also noted that classical surface measurements and soundings, flight-level research aircraft data, passive satellite data, and traditional photogrammetric studies are examples of datasets that require assimilation and integration.

  4. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.
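
    Collocation grids of this kind are typically built with the Golub-Welsch construction: given the three-term recurrence coefficients of an orthogonal family, the quadrature nodes are the eigenvalues of the associated Jacobi matrix. The sketch below applies it to the Hermite family, whose recurrence coefficients are known in closed form; for Maxwell polynomials the coefficients would first have to be generated numerically (e.g., by a Stieltjes procedure), which is omitted here.

```python
# Sketch: Golub-Welsch nodes/weights from three-term recurrence coefficients,
# p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x) (monic form).
import numpy as np

def golub_welsch(alpha, beta, mu0):
    off = np.sqrt(beta[1:])
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)   # Jacobi matrix
    nodes, V = np.linalg.eigh(J)
    weights = mu0 * V[0, :] ** 2
    return nodes, weights

n = 8
alpha = np.zeros(n)                 # Hermite (weight e^{-x^2}): alpha_k = 0
beta = np.arange(n) / 2.0           # beta_k = k/2 (beta_0 is unused)
nodes, weights = golub_welsch(alpha, beta, mu0=np.sqrt(np.pi))
print(nodes)                        # the Gauss-Hermite collocation grid
```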

  5. Historical background and design evolution of the transonic aircraft technology supercritical wing

    NASA Technical Reports Server (NTRS)

    Ayers, T. G.; Hallissy, J. B.

    1981-01-01

    Two-dimensional wind tunnel test results obtained for supercritical airfoils indicated that substantial improvements in aircraft performance at high subsonic speeds could be achieved by shaping the airfoil to improve the supercritical flow above the upper surface. Significant increases in the drag divergence Mach number, the maximum lift coefficient for buffet onset, and the Mach number for buffet onset at a given lift coefficient were demonstrated for the supercritical airfoil, as compared with a NACA 6-series airfoil of comparable thickness. These trends were corroborated by results from three-dimensional wind tunnel and flight tests. Because the indicated extensions of the buffet boundaries could provide significant improvements in the maneuverability of a fighter airplane, an exploratory wind tunnel investigation was initiated which demonstrated that significant aerodynamic improvements could be achieved from the direct substitution of a supercritical airfoil on a variable wing sweep multimission airplane model.

  6. Sound generated by instability waves of supersonic flows. I Two-dimensional mixing layers. II - Axisymmetric jets

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Burton, D. E.

    1984-01-01

    An investigation is conducted of the phenomenon of sound generation by spatially growing instability waves in high-speed flows. It is pointed out that this process of noise generation is most effective when the flow is supersonic relative to the ambient speed of sound. The inner and outer asymptotic expansions corresponding to an excited instability wave in a two-dimensional mixing layer and its associated acoustic fields are constructed in terms of the inner and outer spatial variables. In matching the solutions, the intermediate matching principle of Van Dyke and Cole is followed. The validity of the theory is tested by applying it to an axisymmetric supersonic jet and comparing the calculated results with experimental measurements. Very favorable agreements are found both in the calculated instability-wave amplitude distribution (the inner solution) and the near pressure field level contours (the outer solution) in each case.

  7. 1-D Photochemical Modeling of the Martian Atmosphere: Seasonal Variations

    NASA Astrophysics Data System (ADS)

    Boxe, C.; Emmanuel, S.; Hafsa, U.; Griffith, E.; Moore, J.; Tam, J.; Khan, I.; Cai, Z.; Bocolod, B.; Zhao, J.; Ahsan, S.; Tang, N.; Bartholomew, J.; Rafi, R.; Caltenco, K.; Smith, K.; Rivas, M.; Ditta, H.; Alawlaqi, H.; Rowley, N.; Khatim, F.; Ketema, N.; Strothers, J.; Diallo, I.; Owens, C.; Radosavljevic, J.; Austin, S. A.; Johnson, L. P.; Zavala-Gutierrez, R.; Breary, N.; Saint-Hilaire, D.; Skeete, D.; Stock, J.; Blue, S.; Gurung, D.; Salako, O.

    2016-12-01

    High school and undergraduate students, representative of academic institutions throughout the USA's Tri-State Area (New York, New Jersey, Connecticut), utilize Caltech/JPL's one-dimensional atmospheric photochemical models. These sophisticated models were built over the course of the last four decades and describe all planetary bodies in our Solar System as well as selected extrasolar planets. Specifically, students employed the Martian one-dimensional photochemical model to assess the seasonal variability of molecules in its atmosphere. Students learned the overall model structure, ran a baseline simulation, and varied parameters (e.g., obliquity, orbital eccentricity) that affect the incoming solar radiation on Mars and the temperature and pressure changes induced by seasonal variations. Students also attain a 'real-world' experience that exemplifies the level of coding competency and innovativeness needed for building an environment that can simulate observations and produce forecasts. Such skills permeate STEM-related occupations that model systems and/or predict how those systems may behave.

  8. Two-dimensional enzyme diffusion in laterally confined DNA monolayers.

    PubMed

    Castronovo, Matteo; Lucesoli, Agnese; Parisse, Pietro; Kurnikova, Anastasia; Malhotra, Aseem; Grassi, Mario; Grassi, Gabriele; Scaggiante, Bruna; Casalis, Loredana; Scoles, Giacinto

    2011-01-01

    Addressing the effects of confinement and crowding on biomolecular function may provide insight into molecular mechanisms within living organisms, and may promote the development of novel biotechnology tools. Here, using molecular manipulation methods, we investigate restriction enzyme reactions with double-stranded (ds)DNA oligomers confined in relatively large (and flat) brushy matrices of monolayer patches of controlled, variable density. We show that enzymes from the contacting solution cannot access the dsDNAs from the top-matrix interface, and instead enter at the matrix sides to diffuse two-dimensionally in the gap between top- and bottom-matrix interfaces. This is achieved by limiting lateral access with a barrier made of high-density molecules that arrest enzyme diffusion. We put forward, as a possible explanation, a simple and general model that relates these data to the steric hindrance in the matrix, and we briefly discuss the implications and applications of this strikingly new phenomenon.

  9. Prediction of clinical depression scores and detection of changes in whole-brain using resting-state functional MRI data with partial least squares regression

    PubMed Central

    Shimizu, Yu; Yoshimoto, Junichiro; Takamura, Masahiro; Okada, Go; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    In diagnostic applications of statistical machine learning methods to brain imaging data, common problems include high dimensionality and collinearity of the data, which often cause overfitting and instability. To overcome these problems, we applied partial least squares (PLS) regression to resting-state functional magnetic resonance imaging (rs-fMRI) data, creating a low-dimensional representation that relates symptoms to brain activity and that predicts clinical measures. Our experimental results, based upon data from clinically depressed patients and healthy controls, demonstrated that PLS and its kernel variants provided significantly better prediction of clinical measures than ordinary linear regression. Subsequent classification using predicted clinical scores distinguished depressed patients from healthy controls with 80% accuracy. Moreover, loading vectors for latent variables enabled us to identify brain regions relevant to depression, including the default mode network, the right superior frontal gyrus, and the superior motor area. PMID:28700672
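
    The core of the approach, regressing a clinical score on many collinear features through a handful of latent components, is readily sketched with scikit-learn. The data below are random stand-ins for rs-fMRI features, and the number of components is an assumed tuning parameter.

```python
# Sketch: PLS regression on high-dimensional, collinear synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((90, 2000))            # 90 subjects, 2000 features
w = rng.standard_normal(2000)
y = X @ w / 50.0 + rng.standard_normal(90)     # synthetic clinical score

pls = PLSRegression(n_components=5)            # low-dimensional latent space
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))
```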

  10. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1992-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding the numerical diffusion that results from mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to the geometry and variable interpolations used by previous Lagrangian methods. Unlike previously proposed Lagrangian methods, which are valid only for supersonic flows, the present method is general and capable of treating subsonic as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.

  11. Observing temporal patterns of vertical flux through streambed sediments using time-series analysis of temperature records

    NASA Astrophysics Data System (ADS)

    Lautz, Laura K.

    2012-09-01

    Rates of water exchange between surface water and groundwater (SW-GW) can be highly variable over time due to temporal changes in streambed hydraulic conductivity, storm events, and oscillation of stage due to natural and regulated river flow. There are few effective field methods available to make continuous measurements of SW-GW exchange rates with the temporal resolution required in many field applications. Here, controlled laboratory experiments were used to explore the accuracy of analytical solutions to the one-dimensional heat transport model for capturing temporal variability of flux through porous media from propagation of a periodic temperature signal to depth. Column experiments were used to generate one-dimensional flow of water and heat through saturated sand with a quasi-sinusoidal temperature oscillation at the upstream boundary. Measured flux rates through the column were compared to modeled flux rates derived using the computer model VFLUX and the amplitude ratio between filtered temperature records from two depths in the column. Imposed temporal changes in water flux through the column were designed to replicate observed patterns of flux in the field, derived using the same methodology. Field observations of temporal changes in flux were made over multiple days during a large-scale storm event and diurnally during seasonal baseflow recession. Temporal changes in flux that occur gradually over days, sub-daily, and instantaneously in time can be accurately measured using the one-dimensional heat transport model, although those temporal changes may be slightly smoothed over time. Filtering methods isolate the time-variable amplitude and phase of the periodic temperature signal, eliminating artificial temporal flux patterns otherwise imposed by perturbations of the temperature signal that result from typical weather patterns during field investigations. Although previous studies have indicated that sub-cycle information from the heat transport model is not reliable, this laboratory experiment shows that the sub-cycle information is real and sub-cycle changes in flux can be observed using heat transport modeling. One-dimensional heat transport modeling provides an easy-to-implement, cost-effective, reliable field tool for making continuous observations of SW-GW exchange through time, which may be particularly useful for monitoring exchange rates during storms and other conditions that create temporal change in hydraulic gradient across the streambed interface or change in streambed hydraulic conductivity.
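
    The observable at the heart of the method is the ratio of diurnal temperature amplitudes at two depths. The sketch below extracts that ratio from synthetic hourly records via the discrete Fourier transform; converting the ratio to a flux requires the analytical heat-transport solution (as implemented in VFLUX), which is omitted here, and the signal parameters are invented.

```python
# Sketch: diurnal amplitude ratio from two synthetic temperature records.
import numpy as np

def diurnal_amplitude(T, dt_hours=1.0):
    """Amplitude of the 1 cycle/day component via the DFT."""
    freqs = np.fft.rfftfreq(len(T), d=dt_hours / 24.0)   # cycles per day
    spec = np.fft.rfft(T - T.mean())
    k = np.argmin(np.abs(freqs - 1.0))                   # bin nearest 1 cpd
    return 2.0 * np.abs(spec[k]) / len(T)

t = np.arange(0.0, 10 * 24.0)                               # 10 days, hourly
T_shallow = 15.0 + 3.0 * np.sin(2 * np.pi * t / 24.0)
T_deep = 15.0 + 1.2 * np.sin(2 * np.pi * (t - 3.0) / 24.0)  # damped and lagged
print("amplitude ratio:",
      diurnal_amplitude(T_deep) / diurnal_amplitude(T_shallow))
```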

  12. Inter-individual Differences in Heart Rate Variability Are Associated with Inter-individual Differences in Empathy and Alexithymia.

    PubMed

    Lischke, Alexander; Pahnke, Rike; Mau-Moeller, Anett; Behrens, Martin; Grabe, Hans J; Freyberger, Harald J; Hamm, Alfons O; Weippert, Matthias

    2018-01-01

    In the present study, we investigated whether inter-individual differences in vagally mediated heart rate variability (vmHRV) would be associated with inter-individual differences in empathy and alexithymia. To this end, we determined resting state HF-HRV in 90 individuals who also completed questionnaires assessing inter-individual differences in empathy and alexithymia. Our categorical and dimensional analyses revealed that inter-individual differences in HF-HRV were differentially associated with inter-individual differences in empathy and alexithymia. We found that individuals with high HF-HRV reported more empathy and less alexithymia than individuals with low HF-HRV. Moreover, an increase in HF-HRV was associated with an increase in empathy and a decrease in alexithymia across all participants. Taken together, these findings indicate that individuals with high HF-HRV are more empathetic and less alexithymic than individuals with low HF-HRV. These differences in empathy and alexithymia may explain why individuals with high HF-HRV are more successful in sharing and understanding the mental and emotional states of others than individuals with low HF-HRV.

  13. Close-range laser scanning in forests: towards physically based semantics across scales.

    PubMed

    Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E

    2018-04-06

    Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data whose information just waits to be harvested. However, the transformation of such data to information is not as straightforward as for airborne and space-borne approaches, where typically empirical models are built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models to shape the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory to potentially re-design structural concepts and variables so that they profit most of the rich data that close-range laser scanning provides.

  14. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
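
    One standard technique for the reduction step mentioned above is balanced truncation, which keeps the states that are both easy to reach and easy to observe. The sketch below applies the square-root form of the algorithm to a small random stable system rather than a CFD-derived one; the dimensions and seed are arbitrary.

```python
# Sketch: square-root balanced truncation of a random stable LTI system.
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

rng = np.random.default_rng(2)
n, r = 12, 4                                        # full and reduced order
A = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # shifted to be stable
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)         # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)       # observability Gramian

Lc, Lo = cholesky(Wc, lower=True), cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                           # s holds Hankel singular values
T = Lc @ Vt.T @ np.diag(s ** -0.5)                  # balancing transformation
Ti = np.diag(s ** -0.5) @ U.T @ Lo.T                # its inverse

Ar = (Ti @ A @ T)[:r, :r]                           # reduced-order model
Br, Cr = (Ti @ B)[:r], (C @ T)[:, :r]
print("kept Hankel singular values:", s[:r])
```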

  15. Spacecraft Angular Rates Estimation with Gyrowheel Based on Extended High Gain Observer.

    PubMed

    Liu, Xiaokun; Yao, Yu; Ma, Kemao; Zhao, Hui; He, Fenghua

    2016-04-14

    A gyrowheel (GW) is a kind of electronic electric-mechanical servo system, which can be applied to a spacecraft attitude control system (ACS) as both an actuator and a sensor simultaneously. In order to solve the problem of two-dimensional spacecraft angular rate sensing as a GW outputting three-dimensional control torque, this paper proposed a method of an extended high gain observer (EHGO) with the derived GW mathematical model to implement the spacecraft angular rate estimation when the GW rotor is working at large angles. For this purpose, the GW dynamic equation is firstly derived with the second kind Lagrange method, and the relationship between the measurable and unmeasurable variables is built. Then, the EHGO is designed to estimate and calculate spacecraft angular rates with the GW, and the stability of the designed EHGO is proven by the Lyapunov function. Moreover, considering the engineering application, the effect of measurement noise in the tilt angle sensors on the estimation accuracy of the EHGO is analyzed. Finally, the numerical simulation is performed to illustrate the validity of the method proposed in this paper.

  16. Spacecraft Angular Rates Estimation with Gyrowheel Based on Extended High Gain Observer

    PubMed Central

    Liu, Xiaokun; Yao, Yu; Ma, Kemao; Zhao, Hui; He, Fenghua

    2016-01-01

    A gyrowheel (GW) is a kind of electronic electric-mechanical servo system, which can be applied to a spacecraft attitude control system (ACS) as both an actuator and a sensor simultaneously. In order to solve the problem of two-dimensional spacecraft angular rate sensing as a GW outputting three-dimensional control torque, this paper proposed a method of an extended high gain observer (EHGO) with the derived GW mathematical model to implement the spacecraft angular rate estimation when the GW rotor is working at large angles. For this purpose, the GW dynamic equation is firstly derived with the second kind Lagrange method, and the relationship between the measurable and unmeasurable variables is built. Then, the EHGO is designed to estimate and calculate spacecraft angular rates with the GW, and the stability of the designed EHGO is proven by the Lyapunov function. Moreover, considering the engineering application, the effect of measurement noise in the tilt angle sensors on the estimation accuracy of the EHGO is analyzed. Finally, the numerical simulation is performed to illustrate the validity of the method proposed in this paper. PMID:27089347

  17. Local connectome phenotypes predict social, health, and cognitive factors

    PubMed Central

    Powell, Michael A.; Garcia, Javier O.; Yeh, Fang-Cheng; Vettel, Jean M.

    2018-01-01

    The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions. PMID:29911679

  18. Local connectome phenotypes predict social, health, and cognitive factors.

    PubMed

    Powell, Michael A; Garcia, Javier O; Yeh, Fang-Cheng; Vettel, Jean M; Verstynen, Timothy

    2018-01-01

    The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions.

  19. Exploring multicollinearity using a random matrix theory approach.

    PubMed

    Feher, Kristen; Whelan, James; Müller, Samuel

    2012-01-01

    Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but instead characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure for clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply reflect the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
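
    The central simulation is straightforward to reproduce in miniature: embed a one-dimensional signal in p dimensions, add noise, and inspect the correlation-matrix eigenspectrum, where a single spiked eigenvalue should separate from the noise bulk. Sample sizes and the noise level below are illustrative.

```python
# Sketch: eigenspectrum of the correlation matrix of a 1-D signal
# embedded in p dimensions with additive noise.
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma = 200, 50, 0.5
signal = rng.standard_normal(n)                    # shared one-dimensional response
X = np.outer(signal, np.ones(p)) + sigma * rng.standard_normal((n, p))

eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
print("spiked eigenvalue:", eig[0])
print("largest bulk eigenvalue:", eig[1])
```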

  20. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  1. Quantum-assisted Helmholtz machines: A quantum–classical deep learning framework for industrial datasets in near-term devices

    NASA Astrophysics Data System (ADS)

    Benedetti, Marcello; Realpe-Gómez, John; Perdomo-Ortiz, Alejandro

    2018-07-01

    Machine learning has been presented as one of the key applications for near-term quantum technologies, given its high commercial value and wide range of applicability. In this work, we introduce the quantum-assisted Helmholtz machine: a hybrid quantum–classical framework with the potential of tackling high-dimensional real-world machine learning datasets on continuous variables. Instead of using quantum computers only to assist deep learning, as previous approaches have suggested, we use deep learning to extract a low-dimensional binary representation of data, suitable for processing on relatively small quantum computers. Then, the quantum hardware and deep learning architecture work together to train an unsupervised generative model. We demonstrate this concept using 1644 quantum bits of a D-Wave 2000Q quantum device to model a sub-sampled version of the MNIST handwritten digit dataset with 16 × 16 continuous valued pixels. Although we illustrate this concept on a quantum annealer, adaptations to other quantum platforms, such as ion-trap technologies or superconducting gate-model architectures, could be explored within this flexible framework.

  2. Using maximum topology matching to explore differences in species distribution models

    USGS Publications Warehouse

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow for manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topological matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  3. SOCR Motion Charts: An Efficient, Open-Source, Interactive and Dynamic Applet for Visualizing Longitudinal Multivariate Data

    PubMed Central

    Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.

    2011-01-01

    The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality reduction, remains a nearly insurmountable challenge. The Statistics Online Computational Resource (www.SOCR.ucla.edu) provides portable online aids for probability and statistics education, technology-based instruction and statistical computing. We have developed a new Java-based infrastructure, SOCR Motion Charts, for discovery-based exploratory analysis of multivariate data. This interactive data visualization tool enables the visualization of high-dimensional longitudinal data. SOCR Motion Charts allows mapping of ordinal, nominal and quantitative variables onto time, 2D axes, size, colors, glyphs and appearance characteristics, which facilitates the interactive display of multidimensional data. We validated this new visualization paradigm using several publicly available multivariate datasets including Ice-Thickness, Housing Prices, Consumer Price Index, and California Ozone Data. SOCR Motion Charts is designed using object-oriented programming, implemented as a Java Web-applet and is available to the entire community on the web at www.socr.ucla.edu/SOCR_MotionCharts. It can be used as an instructional tool for rendering and interrogating high-dimensional data in the classroom, as well as a research tool for exploratory data analysis. PMID:21479108

  4. Dimensional stability of flakeboards as affected by board specific gravity and flake alignment

    Treesearch

    Robert L. Geimer

    1982-01-01

    The objective was to determine the relationship between the variables specific gravity (SG) and flake alignment and the dimensional stability properties of flakeboard. Boards manufactured without a density gradient were exposed to various levels of relative humidity and a vacuum-pressure soak (VPS) treatment. Changes in moisture content (MC), thickness swelling, and...

  5. ARM Best Estimate (ARMBE) 2-Dimensional Gridded Surface Data (ARMBE2DGRID) Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Q; Xie, S

    This report describes the Atmospheric Radiation Measurement (ARM) Best Estimate (ARMBE) 2-dimensional (2D) gridded surface data (ARMBE2DGRID) value-added product. Spatial variability is critically important to many scientific studies, especially those that involve processes of great spatial variations at high temporal frequency (e.g., precipitation, clouds, radiation, etc.). High-density ARM sites deployed at the Southern Great Plains (SGP) allow us to observe the spatial patterns of variables of scientific interest. The upcoming megasite at SGP, with its enhanced spatial density, will facilitate studies at even finer scales. Currently, however, data are reported only at individual site locations, at different time resolutions for different datastreams. It is difficult for users to locate all the data they need, and extra effort is required to synchronize the data. To address these problems, the ARMBE2DGRID value-added product merges key surface measurements at the ARM SGP sites and interpolates the data to a regular 2D grid to facilitate the data application.

  6. Unsteady density-current equations for highly curved terrain

    NASA Technical Reports Server (NTRS)

    Sivakumaran, N. S.; Dressler, R. F.

    1989-01-01

    New nonlinear partial differential equations containing terrain curvature and its rate of change are derived that describe the flow of an atmospheric density current. Unlike the classical hydraulic-type equations for density currents, the new equations are valid for two-dimensional, gradually varied flow over highly curved terrain, hence suitable for computing unsteady (or steady) flows over arbitrary mountain/valley profiles. The model assumes the atmosphere above the density current exerts a known arbitrary variable pressure upon the unknown interface. Later this is specialized to the varying hydrostatic pressure of the atmosphere above. The new equations yield the variable velocity distribution, the interface position, and the pressure distribution that contains a centrifugal component, often significantly larger than its hydrostatic component. These partial differential equations are hyperbolic, and the characteristic equations and characteristic directions are derived. Using these to form a characteristic mesh, a hypothetical unsteady curved-flow problem is calculated, not based upon observed data, merely as an example to illustrate the simplicity of their application to unsteady flows over mountains.

  7. High-speed free-space optical continuous-variable quantum key distribution enabled by three-dimensional multiplexing.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2017-04-03

    A high-speed four-state continuous-variable quantum key distribution (CV-QKD) system, enabled by wavelength-division multiplexing, polarization multiplexing, and orbital angular momentum (OAM) multiplexing, is studied in the presence of atmospheric turbulence. The atmospheric turbulence channel is emulated by two spatial light modulators (SLMs) on which two randomly generated azimuthal phase patterns yielding Andrews' spectrum are recorded. The phase noise is mitigated by a phase noise cancellation (PNC) stage, and the channel transmittance can be monitored directly from the D.C. level in the PNC stage. After system calibration, a total SKR of >1.68 Gbit/s can be reached in the ideal system, featuring a lossless channel and no excess noise. In our experiment, based on commercial photodetectors, minimum transmittances of 0.21 and 0.29 are required for OAM states of 2 (or -2) and 6 (or -6), respectively, to guarantee secure transmission, while a total SKR of 120 Mbit/s can be obtained at the mean transmittances.

  8. Three-dimensional estimates of tree canopies: Scaling from high-resolution UAV data to satellite observations

    NASA Astrophysics Data System (ADS)

    Sankey, T.; Donald, J.; McVay, J.

    2015-12-01

    High resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in three dimensions. Both sensors represent the newest available technology, with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated with the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.

  9. TH-A-9A-04: Incorporating Liver Functionality in Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, V; Epelman, M; Feng, M

    2014-06-15

    Purpose: Liver SBRT patients have both variable pretreatment liver function (e.g., due to degree of cirrhosis and/or prior treatments) and variable sensitivity to radiation, leading to high variability in potential liver toxicity at similar doses. This work aims to explicitly incorporate liver perfusion into treatment planning to redistribute dose and preserve well-functioning areas without compromising target coverage. Methods: Voxel-based liver perfusion, a measure of functionality, was computed from dynamic contrast-enhanced MRI. Two optimization models with different cost functions, subject to the same dose constraints (e.g., minimum target EUD and maximum critical-structure EUDs), were compared. The cost functions minimized were the liver EUD (standard model) and a functionality-weighted liver EUD (functional model). The resulting treatment plans delivering the same target EUD were compared with respect to their DVHs, their dose-wash differences, the average dose delivered to voxels of a particular perfusion level, and the change in the number of high-/low-functioning voxels receiving a particular dose. Two-dimensional synthetic and three-dimensional clinical examples were studied. Results: The DVHs of all structures were comparable between the two models. In contrast, in plans obtained with the functional model, the average dose delivered to high-/low-functioning voxels was lower/higher than in plans obtained with the standard counterpart. The number of high-/low-functioning voxels receiving high/low dose was lower in the plans that considered perfusion in the cost function than in the plans that did not. Redistribution of dose can be observed in the dose-wash differences. Conclusion: Liver perfusion can be used during treatment planning to potentially minimize the risk of toxicity during liver SBRT, resulting in better global liver function. The functional model redistributes the dose of the standard model from higher- to lower-functioning voxels, while achieving the same target EUD and satisfying dose limits to critical structures. This project is funded by MCubed and grant R01-CA132834.
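
    The two cost functions can be illustrated with a short sketch. The generalized EUD is (mean of d^a)^(1/a) over liver voxels; the functional model weights each voxel by its perfusion. The normalized-weight form below is an assumption, since the abstract does not give the exact weighting.

    ```python
    import numpy as np

    def geud(dose, a):
        """Generalized EUD over liver voxels: (mean(d^a))^(1/a)."""
        return np.mean(dose ** a) ** (1.0 / a)

    def functional_geud(dose, perfusion, a):
        """Functionality-weighted EUD: each voxel weighted by its perfusion
        (normalized weights are an assumption, not the authors' exact form)."""
        w = perfusion / perfusion.sum()
        return np.sum(w * dose ** a) ** (1.0 / a)

    rng = np.random.default_rng(2)
    dose = rng.uniform(0, 30, 10_000)        # Gy, toy voxel doses
    perf = rng.uniform(0.1, 1.0, 10_000)     # toy perfusion map
    print(geud(dose, a=1.5), functional_geud(dose, perf, a=1.5))
    ```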

  10. Estimation of surface heat and moisture fluxes over a prairie grassland. II - Two-dimensional time filtering and site variability

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Smith, Eric A.

    1992-01-01

    The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areally averaged fluxes, and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high-frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on longer time scales. A filtering procedure is desirable before the measurements are utilized as input to an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areally integrated fluxes.
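
    The filtering idea can be sketched with an ordinary one-dimensional Fourier filter (the paper uses a 2D cross-time transform across sites): remove fluctuations with periods shorter than 6 hours while keeping the diurnal cycle. The flux series below is synthetic.

    ```python
    import numpy as np

    dt_hours = 0.5
    t = np.arange(0, 24 * 30, dt_hours)                  # 30 days, half-hourly
    flux = (300 * np.maximum(0, np.sin(2 * np.pi * t / 24))   # diurnal signal
            + 20 * np.random.default_rng(4).normal(size=t.size))  # fast noise

    F = np.fft.rfft(flux)
    freq = np.fft.rfftfreq(t.size, d=dt_hours)           # cycles per hour
    F[freq > 1.0 / 6.0] = 0.0                            # cut periods < 6 h
    flux_filtered = np.fft.irfft(F, n=t.size)            # keeps diurnal cycle
    ```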

  11. Two-dimensional CFD modeling of wave rotor flow dynamics

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.; Chima, Rodrick V.

    1994-01-01

    A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. Roe's approximate Riemann solution scheme or the computationally less expensive advection upstream splitting method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passages and the distribution of flow variables in the stationary inlet port region.

  12. Two-dimensional CFD modeling of wave rotor flow dynamics

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.; Chima, Rodrick V.

    1993-01-01

    A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. The Roe approximate Riemann solution scheme or the computationally less expensive Advection Upstream Splitting Method (AUSM) flux-splitting scheme are used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passage and the distribution of flow variables in the stationary inlet port region.

  13. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time-dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction-splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems, in which the second-order derivative with respect to each space variable is treated implicitly while the remaining directions are treated explicitly at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second-order elliptic boundary value problems in each spatial direction. The parallel code is implemented using standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
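
    The following is a minimal sketch of the direction-splitting idea for a simpler model problem, the 2D heat equation with zero Dirichlet boundaries (Peaceman-Rachford ADI): each half step treats one spatial direction implicitly via a tridiagonal solve, analogous to the sequence of one-dimensional elliptic solves described above. It illustrates the scheme's structure, not the authors' Stokes solver.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    n, nu, dt = 64, 1.0, 1e-4
    h = 1.0 / (n + 1)
    r = nu * dt / (2 * h * h)

    # banded storage of the tridiagonal matrix (I - r*D2) for solve_banded
    ab = np.zeros((3, n))
    ab[0, 1:] = -r          # superdiagonal
    ab[1, :] = 1 + 2 * r    # diagonal
    ab[2, :-1] = -r         # subdiagonal

    def explicit_step(u, axis):
        """Apply (I + r*D2) along one axis, zero Dirichlet outside the grid."""
        d2 = -2.0 * u
        d2 += np.roll(u, 1, axis) + np.roll(u, -1, axis)
        if axis == 0:                        # undo the periodic wrap at edges
            d2[0, :] = u[1, :] - 2.0 * u[0, :]
            d2[-1, :] = u[-2, :] - 2.0 * u[-1, :]
        else:
            d2[:, 0] = u[:, 1] - 2.0 * u[:, 0]
            d2[:, -1] = u[:, -2] - 2.0 * u[:, -1]
        return u + r * d2

    u = np.random.default_rng(5).random((n, n))          # initial condition
    for _ in range(100):
        # half step 1: x implicit (tridiagonal solves along axis 0), y explicit
        u = solve_banded((1, 1), ab, explicit_step(u, axis=1))
        # half step 2: y implicit, x explicit (transpose so y becomes axis 0)
        u = solve_banded((1, 1), ab, explicit_step(u, axis=0).T).T
    ```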

  14. An extinction/reignition dynamic method for turbulent combustion

    NASA Astrophysics Data System (ADS)

    Knaus, Robert; Pantano, Carlos

    2011-11-01

    Quasi-randomly distributed regions of high strain in turbulent combustion can cause a nonpremixed or partially premixed flame to develop local regions of extinction called "flame holes". The presence and extent of these holes can increase certain pollutants and reduce the amount of fuel burned. Accurately modeling the dynamics of these interacting regions can improve the accuracy of combustion simulations by effectively incorporating finite-rate chemistry effects. In the proposed method, the flame-hole state is characterized by a progress variable that nominally exists on the stoichiometric surface. The evolution of this field is governed by a partial differential equation embedded in the time-dependent two-manifold of the flame surface. This equation includes advection, propagation, and flame-hole formation (flame-hole healing or collapse is accounted for naturally by propagation). We present a computational algorithm that solves this equation by embedding it in the usual three-dimensional space. A piecewise-parabolic WENO scheme combined with a compression algorithm is used to evolve the flame-hole progress variable. A key aspect of the method is the efficient extension of the surface data to the three-dimensional space. We present results of this method applied to canonical turbulent combusting flows where the flame holes interact, and describe their statistics.

  15. Intracranial cerebrospinal fluid spaces imaging using a pulse-triggered three-dimensional turbo spin echo MR sequence with variable flip-angle distribution.

    PubMed

    Hodel, Jérôme; Silvera, Jonathan; Bekaert, Olivier; Rahmouni, Alain; Bastuji-Garin, Sylvie; Vignaud, Alexandre; Petit, Eric; Durning, Bruno; Decq, Philippe

    2011-02-01

    To assess the three-dimensional turbo spin echo magnetic resonance sequence with variable flip-angle distribution (SPACE: Sampling Perfection with Application-optimised Contrast using different flip-angle Evolution) for imaging intracranial cerebrospinal fluid (CSF) spaces. We prospectively investigated 18 healthy volunteers and 25 patients, 20 with communicating hydrocephalus (CH) and five with non-communicating hydrocephalus (NCH), using the SPACE sequence at 1.5 T. Volume-rendering views of both intracranial and ventricular CSF were obtained for all patients and volunteers. The subarachnoid CSF distribution was qualitatively evaluated on volume-rendering views using a four-point scale. The CSF volumes within total, ventricular and subarachnoid spaces were calculated, as well as the ratio between ventricular and subarachnoid CSF volumes. Three different patterns of subarachnoid CSF distribution were observed. In healthy volunteers we found narrowed CSF spaces within the occipital area. A diffuse narrowing of the subarachnoid CSF spaces was observed in patients with NCH, whereas patients with CH exhibited narrowed CSF spaces within the high midline convexity. The ratios between ventricular and subarachnoid CSF volumes were significantly different among the volunteers, patients with CH and patients with NCH. The assessment of CSF space volume and distribution may help to characterise hydrocephalus.

  16. Multiple Scattering in Clouds: Insights from Three-Dimensional Diffusion/P{sub 1} Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Anthony B.; Marshak, Alexander

    2001-03-15

    In the atmosphere, multiple scattering matters nowhere more than in clouds, and, being products of atmospheric turbulence, clouds are highly variable environments. This challenges three-dimensional (3D) radiative transfer theory in a way that easily swamps any available computational resources. Fortunately, the far simpler diffusion (or P{sub 1}) theory becomes more accurate as the scattering intensifies, and allows for some analytical progress as well as computational efficiency. After surveying current approaches to 3D solar cloud-radiation problems from the diffusion standpoint, a general 3D result in steady-state diffusive transport is derived relating the variability-induced change in domain-average flux (i.e., diffuse transmittance) to the one-point covariance of internal fluctuations in particle density and in radiative flux. These flux variations follow specific spatial patterns described in deliberately hydrodynamical language: radiative channeling. The P{sub 1} theory proves even more powerful when the photon diffusion process unfolds in time as well as space. For slab geometry, characteristic times and lengths that describe normal and transverse transport phenomena are derived. This phenomenology is used to (a) explain persistent features in satellite images of dense stratocumulus as radiative channeling, (b) set limits on current cloud remote-sensing techniques, and (c) propose new ones, both active and passive.

  17. Mean Comparison: Manifest Variable versus Latent Variable

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2006-01-01

    An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…

  18. Symptom variability, affect and physical activity in ambulatory persons with multiple sclerosis: Understanding patterns and time-bound relationships.

    PubMed

    Kasser, Susan L; Goldstein, Amanda; Wood, Phillip K; Sibold, Jeremy

    2017-04-01

    Individuals with multiple sclerosis (MS) experience a clinical course that is highly variable, with daily fluctuations in symptoms significantly affecting functional ability and quality of life. Yet how MS symptoms co-vary and associate with physical and psychological health remains unclear. The purpose of the study was to explore variability patterns and time-bound relationships across symptoms, affect, and physical activity in individuals with MS. The study employed a multivariate, replicated, single-subject repeated-measures (MRSRM) design and involved four individuals with MS. Mood, fatigue, pain, balance confidence, and losses of balance were measured daily over 28 days by self-report. Physical activity was also measured daily over this same period via accelerometry. Dynamic factor analysis (DFA) was used to determine the dimensionality and lagged relationships across the variables. Person-specific models revealed considerable time-dependent co-variation patterns as well as pattern variation across subjects. The results also offered insight into distinct variability structures at varying levels of disability. Modeling person-level variability may be beneficial for addressing the heterogeneity of experiences in individuals with MS and for understanding temporal and dynamic interrelationships among perceived symptoms, affect, and health outcomes in this group.

  19. Group Variable Selection Via Convex Log-Exp-Sum Penalty with Application to a Breast Cancer Survivor Study

    PubMed Central

    Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace

    2017-01-01

    In many scientific and engineering applications, covariates are naturally grouped. When group structures are available among covariates, one is usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some fail to conduct within-group selection. Others can conduct both group and within-group selection, but their objective functions are non-convex, and such non-convexity may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum (LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within a group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting, where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the quality of life of breast cancer survivors. PMID:25257196
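
    A hedged sketch of a log-exp-sum style group penalty follows. The exact weights and tuning constants of the LES penalty are given in the paper and are not reproduced here; the form below (log of summed exponentials of absolute coefficients within each group) is convex because log-sum-exp is convex and nondecreasing in each argument and |b_j| is convex.

    ```python
    import numpy as np

    def les_penalty(beta, groups, lam=1.0, alpha=1.0):
        """Log-exp-sum style penalty; `groups` is a list of index arrays,
        one per covariate group (structure assumed for illustration)."""
        total = 0.0
        for g in groups:
            z = alpha * np.abs(beta[g])
            m = z.max()                          # stabilized log-sum-exp
            total += m + np.log(np.sum(np.exp(z - m)))
        return lam * total

    beta = np.array([0.0, 2.0, -1.0, 0.0, 0.5])
    print(les_penalty(beta, groups=[np.array([0, 1, 2]), np.array([3, 4])]))
    ```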

  20. Study of shape evaluation for mask and silicon using large field of view

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Mito, Hiroaki; Shinoda, Shinichi; Toyoda, Yasutaka

    2010-09-01

    We have developed a highly integrated method of mask and silicon metrology. The aim of this integration is to evaluate the performance of the silicon corresponding to a hotspot on a mask, and the method can also use mask shapes over a large field of view. It adopts a metrology management system based on DBM (Design Based Metrology), i.e., highly accurate contouring created by the edge detection algorithms used in mask CD-SEM and silicon CD-SEM. As semiconductor manufacturing moves toward ever smaller feature sizes, more aggressive optical proximity correction (OPC) is needed to drive resolution enhancement technology (RET). In other words, there is a trade-off between highly precise RET and mask manufacture, and this has a big impact on the semiconductor market, which centers on the mask business. As an optimal solution to these issues, we provide a DFM solution that extracts two-dimensional data for a more realistic and error-free simulation by accurately reproducing the contour of the actual mask, in addition to the simulation results from the mask data. On the other hand, silicon patterns produced on a mass-production line exhibit both roughness and shape variation. For this reason, quantification of the silicon shape is important for estimating the performance of a pattern; a common approach is to average repeated instances of the same shape in two dimensions and evaluate the resulting contour. In this study, we conducted experiments on an averaging method for patterns (Measurement Based Contouring) as a two-dimensional mask and silicon evaluation technique, observing identical positions on the mask and the silicon. The results proved the detection accuracy and reliability of variability measurements on two-dimensional patterns (mask and silicon), and the approach is applicable to the following fields of mask quality management: discrimination of nuisance defects for fine patterns; determination of the two-dimensional variability of a pattern; and verification of the performance of patterns at various kinds of hotspots. In this report, we introduce the experimental results and the application. We expect that mask measurement and shape control in mask production will make a huge contribution to mask yield enhancement, and that the DFM solution for the mask quality control process will become a much more important technology than ever. From this viewpoint, it is very important to observe the shape at the same location in design, mask, and silicon, and we report on an algorithm for image composition over a large field.

  1. Canonical Measure of Correlation (CMC) and Canonical Measure of Distance (CMD) between sets of data. Part 3. Variable selection in classification.

    PubMed

    Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto

    2010-01-11

    In multivariate regression and classification, variable selection is an important procedure used to select an optimal subset of variables, with the aim of producing more parsimonious and possibly more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and high-dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former comprising the independent variables and the latter the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted to give a ranking of the variables by their class-discrimination ability. Alternatively, the CMC index can be calculated for all possible combinations of variables and the variable subset with the maximal CMC selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as Wilks' Lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable forward selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets, and the results obtained were encouraging.
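
    The one-variable-at-a-time ranking can be sketched as follows. For a single variable x and the one-hot ("unfolded") class matrix, the first canonical correlation reduces to the multiple correlation of x regressed on the class indicators; the code below uses that as a simple stand-in score (the CMC index itself is defined in the paper).

    ```python
    import numpy as np

    def class_correlation(x, y_centered):
        """Multiple correlation of one variable with centered class indicators."""
        x = x - x.mean()
        coef, *_ = np.linalg.lstsq(y_centered, x, rcond=None)
        fitted = y_centered @ coef           # projection onto class space
        return np.sqrt(np.sum(fitted**2) / np.sum(x**2))

    rng = np.random.default_rng(6)
    X = rng.normal(size=(100, 5))
    labels = rng.integers(0, 3, 100)
    X[:, 2] += labels                        # make variable 2 discriminative
    Y = np.eye(3)[labels].astype(float)
    Y -= Y.mean(axis=0)                      # center the indicator columns

    scores = [class_correlation(X[:, j], Y) for j in range(X.shape[1])]
    ranking = np.argsort(scores)[::-1]       # best discriminators first
    ```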

  2. Defining process design space for a hydrophobic interaction chromatography (HIC) purification step: application of quality by design (QbD) principles.

    PubMed

    Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A

    2010-12-15

    The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, multi-dimensional combinations of operational variables were studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular-weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality.

  3. Sparse partial least squares regression for simultaneous dimension reduction and variable selection

    PubMed Central

    Chun, Hyonho; Keleş, Sündüz

    2010-01-01

    Partial least squares regression has been an alternative to ordinary least squares for handling multicollinearity in several areas of scientific research since the 1960s. It has recently gained much attention in the analysis of high dimensional genomic data. We show that known asymptotic consistency of the partial least squares estimator for a univariate response does not hold with the very large p and small n paradigm. We derive a similar result for a multivariate response regression with partial least squares. We then propose a sparse partial least squares formulation which aims simultaneously to achieve good predictive performance and variable selection by producing sparse linear combinations of the original predictors. We provide an efficient implementation of sparse partial least squares regression and compare it with well-known variable selection and dimension reduction approaches via simulation experiments. We illustrate the practical utility of sparse partial least squares regression in a joint analysis of gene expression and genomewide binding data. PMID:20107611
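
    The sparse-direction idea can be sketched in a few lines: compute the first PLS direction w proportional to X'y, then soft-threshold it so that weak loadings are set exactly to zero, producing a sparse linear combination of predictors. The actual SPLS estimator solves a penalized optimization and extracts multiple components; this shows only the flavor of the first step.

    ```python
    import numpy as np

    def sparse_pls_direction(X, y, eta=0.5):
        """First PLS direction with soft thresholding; eta in [0, 1)."""
        w = X.T @ y
        w /= np.linalg.norm(w)
        thresh = eta * np.abs(w).max()
        w_sparse = np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)
        n = np.linalg.norm(w_sparse)
        return w_sparse / n if n > 0 else w_sparse

    rng = np.random.default_rng(7)
    X = rng.normal(size=(50, 200))                        # p >> n setting
    y = X[:, :5] @ np.ones(5) + 0.1 * rng.normal(size=50) # 5 true predictors
    w = sparse_pls_direction(X - X.mean(0), y - y.mean())
    print(np.flatnonzero(w)[:10])                         # selected predictors
    ```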

  4. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control bypasses, and mass flow rate

    NASA Technical Reports Server (NTRS)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  5. A novel device to stretch multiple tissue samples with variable patterns: application for mRNA regulation in tissue-engineered constructs.

    PubMed

    Imsirovic, Jasmin; Derricks, Kelsey; Buczek-Thomas, Jo Ann; Rich, Celeste B; Nugent, Matthew A; Suki, Béla

    2013-01-01

    A broad range of cells are subjected to irregular, time-varying mechanical stimuli within the body, particularly in the respiratory and circulatory systems. Mechanical stretch is an important factor in determining cell function; however, the effects of variable stretch remain unexplored. In order to investigate these effects, we designed, built and tested a uniaxial stretching device that can stretch three-dimensional tissue constructs while varying the strain amplitude from cycle to cycle. The device is the first to apply variable stretching signals to cells in tissues or three-dimensional tissue constructs. Following device validation, we applied 20% uniaxial strain to Gelfoam samples seeded with neonatal rat lung fibroblasts with different levels of variability (0%, 25%, 50% and 75%). RT-PCR was then performed to measure the effects of variable stretch on key molecules involved in cell-matrix interactions, including collagen 1α, lysyl oxidase, α-actin, β1 integrin, β3 integrin, syndecan-4, and vascular endothelial growth factor-A. Adding variability to the stretching signal upregulated, downregulated or had no effect on mRNA production depending on the molecule and the amount of variability. In particular, syndecan-4 showed a statistically significant peak at 25% variability, suggesting that an optimal strain variability may exist for production of this molecule. We conclude that cycle-by-cycle variability in strain influences the expression of molecules related to cell-matrix interactions and hence may be used to selectively tune the composition of tissue constructs.
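
    A sketch of a cycle-by-cycle variable stretch signal: mean peak strain of 20%, with each cycle's amplitude drawn uniformly within a fraction v of the mean (v = 0, 0.25, 0.5, 0.75, matching the variability levels above). The uniform draw and the half-sine cycle shape are assumptions; the device's actual waveform may differ.

    ```python
    import numpy as np

    def stretch_signal(n_cycles, mean_strain=0.20, variability=0.25,
                       pts_per_cycle=100, seed=0):
        rng = np.random.default_rng(seed)
        lo = mean_strain * (1 - variability)
        hi = mean_strain * (1 + variability)
        amps = rng.uniform(lo, hi, n_cycles)          # one amplitude per cycle
        phase = np.linspace(0, np.pi, pts_per_cycle)  # half-sine stretch cycle
        return np.concatenate([a * np.sin(phase) for a in amps])

    signal = stretch_signal(n_cycles=10, variability=0.5)
    ```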

  6. Transient Response of a PEM Fuel Cell Representing Variable Load for a Moving Vehicle on Urban Roads

    DOT National Transportation Integrated Search

    2001-01-01

    A three-dimensional numerical simulation of the transient response of a Polymer Electrolyte Membrane (PEM) fuel cell subjected to a variable load is developed. The model parameters are typical of an experimental cell with a 10-cm2 reactive area and serpentine...

  7. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out based on conversion of the orbital manoeuvre into a parameter optimization problem by assigning inverse tangential functions to the changes in direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence time. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  8. Remote creation of hybrid entanglement between particle-like and wave-like optical qubits

    NASA Astrophysics Data System (ADS)

    Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien

    2014-07-01

    The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous network, where discrete- and continuous-variable operations and techniques can be efficiently combined.

  9. Vortex variable range hopping in a conventional superconducting film

    NASA Astrophysics Data System (ADS)

    Percher, Ilana M.; Volotsenko, Irina; Frydman, Aviad; Shklovskii, Boris I.; Goldman, Allen M.

    2017-12-01

    The behavior of a disordered amorphous thin film of superconducting indium oxide has been studied as a function of temperature and magnetic field applied perpendicular to its plane. A superconductor-insulator transition has been observed, though the isotherms do not cross at a single point. The curves of resistance versus temperature on the putative superconducting side of this transition, where the resistance decreases with decreasing temperature, obey two-dimensional Mott variable-range hopping of vortices over wide ranges of temperature and resistance. To estimate the parameters of hopping, the film is modeled as a granular system and the hopping of vortices is treated in a manner analogous to hopping of charges. The reason the long-range interaction between vortices over the range of magnetic fields investigated does not lead to a stronger variation of resistance with temperature than that of two-dimensional Mott variable-range hopping remains unresolved.
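
    Two-dimensional Mott variable-range hopping predicts R(T) = R0 exp[(T0/T)^(1/3)], so ln R should be linear in T^(-1/3). A minimal fitting sketch, with synthetic data standing in for measurements:

    ```python
    import numpy as np

    T = np.linspace(0.1, 1.0, 30)                    # K, toy temperature range
    T0, R0 = 50.0, 100.0
    R = R0 * np.exp((T0 / T) ** (1.0 / 3.0))         # ideal 2D Mott VRH data

    x = T ** (-1.0 / 3.0)                            # linearizing variable
    slope, intercept = np.polyfit(x, np.log(R), 1)
    print("T0 estimate:", slope ** 3, "R0 estimate:", np.exp(intercept))
    ```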

  10. Evaluation of Deep Learning Representations of Spatial Storm Data

    NASA Astrophysics Data System (ADS)

    Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.

    2017-12-01

    The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. Further evaluations are being performed to determine how the choice of input variables affects the results.
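
    The PCA-encoding baseline can be sketched as follows: flatten each storm patch, project onto the leading principal components, and feed the encodings to a logistic regression hail classifier. The data below are synthetic placeholders for the NCAR ensemble patches.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(8)
    patches = rng.normal(size=(500, 32, 32))     # 500 storm patches, one field
    labels = rng.integers(0, 2, 500)             # severe hail yes/no

    X = patches.reshape(len(patches), -1)        # flatten to (n_samples, 1024)
    model = make_pipeline(PCA(n_components=20),
                          LogisticRegression(max_iter=1000))
    model.fit(X, labels)
    probs = model.predict_proba(X)[:, 1]         # probability of severe hail
    ```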

  11. Enceladus Plume Structure and Time Variability: Comparison of Cassini Observations

    PubMed Central

    Perry, Mark E.; Hansen, Candice J.; Waite, J. Hunter; Porco, Carolyn C.; Spencer, John R.; Howett, Carly J. A.

    2017-01-01

    Abstract During three low-altitude (99, 66, 66 km) flybys through the Enceladus plume in 2010 and 2011, Cassini's ion neutral mass spectrometer (INMS) made its first high spatial resolution measurements of the plume's gas density and distribution, detecting in situ the individual gas jets within the broad plume. Since those flybys, more detailed Imaging Science Subsystem (ISS) imaging observations of the plume's icy component have been reported, which constrain the locations and orientations of the numerous gas/grain jets. In the present study, we used these ISS imaging results, together with ultraviolet imaging spectrograph stellar and solar occultation measurements and modeling of the three-dimensional structure of the vapor cloud, to constrain the magnitudes, velocities, and time variability of the plume gas sources from the INMS data. Our results confirm a mixture of both low and high Mach gas emission from Enceladus' surface tiger stripes, with gas accelerated as fast as Mach 10 before escaping the surface. The vapor source fluxes and jet intensities/densities vary dramatically and stochastically, up to a factor 10, both spatially along the tiger stripes and over time between flyby observations. This complex spatial variability and dynamics may result from time-variable tidal stress fields interacting with subsurface fissure geometry and tortuosity beyond detectability, including changing gas pathways to the surface, and fluid flow and boiling in response evolving lithostatic stress conditions. The total plume gas source has 30% uncertainty depending on the contributions assumed for adiabatic and nonadiabatic gas expansion/acceleration to the high Mach emission. The overall vapor plume source rate exhibits stochastic time variability up to a factor ∼5 between observations, reflecting that found in the individual gas sources/jets. Key Words: Cassini at Saturn—Geysers—Enceladus—Gas dynamics—Icy satellites. Astrobiology 17, 926–940. PMID:28872900

  12. Design of a variable area diffuser for a 15-inch Mach 6 open-jet tunnel

    NASA Technical Reports Server (NTRS)

    Loney, Norman W.

    1994-01-01

    The Langley 15-inch Mach 6 High Temperature Tunnel was recently converted from a Mach 10 Hypersonic Flow Apparatus. This conversion was effected to improve the capability of testing in Mach 6 air at relatively high reservoir temperatures not previously possible at Langley. Elevated temperatures allow matching of the Mach numbers, Reynolds numbers, and ratio of wall-to-adiabatic-wall temperatures (Tw/Taw) between this and the Langley 20-inch Mach 6 CF4 Tunnel. This ratio is also matched for Langley's 31-inch Mach 10 Tunnel and is an important parameter in the simulation of slender bodies such as the National Aerospace Plane (NASP) configurations currently being studied. Having established the nozzle's operating characteristics, the decision was made to install another test section to provide model injection capability. This test section is an open-jet type, with an injection system capable of moving a model from the retracted position to the nozzle centerline in 0.5 to 2 seconds. Preliminary calibrations with the new test section resulted in tunnel blockage; this blockage was eliminated when the conical center body in the diffuser was replaced. The issue, then, is to provide a new and more efficient variable-area diffuser configuration capable of withstanding tests of larger models without sending the tunnel into an unstart condition. The one-dimensional steady-flow equations, with due regard to friction and heat transfer, were used to estimate the required area ratios (exit area / throat area) of a variable-area diffuser. Correlations between the diffuser exit Mach number and area ratio, relative to the stagnation pressure ratio and diffuser inlet Mach number, were derived. From these correlations, one can set upper and lower operating pressures and temperatures for a given diffuser throat area; they also provide appropriate input conditions for a full three-dimensional computational fluid dynamics (CFD) code for further simulation studies.
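
    Neglecting the friction and heat-transfer corrections used in the report, the one-dimensional gas-dynamics relation behind such area-ratio estimates is the isentropic area-Mach relation; a minimal sketch for air (gamma = 1.4):

    ```python
    # A/A* = (1/M) * [(2/(g+1)) * (1 + (g-1)/2 * M^2)] ^ ((g+1)/(2(g-1)))
    def area_ratio(M, gamma=1.4):
        g = gamma
        return (1.0 / M) * ((2.0 / (g + 1)) * (1 + 0.5 * (g - 1) * M * M)) \
               ** ((g + 1) / (2 * (g - 1)))

    print(area_ratio(6.0))   # area ratio at Mach 6, approximately 53.2
    ```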

  13. Drag Optimization Of Light Trucks Using Computational Fluid Dynamics

    DTIC Science & Technology

    2003-09-01

    In a two-dimensional design case study of the Lockheed C-141B aircraft wing, Cosentino and Holst [Ref. 10] reduced the number of design variables from 120 to 12... (Only fragments of this record survive extraction; the report also includes two-dimensional light truck shape studies, including canopies.)

  14. Upscaling river biomass using dimensional analysis and hydrogeomorphic scaling

    NASA Astrophysics Data System (ADS)

    Barnes, Elizabeth A.; Power, Mary E.; Foufoula-Georgiou, Efi; Hondzo, Miki; Dietrich, William E.

    2007-12-01

    We propose a methodology for upscaling biomass in a river using a combination of dimensional analysis and hydro-geomorphologic scaling laws. We first demonstrate the use of dimensional analysis for determining local scaling relationships between Nostoc biomass and hydrologic and geomorphic variables. We then combine these relationships with hydraulic geometry and streamflow scaling in order to upscale biomass from point to reach-averaged quantities. The methodology is demonstrated through an illustrative example using an 18 year dataset of seasonal monitoring of biomass of a stream cyanobacterium (Nostoc parmeloides) in a northern California river.
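
    The hydrogeomorphic-scaling step can be sketched as a power-law fit: hydraulic geometry expresses local variables as power laws of discharge (for example, depth h = a Q^b), with exponents estimated by log-log regression. The data and exponents below are synthetic placeholders, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    Q = np.logspace(-1, 2, 40)                          # discharge, m^3/s
    h = 0.3 * Q**0.4 * np.exp(rng.normal(0, 0.05, Q.size))  # toy depth data

    b, log_a = np.polyfit(np.log(Q), np.log(h), 1)      # log-log regression
    print("exponent b =", b, "coefficient a =", np.exp(log_a))
    ```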

  15. Three Dimensional Flow and Pressure Patterns in a Hydrostatic Journal Bearing

    NASA Technical Reports Server (NTRS)

    Braun, M. Jack; Dzodzo, Milorad B.

    1996-01-01

    The flow in a hydrostatic journal bearing (HJB) is described by a mathematical model that uses the three-dimensional non-orthogonal form of the Navier-Stokes equations. Using u, v, w, and p as primary variables, a conservative-formulation, finite-volume multi-block method is applied on a collocated, body-fitted grid. The HJB has four shallow pockets with a depth/length ratio of 0.067. This paper represents a natural extension of the two- and three-dimensional studies undertaken prior to this project.

  16. Sampling free energy surfaces as slices by combining umbrella sampling and metadynamics.

    PubMed

    Awasthi, Shalini; Kapil, Venkat; Nair, Nisanth N

    2016-06-15

    Metadynamics (MTD) is a very powerful technique for sampling high-dimensional free energy landscapes, and due to its self-guiding property, the method has been successful in studying complex reactions and conformational changes. MTD sampling is based on filling free energy basins with biasing potentials, so for cases with flat, broad, and unbound free energy wells the computational time needed to sample them becomes very large. To alleviate this problem, we combine the standard Umbrella Sampling (US) technique with MTD to sample orthogonal collective variables (CVs) simultaneously. Within this scheme, we construct the equilibrium distribution of CVs from biased distributions obtained from independent MTD simulations with umbrella potentials. Reweighting is carried out by a procedure that combines US reweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram Analysis Method (WHAM). The approach is ideal for controlled sampling of a CV in an MTD simulation, making it computationally efficient in sampling flat, broad, and unbound free energy surfaces. This technique also allows for distributed sampling of a high-dimensional free energy surface, further increasing the computational efficiency. We demonstrate the application of this technique in sampling high-dimensional surfaces for various chemical reactions using ab initio and QM/MM hybrid molecular dynamics simulations. Further, to carry out MTD bias reweighting for computing forward reaction barriers in ab initio or QM/MM simulations, we propose a computationally affordable approach that does not require recrossing trajectories.
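
    A minimal WHAM sketch for combining biased histograms from umbrella windows along a single CV into one unbiased free-energy profile is given below; the Tiwary-Parrinello MTD-reweighting layer described above is omitted. Window centers, the spring constant, and the histogram counts are synthetic placeholders.

    ```python
    import numpy as np

    beta = 1.0 / 2.494                       # 1/kT at ~300 K, kJ/mol units
    xi = np.linspace(0.0, 1.0, 200)          # grid along the umbrella CV
    centers = np.linspace(0.1, 0.9, 9)       # umbrella window centers (assumed)
    k = 500.0                                # harmonic bias constant (assumed)
    U = 0.5 * k * (xi[None, :] - centers[:, None]) ** 2   # bias of window i

    # synthetic biased "histograms": counts from a hidden toy free energy V
    V = 10.0 * np.sin(3.0 * np.pi * xi)
    w = np.exp(-beta * (U + V[None, :]))
    hist = 1000.0 * w / w.sum(axis=1, keepdims=True)
    N = hist.sum(axis=1)                     # samples per window

    f = np.zeros(len(centers))               # window free-energy shifts
    for _ in range(1000):                    # WHAM self-consistency loop
        denom = np.sum(N[:, None] * np.exp(-beta * (U - f[:, None])), axis=0)
        P = hist.sum(axis=0) / denom         # unbiased probability on the grid
        f = -np.log(np.sum(P[None, :] * np.exp(-beta * U), axis=1)) / beta
        f -= f[0]                            # fix the arbitrary gauge
    F = -np.log(P) / beta                    # free-energy profile along the CV
    ```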

  17. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  18. High-resolution proxies for wood density variations in Terminalia superba

    PubMed Central

    De Ridder, Maaike; Van den Bulcke, Jan; Vansteenkiste, Dries; Van Loo, Denis; Dierick, Manuel; Masschaele, Bert; De Witte, Yoni; Mannes, David; Lehmann, Eberhard; Beeckman, Hans; Van Hoorebeke, Luc; Van Acker, Joris

    2011-01-01

    Background and Aims Density is a crucial variable in forest and wood science and is evaluated by a multitude of methods. Direct gravimetric methods are mostly destructive and time-consuming. Therefore, faster and semi- to non-destructive indirect methods have been developed. Methods Profiles of wood density variations with a resolution of approx. 50 µm were derived from one-dimensional resistance drillings, two-dimensional neutron scans, and three-dimensional neutron and X-ray scans. All methods were applied on Terminalia superba Engl. & Diels, an African pioneer species which sometimes exhibits a brown heart (limba noir). Key Results The use of X-ray tomography combined with a reference material permitted direct estimates of wood density. These X-ray-derived densities overestimated gravimetrically determined densities non-significantly and showed high correlation (linear regression, R2 = 0.995). When comparing X-ray densities with the attenuation coefficients of neutron scans and the amplitude of drilling resistance, a significant linear relation was found with the neutron attenuation coefficient (R2 = 0.986) yet a weak relation with drilling resistance (R2 = 0.243). When density patterns are compared, all three methods are capable of revealing the same trends. Differences are mainly due to the orientation of tree rings and the different characteristics of the indirect methods. Conclusions High-resolution X-ray computed tomography is a promising technique for research on wood cores and will be explored further on other temperate and tropical species. Further study on limba noir is necessary to reveal the causes of density variations and to determine how resistance drillings can be further refined. PMID:21131386

  19. Protein folding: complex potential for the driving force in a two-dimensional space of collective variables.

    PubMed

    Chekmarev, Sergei F

    2013-10-14

    Using the Helmholtz decomposition of the vector field of folding fluxes in a two-dimensional space of collective variables, a potential of the driving force for protein folding is introduced. The potential has two components. One component is responsible for the source and sink of the folding flows, which represent, respectively, the unfolded states and the native state of the protein; the other, which accounts for the flow vorticity inherently generated at the periphery of the flow field, is responsible for the canalization of the flow between the source and sink. The theoretical consideration is illustrated by calculations for a model β-hairpin protein.
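
    Schematically, in the 2D space of collective variables (x_1, x_2) the decomposition reads as follows; the sign conventions here are illustrative, not necessarily those of the paper:

    ```latex
    % folding-flux field j split into a curl-free part from a scalar potential g
    % and a divergence-free part from a stream function psi
    \begin{align}
      \mathbf{j}(x_1,x_2) &= -\nabla g + \nabla\times(\psi\,\hat{\mathbf{z}}), \\
      \nabla^2 g          &= -\nabla\cdot\mathbf{j}
         && \text{(sources/sinks: unfolded states and native state)}, \\
      \nabla^2 \psi       &= -\big(\nabla\times\mathbf{j}\big)_z
         && \text{(vorticity: canalization of the flow)}.
    \end{align}
    ```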

  20. Radiograph and passive data analysis using mixed variable optimization

    DOEpatents

    Temple, Brian A.; Armstrong, Jerawan C.; Buescher, Kevin L.; Favorite, Jeffrey A.

    2015-06-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography analysis. For example, certain embodiments perform radiographic analysis using mixed variable computation techniques. One exemplary system comprises a radiation source, a two-dimensional detector for detecting radiation transmitted through an object between the radiation source and detector, and a computer. In this embodiment, the computer is configured to input the radiographic image data from the two-dimensional detector and to determine one or more materials that form the object by using an iterative analysis technique that selects the one or more materials from hierarchically arranged solution spaces of discrete material possibilities and selects the layer interfaces by optimization over the continuous interface data.
